
Posts Tagged ‘Depth’

3 Elements of Getting Shallow Depth of Field Images

01 Jun

Depth of field is one of the concepts beginners often struggle to understand. So I found a couple of videos on YouTube to help you out.

Depth of Field Basics

In this first video from B&H Photo, Kelly Mena explains the three elements that affect depth of field and how they work. The three elements are:

  1. The aperture
  2. Focal length of your lens
  3. The subject-to-camera distance
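The three elements above can also be put into numbers. As a quick illustration (not from the video), here is a sketch of the standard thin-lens depth-of-field formulas in Python; the 0.03 mm circle of confusion is a full-frame assumption, so pick a smaller value for smaller sensors:

```python
def depth_of_field(focal_mm, f_number, distance_mm, coc_mm=0.03):
    """Near/far limits of acceptable sharpness, in mm (thin-lens model)."""
    h = focal_mm ** 2 / (f_number * coc_mm)  # hyperfocal distance minus focal length
    x = distance_mm - focal_mm
    near = distance_mm * h / (h + x)
    # Beyond the hyperfocal distance, everything to infinity is acceptably sharp
    far = distance_mm * h / (h - x) if x < h else float("inf")
    return near, far

# Wider aperture -> shallower depth of field (same lens, same distance)
near1, far1 = depth_of_field(85, 1.8, 2000)  # 85mm at f/1.8, subject at 2 m
near2, far2 = depth_of_field(85, 16, 2000)   # same shot stopped down to f/16
```

Plugging in the other two elements works the same way: a longer focal length or a shorter subject distance shrinks the near/far spread.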

Okay, that explains things really well. Let’s look at another video example showing the same three elements.

Depth of Field the Easy Way

Next up is this video by photographer Ray Scott. He gives some great image examples showing both ends of the spectrum, from shallow to deep depth of field, and how to combine the three elements to best control your background and foreground focus.

If you have had trouble understanding how to get a shallow depth of field, I hope this has helped you get a better handle on it.

The post 3 Elements of Getting Shallow Depth of Field Images appeared first on Digital Photography School.


Digital Photography School

 

Posted in Photography

 

Apollo app for iOS uses dual-cam depth map to create impressive lighting tricks

25 May
Apple’s dual-camera setup can create a depth map to simulate background blur – but now, someone’s figured out how to simulate lighting effects with an impressive level of control.

Apple’s dual camera devices (the 7 Plus, 8 Plus and X to be precise) generate a depth map to create the effects of Portrait Mode and Portrait Lighting that we’ve all come to know well. Whether you love, hate or feel generally ‘meh’ toward fake background blur, things get interesting when Apple makes that depth map information available to third party app developers. Enter Apollo: Immersive illumination, a $1.99 iOS app with an unusual name and a few interesting tricks up its sleeve.

Apollo uses the depth map not for background-blurring purposes, but to allow users to add realistic lighting effects to photos after they’re taken. Up to 20 light sources can be positioned throughout an image, with the ability to adjust intensity, color and distance. With the depth information provided, light sources interact with subjects in a three-dimensional fashion, and can even be positioned behind a subject to create a rim light.

It’s essentially an interactive version of Apple’s Portrait Lighting, which applies different light style effects to images. Apollo’s effects are highly customizable, and with so many parameters to play with it’s naturally quite a bit more complicated to use than Apple’s very simple lighting modes.

In use

We’ve been messing around with the Apollo app (for an admittedly short period of time), and have to say we’re impressed with what it’s capable of – but that doesn’t mean we don’t have a few requests for the next version.


Click through to see the images full-screen and see how many lights were used in the Apollo app.

It’s hard not to be a little taken aback the first time you drag a light source around your image and see how it interacts with your subject(s). You are able to adjust the color, brightness and spread of your source, which are all fairly self descriptive.

You can also change the ‘Distance’ of your light, or its position in Z-space; this means you can move the light closer to you, the photographer, or further away into the background of your scene.

Lastly, there are two global adjustments, ‘Shadows’ and ‘Effect Range.’ Shadows essentially controls overall image brightness, though it biases toward the darker tones. Effect Range adjusts the brightness of all of your lights simultaneously in the image, though keeping the brightness ratios between them constant as it does so.

Along the bottom are the parameters available for each light source you create (up to 20).

Overall, it’s an incredibly neat – and kind of addictive – first effort. But there are a few things that we’d like to see addressed in future versions.

Currently, every new ‘light’ you create starts out with a certain set of default parameters. This is alright, except for the fact that the default color is a yellowy tungsten sort of thing; it should really just begin as ‘white.’

Also, if I’ve already fine-tuned a ‘light’ and just want another one like it, it’d be nice to be able to duplicate an existing light instead of starting from scratch each time.

And once you’ve finished with your new creation, you can save it out as a JPEG – but there’s no way to save the lights themselves so that you can come back and tweak later. Each time you exit to tackle another image, the app asks you, ‘Close photo and discard all changes?’ Well, I’d rather not discard them, but if I have to, then I suppose that’s that.

Lastly, it doesn’t look like there’s any way to preserve the blurriness of the background once you’ve added your lights. It’d be great to be able to still take advantage of the depth map and progressive blurring while adding in your own lighting sources.

Wrapping up

Okay, so those are some fairly major requests on our part. But we make them because we’re really blown away by what the app already offers, and are excited to see how it evolves. It wasn’t so long ago you’d need a powerful workstation and some serious software skills to manipulate lighting in the same way that this app does with a few taps and drags.

If you have a dual camera iPhone and want to give the Apollo app a try, head on over to the App Store yourself and take it for a spin.

Articles: Digital Photography Review (dpreview.com)

 

Posted in Uncategorized

 

How to Make Fake Shallow Depth of Field Using Photoshop

11 Apr

Do you see your photo and wish the subject stood out a bit more? Does your photo look somewhat flat? Or maybe the background has people or objects that are unappealing? All of these can be fixed with one simple thing: a shallow depth of field.

In this tutorial, you will learn how to achieve this in post-production using Photoshop.

Fake Depth of field tutorial

Let’s start by clarifying that depth of field is the area of your photograph that is in focus; it’s also called the focus range. There are three factors that affect your depth of field.

First is the aperture, or f-number. The smaller the number, the shallower the depth of field, and vice versa; f/5.6 will give a shallower depth of field than f/22. The second and third factors are closely linked: the focal length and the distance to the subject.

If you use a telephoto lens, and can therefore stand farther from the subject, you’ll have a shallower depth of field than if you stand closer with a wide-angle lens. You can learn more about this relationship and its effect on depth of field in my previous tutorial, How to Use Still-life Subjects to Understand Focal Lengths.

However, if you didn’t manage to set these things up when you were shooting, or you still want more background blur, you can also fake the effect of a shallow depth of field in post-production. Here are two techniques to do it using Photoshop.

Technique #1 – When the subject and the background are separated

Before Focus Range Fake Depth of Field Tutorial

Before image example.

With your image already opened in Photoshop, start by duplicating the layer by going to Menu > Layer > Duplicate Layer, then make the canvas bigger. You can do this by going to Menu > Image > Canvas Size.

The size and direction of the canvas extension don’t matter because you’ll be cropping it later. However, it’s important that there’s enough room for your main subject to be dragged into it in the next step.

Duplicate Layer Canvas Size Blur Background Tutorial

Then select your subject. It doesn’t have to be precise so you can simply use the Lasso tool and draw a selection around it. Now change to the Content-Aware Move tool which you’ll find hidden behind the Healing Brush on the tools panel. Next, drag your selection out of the image into the empty canvas size that you created before.

Content Aware Move Tool Blur Background Tutorial

Once you drag it out, Photoshop’s algorithm will fill the space you leave empty with information from the surrounding area. If you skip this step and blur the background with the subject still in it, the colors will spill out, so it’s important that you do this part.

Drag Content Aware Move Tool Blur Background Tutorial

Now you can crop out the extra background, including the subject you dragged out, and change the canvas back to its original size. Your background is now ready to blur. Go to Menu > Filter > Blur > Field Blur. When the blur is applied, a wheel appears in the center showing how strong the blur is. Adjust it to your liking.

Field Blur Filter Fake DepthofField Tutorial

With this blurred layer still selected, add a layer mask to it by clicking on the button that looks like a rectangle with a circle in the middle at the bottom of the Layers panel. Then paint on the mask with a black brush over the subject you want to keep sharp from the original image.

Layer Mask Fake DepthofField Tutorial

The part that you painted black is now transparent, so the layer beneath it (your original image) will be visible. Finally, just flatten your image and you’re done!

After Focus Range Fake Depth of Field Tutorial
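If you’d rather see the logic of Technique #1 than the Photoshop clicks, here is a tiny stand-in sketch in plain Python (an illustration of blur-then-mask compositing, not the article’s actual workflow): blur a copy of the “image,” then composite the sharp original back in through a mask, which is exactly what painting black on the layer mask over the subject achieves.

```python
def box_blur(img, passes=3):
    """Crude 3x3 box blur on a 2D list of grayscale values (edges untouched)."""
    h, w = len(img), len(img[0])
    for _ in range(passes):
        out = [row[:] for row in img]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                out[y][x] = sum(img[y + dy][x + dx]
                                for dy in (-1, 0, 1)
                                for dx in (-1, 0, 1)) // 9
        img = out
    return img

def composite(blurred, sharp, mask):
    """mask 1 = show the blur (background), mask 0 = keep the original (subject)."""
    return [[blurred[y][x] if mask[y][x] else sharp[y][x]
             for x in range(len(sharp[0]))] for y in range(len(sharp))]

# Tiny stand-in image: a bright "subject" block on a dark background,
# with a mask painted black (0) over the subject, like the layer mask.
sharp = [[200 if 3 <= y <= 6 and 3 <= x <= 6 else 20
          for x in range(10)] for y in range(10)]
mask = [[0 if 3 <= y <= 6 and 3 <= x <= 6 else 1
         for x in range(10)] for y in range(10)]

result = composite(box_blur(sharp), sharp, mask)
```

The subject pixels come straight from the sharp original, while the background pixels pick up blurred values that have smeared across the subject’s edges.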

Technique #2 – When the objects are closer together

The technique you just learned is very useful if your subject is separated from the background, but what happens if you want a shallower depth of field because the objects are closer together? Or because it’s the same subject but you only want a part of it in focus?

In these cases, you need to create an effect that is graduated (fades from one end to the other). To do this, here is another technique.

Before Shallow Depth of Field Tutorial

First of all, you need to duplicate the layer by going to Menu > Layer > Duplicate Layer like you did in the previous example, or use the shortcut of dragging the background layer onto the New Layer button at the bottom of the Layers panel (or hit Ctrl/Cmd+J).

Then apply a layer mask to the new layer by clicking on the mask icon. Inside the mask, use the Gradient tool to mark where you want the sharp areas. In this case, I used a radial gradient, but you can use a linear one or whichever is best for your image. I turned off the background layer so you can see what I mean.

Grading Layer Mask Fake DepthofField Tutorial

Now go to Menu > Filter > Blur > Lens Blur and a new window will pop up. Here you’ll see your image with the filter applied and a panel of adjustments on the right side.

Lens Blur Filter Fake Depth of Field Tutorial

It’s important that you set Layer Mask as the source; that way, the gradient selection you made before determines how the filter is applied.

Once you do that, the Blur Focal Distance slider will be enabled and you can adjust it to your liking. I also adjusted the Radius and Blade Curvature, but you should play with all the settings to get a feel for their effects until you’re satisfied.

Finishing up

Hit OK to apply and flatten the image to finalize the result. That’s it!
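The graduated idea behind Technique #2 can be sketched in a few lines of plain Python (an illustration of the concept, not what Lens Blur literally does internally): blend the sharp and blurred versions by a per-pixel gradient amount, just as the filter reads your gradient from the layer mask.

```python
def graded_composite(blurred, sharp, grad):
    """grad in [0, 1]: 0 keeps the sharp pixel, 1 the blurred one, between = fade."""
    h, w = len(sharp), len(sharp[0])
    return [[round(grad[y][x] * blurred[y][x] + (1 - grad[y][x]) * sharp[y][x])
             for x in range(w)] for y in range(h)]

width = 8
sharp = [[100] * width]                           # one-row stand-in "image"
blurred = [[180] * width]                         # its pretend-blurred copy
grad = [[x / (width - 1) for x in range(width)]]  # left-to-right fade, 0 -> 1

result = graded_composite(blurred, sharp, grad)
# left edge stays sharp, right edge is fully blurred, with a smooth ramp between
```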

Remember that every image will need different treatment to look realistic because many things determine depth of field, so keep experimenting, and show us your results in the comments section.

The post How to Make Fake Shallow Depth of Field Using Photoshop appeared first on Digital Photography School.


Digital Photography School

 

Posted in Photography

 

Halide update adds ‘blazing fast portrait mode,’ depth maps and more to the iOS app

10 Mar

Halide—the feature-rich third-party camera app for the iPhone—just released version 1.7 which adds support for the dual-camera setups of the iPhones 7 Plus, 8 Plus, and X, using the two lenses to “see” in three dimensions.

When shooting a photo, you can now apply a background-blurring portrait effect or darken the background, similar to Apple’s ‘Portrait Lighting’ effect. But this isn’t just Apple’s portrait mode pasted into Halide; the app allegedly does it better:

In an App Store first, Halide’s Portrait mode uses a combination of smart facial detection and point-of-interest detection to allow Portrait mode with zero waiting; users can snap a shot at any time to get beautiful background blur effects on a subject.

Additionally, the app is capable of storing the actual depth map as a separate .png file for later fine-tuning of the results in an image processor, and a new ‘Augmented Reality Depth Photo Viewer’ allows you to “place Depth-Enabled captures like images shot with Portrait Mode in AR.”

Once placed into 3D space, you can walk around and through the captured scene and ‘explore’ your depth map. It’s gimmicky… but actually really cool.

Halide 1.7 is already available to purchase on iTunes for $3. To learn more about the app’s new depth mapping feature set, head over to the Halide blog. And if you’re curious about Halide in general, you can read our hands-on of the app’s launch version here.

Articles: Digital Photography Review (dpreview.com)

 

Posted in Uncategorized

 

Light.co launches ‘Depth Collective’ to support photojournalism with the L16 camera

03 Mar

Light.co, the company behind the innovative (if still in its infancy) Light L16 camera, has announced a new initiative called Depth Collective that aims to support photojournalists in their efforts at “pursuing the truth.” The initiative revolves around the L16 camera itself, which Light.co presents as an inconspicuous alternative to DSLRs for photojournalists who don’t want to be noticed.

“In the past few years,” the company said, “we’ve seen some photojournalists swap their DSLRs for iPhones to stay inconspicuous in their reporting—but they sacrifice quality to do so.” The L16 is a better option, says the company, thanks to its 16 individual camera modules, computational approach to photography, and 52MP max resolution.

Depth Collective members receive multiple perks, including a $500 discount on the L16 camera, early previews of new L16 updates and features, a shot at a biannual $5,000 reporting grant, plus a free Peak Design pouch and wrist strap.

Any visual artist or photojournalist can apply for Depth Collective membership, but they must have a UK or US address to which the L16 camera can be shipped—shipping elsewhere will start “soon,” but a specific date hasn’t been provided. Applicants must provide a link to their website or portfolio, as well as a brief statement about how the L16 camera will help them with their photojournalism. A full Depth Collective FAQ is available here.

Articles: Digital Photography Review (dpreview.com)

 

Posted in Uncategorized

 

Leica unveils Noctilux-M 75mm F1.25 ASPH lens with ‘hair-thin depth of focus’

29 Nov


Leica unveiled a new low-light monster of a lens today, adding to the ‘Noctilux legacy’ with the Leica Noctilux-M 75mm F1.25 ASPH. According to Leica, the new lens boasts ‘impeccable speed’ and ‘exceptional imaging performance’ as well as “hair-thin depth of focus [that] isolates subjects with extreme precision.”

This is the fourth Noctilux lens ever created and only the second released this century. It follows in the footsteps of the Noctilux-M 50mm F0.95 ASPH released in 2008. But while Leica is calling it the “co-founder of a new family of lenses,” the company is also quick to point out that the new Noctilux-M 75mm F1.25 boasts some advantages over its older brother:

The upgraded features of the Noctilux-M 75 mm f/1.25 ASPH. open up entirely new opportunities in portrait and close-up photography, including a shallower depth of focus than that of the Noctilux-M 50 mm f/0.95 ASPH. and a close focusing distance of 0.85m, making for a reproduction ratio of 1:8.8 for even more precise isolation of subjects. Additionally, the eleven blades of its iris ensure a soft and harmonious bokeh in out-of-focus areas.

Inside, you’ll find six groups made up of nine lens elements that have been manufactured from glasses with “high anomalous partial dispersion and low chromatic dispersion.” Two of those elements are aspherical, and the lens uses a floating element with what Leica describes as a “complex focusing mechanism” (aren’t they all?) that promises high-quality performance all the way from minimum focus distance to infinity.

You can read more about the Noctilux-M 75mm F1.25 in the full press release and tech specs below, but if you like what you read, be ready to drop some serious cash. According to Leica, the lens will retail for $12,795 when it shows up at Leica stores, boutiques and dealers at the beginning of 2018.

Press Release

Leica Camera Pushes Photographic Boundaries With the New Leica Noctilux-M 75 mm f/1.25 ASPH Lens

True to the Noctilux legacy, the new lens boasts impeccable speed and exceptional imaging performance

November 29, 2017 – For more than 50 years, the name ‘Noctilux’ has been synonymous with exceptional speed and outstanding optical design. Today, Leica Camera announces the newest addition to their lens portfolio – the Leica Noctilux-M 75 mm f/1.25 ASPH. Coupled with exceptional imaging performance and unique bokeh, its hair-thin depth of focus isolates subjects with extreme precision, ideal for portraits with an unmistakable “Leica look”.

A legacy of excellence

The first lens of the Noctilux family, the Leica Noctilux 50 mm f/1.2, was announced at photokina in 1966. While the original lens innovated with revolutionary optical properties, ongoing developments led to the launch of two additional generations of the Noctilux in 1975 and 2008. The additional lenses were developed under the premise of further pushing the envelope for imaging performance, each with a faster aperture than its predecessor. All Noctilux-M lenses to this day are special for their rendering and aesthetic when shot wide open, yielding a three-dimensional “pop” that separates subjects from the background like no other lens. The out-of-focus areas behind the subject are smooth and pleasing to the eye, giving a lovely soft background even in the darkest of lighting scenarios.

Together with the Leica Noctilux-M 50 mm f/0.95 ASPH., the Leica Noctilux-M 75 mm f/1.25 ASPH. is the co-founder of a new family of lenses. The two current members of this family are both distinguished by their extreme maximum aperture and exceptionally high performance at all apertures, even wide open, and lend themselves to the creation of timeless images marked by a distinctive and revered Leica aesthetic.

Superior imaging performance

The upgraded features of the Noctilux-M 75 mm f/1.25 ASPH. open up entirely new opportunities in portrait and close-up photography, including a shallower depth of focus than that of the Noctilux-M 50 mm f/0.95 ASPH. and a close focusing distance of 0.85m, making for a reproduction ratio of 1:8.8 for even more precise isolation of subjects. Additionally, the eleven blades of its iris ensure a soft and harmonious bokeh in out-of-focus areas.

To guarantee this extraordinary imaging performance, the nine elements in six groups that make up its optical design are manufactured from glasses with high anomalous partial dispersion and low chromatic dispersion. Two of the elements are aspherical, and reduce other potential aberrations to a hardly detectable minimum. The use of a floating element within the complex focusing mechanism guarantees a constantly high level of imaging performance throughout the entire focusing range of the lens – from its minimum focus distance to infinity.

When shooting at maximum aperture, the exceptionally shallow depth of field of the Noctilux-M 75 mm f/1.25 ASPH. can be easily focused when using an electronic viewfinder such as the Leica Visoflex. Additionally, the Leica M-Adapter L transforms the Noctilux-M into an excellent lens to use in conjunction with the Leica SL. When the lens is mounted on the Leica SL, the 4.4 megapixel resolution of the camera’s EyeRes® electronic viewfinder enables particularly comfortable and extremely precise focusing.

The Noctilux-M 75mm f/1.25 ASPH. features the convenience of an integrated lens hood, which can be extended or retracted in one simple twist. The lens is complemented by a tripod adapter for safe and secure mounting of the lens on a tripod.

The Leica Noctilux-M 75 mm f/1.25 ASPH will be available at Leica Stores, Boutiques and Dealers at the beginning of 2018.

Technical Data

Angle of view (diagonal, horizontal, vertical):
For 35 mm format (24 x 36 mm): ~ 32°, 27°, 18°
For Leica M8 models (18 x 27 mm): ~ 24°, 20°, 14°, equivalent to a focal length of ~ 100 mm in 35 mm format [1]

Optical design:
Number of elements/groups: 9/6
Aspherical surfaces: 2
Position of entrance pupil (at infinity): 26.9 mm (in front of the bayonet)

Focusing:
Working range: 0.85 m to infinity
Scales: Combined metre/feet graduation
Smallest object field / largest reproduction ratio: For 35 mm format ~ 212 x 318 mm / 1:8.8; for Leica M8 models ~ 159 x 238 mm / 1:8.8

Aperture:
Settings/functions: With click stops, half-stop detents
Smallest aperture: 16

Bayonet: Leica M quick-change bayonet with 6-bit bar coding for Leica M digital cameras [2]

Filter mount: Inner thread for E67 screw-mount filters, non-rotating

Lens hood: Integrated, with twist-out function

Viewfinder: Camera viewfinder [3]

Finish: Black anodised

Dimensions and weight:
Length to bayonet flange: ~ 91 mm
Largest diameter: ~ 74 mm
Weight: ~ 1055 g

Compatible cameras: All Leica M-Cameras [3, 4], Leica SL-Cameras with Leica M-Adapter L

[1] The nominal focal lengths of the Leica M-Lenses relate to 35 mm format, i.e. original image frame dimensions of 24 x 36 mm. However, with dimensions of 18 x 27 mm, the sensor of the Leica M8 models is a little smaller, by a factor of 0.75. For this reason, the angle of view of this lens when mounted on a Leica M8 model corresponds to that of a lens with a focal length that is longer by a factor of 1.33 (1.33 = reciprocal of 0.75).

[2] The 6-bit coding on the lens bayonet enables Leica M8 digital models to identify the lens type mounted on the camera. The cameras utilise this information for the optimisation of exposure parameters and image data.

[3] With the exception of the Leica M3 and the former version of the Leica MP (professional version of the M3), all Leica M-Cameras without a 75 mm bright-line frame can be retrofitted with this frame by the Customer Care department of Leica Camera AG (it then appears in the viewfinder together with the frame for 50 mm lenses).

[4] This is independent of the image frame format of the respective camera – whether 18 x 27 mm (sensor size) for the Leica M8 models or 24 x 36 mm for all other Leica M models.

Articles: Digital Photography Review (dpreview.com)

 

Posted in Uncategorized

 

Camera+ 10 brings depth editing and HEIF support

04 Oct

Third-party camera apps are a great way of customizing operation and expanding the feature set of your smartphone camera. However, with mobile imaging technology advancing at lightning speed, app makers are constantly having to catch up with device makers’ latest hardware and software developments.

The makers of Camera+, one of the most popular third-party camera apps for the iPhone, have just done that, releasing version 10 of the app, which brings support for Apple’s new HEIF image format and selective depth editing.

The latter makes use of the dual-camera features on the iPhone 7 Plus and 8 Plus and lets you sharpen, tint and otherwise edit different depth levels in an image that contains depth information.

In addition, there is a new “Smile to shoot” trigger mode and a completely overhauled camera interface to incorporate the new features. Camera+ 10 is available for $2.99 on the Apple App Store.

Articles: Digital Photography Review (dpreview.com)

 

Posted in Uncategorized

 

Video – How to Use Light for Depth and Drama in Your Landscape Photos

06 Sep

If you enjoy landscape photography, here is a short video in which photographer Andrew Marr explores a gorgeous location in Glencoe, Scotland. Learn how he looks for light to add depth and drama to his landscape photos, and see how you can apply these tips to your own photography.

If you want more landscape tips, check out these dPS articles:

  • 6 of the Best Smartphone Apps for Travel and Landscape Photography
  • How a Short Versus Long Exposure Will Affect Your Landscape Images
  • How to Find the Best Locations for Landscape Photography
  • How to Plan and Prepare for Landscape Photography
  • How to Create Glass Ball Landscapes – 6 Techniques

Do you have other landscape tips you’d like to share? Please do so in the comments below.

The post Video – How to Use Light for Depth and Drama in Your Landscape Photos by Darlene Hildebrandt appeared first on Digital Photography School.


Digital Photography School

 

Posted in Photography

 

Raw bit depth is about dynamic range, not the number of colors you get to capture

03 Sep
Shooting this image in 14-bit helped retain the full dynamic range captured by the sensor. Most of the time, with most cameras, 12-bit is enough.

Raw bit depth is often discussed as if more bits automatically mean better image quality, but that’s not really the case. In fact, if your camera doesn’t need the greater bit depth, you’ll just end up using hard drive space to record noise.

In fairness, it does sound as if bit depth is about the subtlety of color you can capture. After all, a 12-bit Raw file can record each pixel brightness with 4096 steps of subtlety, whereas a 14-bit one can capture tonal information with 16,384 levels of precision. But, as it turns out, that’s not really what ends up mattering. Instead, bit depth is primarily about how much of your camera’s captured dynamic range can be retained.

Much of this comes down to one factor: unlike our perception of brightness, Raw files are linear, not logarithmic. Let me explain why this matters.

The human visual system (which includes the brain’s processing of the signals it gets from the eyes) interprets light in a non-linear manner: double the brightness of a light source by, say, turning on a second, identical light, and the perceptual difference isn’t that things have gotten twice as bright. Similarly, we’re much better at distinguishing subtle differences in midtones than vast differences in bright tones. This is part of how we’re able to cope with the high dynamic range of the scenes we encounter.

Digital sensors are different in this respect: double the light and you’ll get double the number of electrons released by the sensor, which results in double the value generated by the analogue-to-digital conversion process.

This diagram shows how the linear response of a digital sensor maps to the number of EV you can potentially capture. Note how the brightest stop of light takes up 1/2 of the available values of your Raw file.

Why does this matter? Because it means that half the values in your Raw file (the values from 2048 to 4095 in a 12-bit Raw file) are devoted to the brightest stop of light you captured. Which, with most typical tone curves, ends up translating to a series of near-indistinguishably bright tones in the final image. The next stop of light takes up the next 1024 values, and the third stop is recorded with the next 512, taking half of the remaining values each time.

In a typical out-of-camera JPEG rendering, the first ~3.5EV are captured above middle grey, and the first three of these stops of highlights have used up 7/8ths of your available Raw values. The remaining Raw values are used to capture tones from just above middle grey all the way down to black.

Using the D750’s default JPEG tone curve as an example, you can see that around 3.5EV of the camera’s dynamic range is used for tones above middle grey. Half the Raw values are used to capture the tones that end up as JPEG values of roughly 240 upwards, and more than 7/8ths of the available values go to tones above middle grey.

Follow this logic onwards and you’ll see that the difference between 12 and 14-bit Raw has little to do with subtle transitions (after all, even in the example I describe, the tones around middle brightness would be encoded using 256 levels: the same number of steps used for the entire dynamic range of the image if saved as a JPEG or viewed on most 8-bit monitors). Instead, it has much more to do with having enough Raw values left to encode shadow detail.

By the time you’ve created a JPEG, the brightest stop of your image is likely to be made up from the tones in this image. Half of your Raw file was used for storing just these near-white tones.

Since every additional ‘bit’ of data doubles the number of available Raw values, but the brightest stop of light takes up half of them, each extra bit increases the capacity of your Raw file by 1EV. Which, assuming neither you nor your camera’s exposure calibration is completely mad, ends up meaning an extra stop in the shadows.

A 14-bit Raw file won’t generally give extra highlight capture; it’ll mean having sufficient Raw numbers left to capture detail in the shadows. And if your camera is swamped by noise before you get to 14EV (most are), all this extra data will effectively be used to record shadow noise.

In other words, 12 bits provide enough room to encode roughly 12 stops of dynamic range, while 14 bits give the extra space to capture up to around 14EV. Or to look at it from the opposite perspective: if your camera is overwhelmed by noise before you get to 12 stops of DR, you don’t benefit from more bit depth: all you’d be doing is capturing the shadow noise in your image in greater detail.

Bit depth in video

It’s a similar story in video. Because video capture is so data intensive, it’s not usually practical to save all the captured data, which typically means crushing everything down to just 8 or 10 bits.

Log gamma is a way of taking the linear data captured by the sensor and reformatting it so that each stop of captured light is given the same amount of values in the smaller file. This makes more sensible use of the file space and retains as much processing flexibility as possible.
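As an illustration of that idea (a toy curve, not any real camera’s log gamma): map the linear values so that each stop of light receives an equal share of the output codes.

```python
import math

# Toy log encoder: squeeze 14 stops of linear 14-bit data into
# 10-bit output codes, giving every stop an equal share of codes.
def log_encode(linear, max_linear=2 ** 14, out_codes=2 ** 10, stops=14):
    x = max(linear, 1) / max_linear       # normalize to (0, 1]
    ev_below_clip = -math.log2(x)         # 0 EV at clipping, 'stops' EV at the floor
    frac = max(0.0, 1 - ev_below_clip / stops)
    return round(frac * (out_codes - 1))

# Linear values one stop apart map to equally spaced output codes
codes = [log_encode(2 ** 14 >> s) for s in range(5)]
```

Contrast this with the linear Raw encoding above, where those same five values would consume half, a quarter, an eighth (and so on) of the whole file.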

And, even if you own, say, a Sony a7S (one of the few cameras we’ve encountered that has sufficiently large/clean pixels that it doesn’t have enough bit depth to capture its full dynamic range at base ISO), you need to remember that you only get the camera’s full DR at base ISO. As soon as you increase the ISO setting, you’ll amplify the brightest stop of captured data beyond clipping, such that you very quickly get to the stage where you’re losing 1EV of DR for every 1EV increase in ISO.

If your camera doesn’t capture more than 12 stops of DR, you probably shouldn’t clamor for 14-bit Raw

So, even though you started with a camera whose DR outstrips its bit depth, that stops being true as soon as you hike up the ISO: instead you just go back to encoding shadow noise with tremendous precision.

Consequently, if your camera doesn’t capture more than 12 stops of DR, you probably shouldn’t clamor for 14-bit Raw: it’s not going to increase the subtlety of gradation in your final images (especially not if you’re viewing them as 8-bit). All those extra bits would do is increase the amount of storage you’re using by around 16% with all of that space being devoted to an archive of noise.
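The ~16% figure is just the ratio of the extra bits, and the ISO trade-off follows the same simple arithmetic. A quick sanity check, using a hypothetical 12-stop camera and ignoring compression (which would change the exact storage numbers):

```python
# 14-bit samples vs 12-bit samples: two extra bits per sample.
extra = (14 - 12) / 12
print(f"Extra storage per sample: {extra:.1%}")  # 16.7%

# Once the sensor's DR fits within the bit depth, each 1EV increase
# in ISO clips one more stop off the top of the captured range.
base_dr = 12  # stops at base ISO (hypothetical camera)
for ev_gain in range(4):
    print(f"ISO +{ev_gain}EV -> usable DR ~ {base_dr - ev_gain} stops")
```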


Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Raw bit depth is about dynamic range, not the number of colors you get to capture

Posted in Uncategorized

 

New Qualcomm Spectra system brings 3D depth sensing to Android devices

16 Aug

Qualcomm launched the Clear Sight dual-camera system, powered by its Spectra ISP and using a combination of RGB and monochrome image sensors, in 2016. Today the company announced the second-generation Spectra Module, which introduces 3D computer vision to Qualcomm-powered Android devices.

The Qualcomm system is a dual-camera setup that is capable of sensing depth and motion in real time. In a smartphone’s rear camera, this technology could be used in apps to track motion and measure subject distance, which could ostensibly help improve simulated shallow depth-of-field effects.
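To illustrate the idea (a toy sketch, not Qualcomm’s actual pipeline; `depth_aware_blur` and its parameters are invented for this example): given a per-pixel depth map, a shallow depth-of-field effect can be simulated by growing the blur radius with each pixel’s distance from the plane of focus.

```python
import numpy as np

def depth_aware_blur(image, depth, focus_depth, max_radius=4):
    """Toy depth-based background blur: pixels far from the
    focal plane get a larger box blur; in-focus pixels stay sharp."""
    h, w = depth.shape
    out = image.astype(float).copy()
    for y in range(h):
        for x in range(w):
            # Blur radius grows with distance from the plane of focus
            r = int(round(abs(depth[y, x] - focus_depth) * max_radius))
            if r == 0:
                continue
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()
    return out

# A 1x4 "image": the subject (depth 0.0) stays sharp,
# while the background (depth 1.0) is averaged with its neighbors.
img = np.array([[100.0, 0.0, 100.0, 0.0]])
depth = np.array([[0.0, 0.0, 1.0, 1.0]])
blurred = depth_aware_blur(img, depth, focus_depth=0.0, max_radius=1)
```

The accuracy of the effect hinges entirely on the quality of the depth map, which is why real-time, dense depth generation is the interesting part of Qualcomm’s announcement.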

In a front-facing camera, the Qualcomm system could help improve biometric security through iris scanning or 3D facial recognition. One of the technology’s advantages is that it can capture and process image data in real time using off-the-shelf parts, unlike Google’s Tango project, which relies on specialized hardware.

Qualcomm hasn’t announced any manufacturing partners yet, but given the popularity of the Qualcomm platform it’s arguably only a matter of time before we see the technology pop up in the first Android devices.

Press Release

Qualcomm First to Announce Depth-Sensing Camera Technology Designed For Android Ecosystem

— Unveils next-generation Qualcomm Spectra ISP for computer vision, extended reality and computational photography technologies—

SAN DIEGO — August 15, 2017 — Today Qualcomm Incorporated (NASDAQ:QCOM), through its subsidiary, Qualcomm Technologies Inc., announced an expansion to the Qualcomm® Spectra™ Module Program, capable of improved biometric authentication and high-resolution depth sensing, designed to meet growing demands of photo and video for a broad range of mobile devices and head mounted displays (HMD). This module program is built on the cutting-edge technology behind the Qualcomm® Spectra™ embedded image signal processors (ISP) family. Engineered by Qualcomm Technologies from the ground up, Qualcomm Spectra paves the way for future image quality and computer vision innovations in upcoming Qualcomm® Snapdragon™ Mobile Platforms.

“Whether used for computational photography, video recording, or for computer vision applications that require accurate motion tracking, it’s clear that power efficient camera image signal processing has become more important for the next generation of mobile user experiences,” said Tim Leland, vice president of product management, Qualcomm Technologies, Inc. “Our breakthrough advancements in visual quality and computer vision, combined with our family of integrated Spectra ISPs for Snapdragon, are designed to support an ecosystem of cutting edge mobile applications for our customers.”

Together, the new ISPs and camera modules are engineered to support superior image quality and new computer vision use cases that utilize deep learning techniques and bokeh quality image experiences with a faster time to market for smartphone and HMD devices. The next-generation ISPs feature a new camera architecture designed for advancements in computer vision, image quality and power efficiency for the next Snapdragon mobile and VR platforms. The camera module program additions consist of a trio of camera modules, including an iris authentication module, a passive depth sensing module and an active depth sensing module.

Qualcomm Spectra Module Program

Launched last year, the Qualcomm Spectra Module Program was designed to help customers accelerate time to market for devices with stunning image quality and advanced camera technology. Last year’s offerings provided customers with optimized, dual-camera module solutions that make it easy for manufacturers to produce smartphone cameras with improved low light photography and video recording with smooth zoom. Now, the camera module program is being expanded to include new camera modules capable of utilizing active sensing for superior biometric authentication, and structured light for a variety of computer vision applications that require real-time, dense depth map generation and segmentation.

Second-generation Qualcomm Spectra ISP

The second-generation Qualcomm Spectra ISP is the next family of integrated ISPs that utilizes new hardware and software architecture designed specifically for advancements in computer vision, image quality, and power efficiency in future Snapdragon platforms. It features multiframe noise reduction for superior photographic quality, along with hardware-accelerated motion compensated temporal filtering (MCTF), and inline electronic image stabilization (EIS) for superior camcorder-like video quality.

The low-power, high-performance motion tracking capabilities of the Qualcomm Spectra ISP, in addition to optimized simultaneous localization and mapping (SLAM) algorithms, are designed to support new extended reality (XR) use cases for virtual and augmented reality applications that require SLAM.

The Qualcomm Spectra family of ISPs and new Qualcomm Spectra camera modules are expected to be part of the next flagship Snapdragon Mobile Platform.

About Qualcomm

Qualcomm’s technologies powered the smartphone revolution and connected billions of people. We pioneered 3G and 4G – and now we are leading the way to 5G and a new era of intelligent, connected devices. Our products are revolutionizing industries, including automotive, computing, IoT, healthcare and data center, and are allowing millions of devices to connect with each other in ways never before imagined. Qualcomm Incorporated includes our licensing business, QTL, and the vast majority of our patent portfolio. Qualcomm Technologies, Inc., a subsidiary of Qualcomm Incorporated, operates, along with its subsidiaries, all of our engineering, research and development functions, and all of our products and services businesses, including, our QCT semiconductor business. For more information, visit Qualcomm’s website, OnQ blog, Twitter and Facebook pages.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on New Qualcomm Spectra system brings 3D depth sensing to Android devices

Posted in Uncategorized