Posts Tagged ‘Pixel’

Here’s why I’m not quite ready to let the Pixel 3 replace a dedicated camera

18 Apr
Modern architecture abounds in Palm Springs, mid-century and otherwise.
Olympus Pen F ISO 200 | 1/1600 sec | F6.3 | Olympus M.Zuiko 12mm F2.0

On the topic of “When will smartphones make most dedicated cameras obsolete?” I tend to be in the “We’re pretty much there already” camp. In my own day-to-day photography, and even for some special occasions where I expect to take more than a few photos, I’ll stick with my smartphone rather than bringing along a dedicated camera.

That wasn’t the case on a recent trip to Palm Springs. I shot with both the Pixel 3 and a Micro Four Thirds camera (the Olympus Pen F, specifically). Here’s where each of them shines, and why I’m glad I had a dedicated camera at my side.

My photographic priority in Palm Springs was the city’s veritable smorgasbord of mid-century modern buildings. Banks, hotels, liquor stores – all housed in stunning modern buildings that are extremely Instagrammable. You know you’ve hit the architectural jackpot when you’re excited to photograph the town BevMo!.

Literally the roof of a BevMo! liquor store.
Olympus Pen F ISO 200 | 1/800 sec | F5.6 | Olympus M.Zuiko 12mm F2.0

There are obvious benefits to any smartphone, including of course the Pixel 3: it’s always with you, even by the pool; photos are automatically backed up to your image library; and everything is immediately shareable. But the Pixel 3 offers a few advantages of its own: it handles high-contrast scenes particularly well, and the multi-shot Night Sight mode captures a level of detail well beyond what we’re used to seeing from smartphones, even in the daytime.

The Pixel 3 does a fine job balancing scenes like this one, and its IP68 water-resistance rating means it’s safe poolside.
Google Pixel 3 XL ISO 59 | 28mm equiv. | F1.8

There are some disadvantages though, which figured into my decision to bring along the Olympus Pen F and 12mm lens. First, the Pixel’s main camera wasn’t quite wide enough for the kind of photography I wanted to do. Photographing mid-century modern buildings from the sidewalk along a busy road doesn’t make it easy to just back up to get the whole thing in the shot.

Using panorama mode for a wider shot isn’t a great option either – image quality is pretty poor. This year’s smartphones are addressing this problem with wide-angle lenses, so if Google ever decides to add another rear camera, who knows what will be possible!

Stuff like this is just lying around everywhere in Palm Springs!
Olympus Pen F ISO 200 | 1/1250 sec | F4.5 | Olympus M.Zuiko 12mm F2.0

Editing Pixel 3 Raws isn’t my favorite experience at the moment, either. Editing Pen F files is familiar and comfortable to me, while handling Pixel Raw files seems to be a quirky process in its current state. When I use Camera Raw I start with a very flat, overexposed image, and when I edit Raw photos in Snapseed I encounter a couple of bugs along the way (and don’t love the small-screen edit experience). It’s more than good enough for something I’ll post on social media, but I wanted a little more control with my Palm Springs photos.

I also found myself taking advantage of a few Pen F features that were handy, if not necessarily must-haves. The viewfinder was invaluable under the bright midday sun, and the tilting LCD made it easy to compose shots from higher and lower angles. The digital level was also huge for me, a person with (apparently) a crooked brain who is unable to keep horizons straight.

If every Bank of America looked like this I’d be a member tomorrow.
Olympus Pen F ISO 200 | 1/1250 sec | F4.5 | Olympus M.Zuiko 12mm F2.0

To be sure, there are some third-party workarounds that would have adapted the Pixel 3 to my purposes better. I could have brought a wide-angle attachment lens along and used a camera app with a level. There are trade-offs when using either of these options, though.

I also prefer the anonymity of the Pixel 3. One morning I walked from the center of town a mile and a half to the visitor’s center, a futuristic-looking building that used to be a gas station and is one of the most recognizable structures in town.

Roof of the Tramway Gas Station, currently home of the Palm Springs Visitor’s Center.
Olympus Pen F ISO 200 | 1/1250 sec | F6.3 | Olympus M.Zuiko 12mm F2.0

I was quite conspicuous on this journey for several reasons. For starters, nobody walks a mile to get anywhere in 80°+ heat if they can help it. I’m also incredibly pale and, walking along the roadside under a beaming sun, probably a danger to motorists. I also had a Real Camera in my hand, and on top of that, am a lady.

Being a lady alone in public doing something out of the ordinary is, in my experience, an invitation for commentary, usually of the harmless “What are ya doin’ there with that big ol’ camera little missy??” variety. Well-meaning I’m sure, but my male colleagues don’t quite experience the same interruptions.

Palm Springs: they aren’t kidding about those palms.
Olympus Pen F ISO 200 | 1/1000 sec | F4.5 | Olympus M.Zuiko 12mm F2.0

I wish I’d been shooting with the Pixel when I saw the Photo That Got Away. Traffic in the street was stopped at a red light, and I was walking parallel to a pickup truck towing a camper van with a majestic purple mountain on the side. Behind it was a backdrop of actual majestic mountains. It was perfect, except the driver was staring right at me staring at him.

Maybe I would have gotten away with it shooting with the phone. As it happened, it just felt too conspicuous, almost invasive, to pull the camera up to my eye and take a picture. The light turned green and I thought about that photo through the rest of the trip.

In any case, I made it to the visitor’s center, which is a lovely building, but I actually ended up taking my favorite picture around the back of it. Funny how that happens.

I walked a mile and a half through the desert to take this photo of a bench, I guess.
Olympus Pen F ISO 200 | 1/1250 sec | F6.3 | Olympus M.Zuiko 12mm F2.0

I liked the experience of carrying the Pen F at my side. It put me in a mindset of taking photos that’s harder to get into when I’m using my phone. But I don’t think we’re far from a future where the Pixel 3 satisfies almost all of the photographic needs I had on a trip like that, and there are real benefits to shooting with the Pixel 3 that traditional cameras don’t provide now. The Pixel automatically backed up all of the trip photos I took with it to my Photos library, where they were instantly shareable, searchable and photo-book-printable. The Pen F sure didn’t do any of that.

When I can get 90% of the image quality from a smartphone that I would from a traditional camera, and the experience of using it as a photographic device – from capture through editing – is 90% as good, I’ll be ready to leave the camera at home when I go on a trip like the one I just took. That day probably isn’t far off at all.

Articles: Digital Photography Review (dpreview.com)

 

Choosing a camera Part 1: should I worry about pixel size?

17 Apr

Pixels are the fundamental building blocks of digital photography: they are the individual elements that capture the light that makes up your image. Higher pixel-count cameras promise better resolution, but it’s often said that their smaller pixels result in noisier, less clean images.

So does this mean you should look for fewer, bigger pixels when you buy your next camera?

Probably not. That’s because the idea that small pixels are noisier is only really true when you examine your images at the pixel level. We’ve long passed the point where you only had enough pixels to fill your monitor, and even people making large prints will find that a 24MP camera provides more resolution than is needed for printing at A3 (11.7 x 16.5″).
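The arithmetic is quick to check, assuming the common 300 pixels-per-inch print standard:

```python
# Pixels needed for an A3 print (11.7 x 16.5 in) at 300 ppi:
width_px = 16.5 * 300        # 4950
height_px = 11.7 * 300       # 3510
megapixels = width_px * height_px / 1e6
print(f"{megapixels:.1f} MP")  # ~17.4 MP, comfortably under a 24MP camera's output
```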

Looking at the bigger picture

At that point, taking a more holistic, whole-image-level perspective on picture quality makes at least as much sense as worrying about the noisiness of your individual pixels.

Smaller pixels each receive less light than large ones, so they will always be individually noisier (for most photography, the dominant noise source is photon shot noise, which depends on how much light you sample). But as soon as we have to scale our images to view or print them, this difference becomes much less significant or disappears entirely.

Key takeaways:

  • Larger pixels get more light during any given exposure, so are less noisy when viewed 1:1
  • Combining multiple small pixels cancels out most (or all) of this difference when viewed at the same size
  • For most applications you’ll end up downsizing your images, so there’s usually a resolution advantage but little (if any) downside to having more pixels
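These takeaways are easy to verify with a toy simulation (ours, with made-up photon counts, not part of any camera test): photon arrival is modeled as Poisson-distributed, and four small pixels covering the same area as one large pixel are compared both individually and combined.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200_000
mean_large = 400.0  # photons collected by one large pixel per exposure
mean_small = 100.0  # photons per small pixel (a quarter of the area)

large = rng.poisson(mean_large, n)
small = rng.poisson(mean_small, (n, 4))  # four small pixels per large-pixel area
binned = small.sum(axis=1)               # downscale: combine the four samples

def snr(x):
    """Signal-to-noise ratio; for Poisson-distributed light, SNR = sqrt(mean)."""
    return x.mean() / x.std()

print(f"large pixel, 1:1 view:   SNR ~ {snr(large):.1f}")         # ~20
print(f"small pixel, 1:1 view:   SNR ~ {snr(small.ravel()):.1f}")  # ~10
print(f"small pixels, combined:  SNR ~ {snr(binned):.1f}")         # ~20
```

Viewed 1:1 the small pixels are twice as noisy, but combined to the same output size they match the large pixel – which is the whole-image point made above.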

The effect of pixel size:

The Nikon D850 and the Sony a7S both use relatively modern full-frame sensors, but they have very different pixel counts. Because the sensors are the same size, the individual pixels on the 12MP a7S are much larger than those on the D850, whose sensor is made up of 45.7 million pixels.

The a7S is often described as being great in low light, but this is only true if you pixel peep.

Let’s see how they compare when scaled to the same size:

[Interactive comparison widget: D850 at full size, a7S, and D850 resized to 12MP, compared at ISO 6400, 12800, 25600 and 51200, with Raw files available for download.]

At the pixel level the a7S is much less noisy, as you’d expect given its larger pixels. But at all but the very highest ISO settings, that advantage disappears when you compare the two at the same scale. The difference that remains is that you usually retain some of the additional detail the D850 captured.

We see this same pattern across almost all cameras. The only times we have seen any disadvantage to small pixels is in the very smallest pixels used in smartphones (and those often use multi-shot modes to overcome this) or in sensors that use unconventional technologies.

The thing that’s much more likely to make a difference to your image quality is sensor size. We’ll look at this in the next part of this article series…


Three things I love about the Pixel 3 and one that I don’t

24 Mar

The Google Pixel 3 has been my primary camera – and media consumption device, alarm clock, etc. – for over a month now. It will be no surprise to anyone that I’m finding the camera to be really, really good, but there are a few features in particular that stand out to me as excellent. In no particular order, here’s what I’m liking so far about the Pixel 3’s camera, and one area I’m not as crazy about.

Night Sight

You’ve heard all the hype about how good Night Sight is, and it’s true. Night Sight will allow you to take usable photos in incredibly dim conditions. I think the best compliment I can give Night Sight is that the example image above doesn’t convey just how dark the scene in my shot was. The Mexican restaurant looks pleasantly bright and festive – in reality, it was extremely dim (but still festive).

Night Sight is also a great alternative for low-light selfies when flash is a no-no, if everyone in the shot can stay reasonably still. Pro tip: don’t blink or move your eyes or it’ll make you look a little bit like a zombie. In any case, it’s really nice to have a usable alternative to completely destroying the vibe of a mood-lit bar with a smartphone flash.

Finally, Night Sight is also useful for static subjects in any kind of lighting if you want to capture more detail, thanks to its use of Super Resolution (more on that here). The rendering of *individual fibers* in the blanket in the shot above blows my mind. Getting that level of detail out of such a small sensor is a real technological innovation.

Wide angle selfie

We’re weird, okay?

This was a feature I didn’t expect to use much, but it’s really helpful when you need it. I’ve used it on a couple of occasions when there was something in the background I wanted to get into the photo I was taking.

In both cases I considered the shot that I wanted, thought to myself there was no way that I could get the shot, then remembered the wide-angle front facing camera. Boom. Problem solved.

Portrait Mode

Portrait Mode is, of course, not new, but it’s been further improved in the Pixel 3. Google used machine learning to train the camera to better ‘cut out’ things like human subjects. We find that it does a better job with human hair than the iPhone (you can see how the iPhone does here), creating a more realistic effect rather than something that looks obviously digitally manipulated.

The ability to throw a busy background out of focus – even if the overall effect isn’t 100% convincing – is still better to me than the alternative.

As a side note, my personal smartphone is an “ancient” iPhone SE, which doesn’t offer Portrait Mode. I’ve gotten pretty attached to it while shooting with the Pixel 3, and many of my favorite images taken with the camera are Portrait Mode shots. To me, it feels a little bit like Wi-Fi on traditional cameras. When the feature was introduced it was a little gimmicky and not all that useful, but now that it’s reliable and much improved, it’s becoming something I don’t want to live without.

Muted color rendition

Out of camera JPEG “Auto” edits applied in Google Photos

The thing I’m not as crazy about is more a matter of personal taste – the Pixel 3 tends toward more muted, natural colors. Plenty of people will prefer that, but I’m partial to a little more warmth and punch in my images. Colors are a little flat for my taste, and in some instances (backlit subjects are a big one) auto exposure doesn’t quite get things right.

In spite of this, I think the greatest testament to the Pixel 3 is that I’ve been taking more pictures lately. When I’m out and about and see a photo, I don’t have to talk myself out of taking a picture because I only have my phone with me. More often than not, I’m finding that I *can* get that photo, or something close to what I envisioned.


Blackmagic Pocket Cinema Camera 4K update adds pixel remapping, better battery life, more

06 Feb

Blackmagic Design has released Blackmagic Cameras 6.1, the latest firmware for its Pocket Cinema Camera 4K. The update includes better audio recording, improved battery performance, a new pixel remapping feature and other improvements.

On the audio front, Blackmagic has implemented a new audio processor that ‘analyzes incoming audio from the dual microphones on each side of the camera to dramatically lower the noise floor, resulting in quieter recordings than possible before.’ Audio latency has also been reduced for more accurate syncing with video footage and improved real-time monitoring.

Screenshot of the menu area dedicated to the new pixel remapping feature.

A new in-camera calibration tool has also been added that lets users recalibrate pixels in the camera to fix brightness variations that can occur over time. ‘The new pixel calibration feature allows the camera to realign the light output of each pixel resulting in a smooth clean image under changing environmental conditions,’ says Blackmagic.
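Blackmagic hasn’t published the details of its calibration routine, but the general idea behind pixel remapping can be illustrated with a toy version: flag pixels whose values deviate far from their local neighborhood and patch them from that neighborhood. This is purely an illustrative sketch – the camera’s actual firmware recalibrates per-pixel response rather than median-filtering after the fact.

```python
import numpy as np

def remap_pixels(frame, threshold=8.0):
    """Flag pixels that deviate strongly from their 3x3 neighborhood
    and replace them with the neighborhood median. A toy stand-in for
    in-camera pixel remapping, which calibrates per-pixel response."""
    padded = np.pad(frame.astype(float), 1, mode="edge")
    h, w = frame.shape
    # Stack the 3x3 neighborhood of every pixel: shape (9, h, w).
    nbhd = np.stack([padded[dy:dy + h, dx:dx + w]
                     for dy in range(3) for dx in range(3)])
    med = np.median(nbhd, axis=0)
    mad = np.median(np.abs(nbhd - med), axis=0) + 1e-6  # robust spread estimate
    bad = np.abs(frame - med) > threshold * mad
    out = frame.astype(float)
    out[bad] = med[bad]
    return out, bad

# A flat grey test frame with one hot and one dead pixel:
frame = np.full((8, 8), 100.0)
frame[2, 3] = 255.0
frame[5, 6] = 0.0
fixed, bad = remap_pixels(frame)
print(bad.sum())                 # 2 pixels remapped
print(fixed[2, 3], fixed[5, 6])  # both patched to 100.0
```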

Other features and improvements include more accurate autofocus, a new media formatting interface designed to prevent accidental formats, 2:1 monitoring frame guides, and new power savings and efficiency fixes that Blackmagic claims improve battery life by 10-15%.

Blackmagic Cameras 6.1 is available as a free download for existing Blackmagic Pocket Cinema Camera 4K customers on Blackmagic Design’s website.

Blackmagic Design Announces Blackmagic Cameras 6.1 Update

Major update adds new features for Blackmagic Pocket Cinema Camera 4K including better audio recording and improved battery performance.

Fremont, California, USA – February 4, 2019 – Blackmagic Design today announced Blackmagic Cameras 6.1 which is a new update for the Blackmagic Pocket Cinema Camera 4K. This update adds quieter audio recording, a new pixel remapping feature, new 2:1 monitoring frame guides, improved battery performance and much more.

Blackmagic Camera 6.1 update is available now as a free download from the Blackmagic Design website.

The new Blackmagic Cameras 6.1 significantly improves audio recording when using the built in microphones on the Blackmagic Pocket Cinema Camera 4K. The new processing now analyzes incoming audio from the dual microphones on each side of the camera to dramatically lower the noise floor, resulting in quieter recordings than possible before. In addition, latency has been reduced for audio monitoring, audio and video synchronization has been fine tuned, and the 3.5mm audio input selection interface is now more intuitive, making it faster to use.

Blackmagic Cameras 6.1 also improves auto focus performance. Auto focus now responds quicker and more accurately so that lens hunting is greatly reduced when the camera is locking on to the focal point. There’s also a new media formatting interface that helps prevent customers from accidentally formatting media cards. Once the camera is updated, customers will need to tap and hold the media format confirmation button for 3 seconds before a card will be erased and reformatted.

This update also includes a new in-camera calibration feature which allows customers to recalibrate pixels in the camera. Over time some pixels can change in brightness and create small variations across the sensor. The new pixel calibration feature allows the camera to realign the light output of each pixel resulting in a smooth clean image under changing environmental conditions. Blackmagic Camera 6.1 update also features 2:1 monitoring frame guides, which is another creative composition tool for filmmakers to frame shots. In addition, new power savings and efficiency enhancements improve battery runtime by 10-15% and give customers a more accurate indication of remaining battery power.

“The Blackmagic Pocket Cinema Camera 4K is an incredible success and it’s been very exciting watching the adoption of digital film workflows by a much wider range of people,” said Grant Petty, CEO, Blackmagic Design. “This update is exciting because it adds even more great new features to the camera and it’s an exciting way for us to say thank you to all the people who have purchased a Blackmagic Pocket Cinema Camera 4K and who have taken the time to discuss ideas for the future with us. We can’t wait to see what customers will produce next!”

Blackmagic Cameras 6.1 Update Key Features

  • Adds support for pixel calibration in camera
  • Improves auto focus performance
  • Improves signal to noise ratio performance of the camera’s internal microphone
  • Improves power efficiency for improved battery life
  • Adds 2:1 monitoring frame guide
  • Improves media formatting user interface
  • Improves audio monitoring latency performance
  • Improves 3.5mm audio input selection interface
  • Improves AV sync performance

Blackmagic Cameras 6.1 is available as a free download for all existing Blackmagic Pocket Cinema Camera 4K customers from www.blackmagicdesign.com/support.


Some Google Pixel 3 devices hampered by camera bug

21 Nov

Google’s latest Pixel 3 devices are widely regarded as excellent camera smartphones, but some users are now reporting a serious camera bug, according to a report from Owen Williams of Charged.

For affected Pixel owners, the camera works fine when operated through the default camera app. However, if a third-party app attempts to access the imaging hardware, the camera becomes unusable and generates one of several error messages, such as “could not connect to camera,” “camera encountered fatal error,” or “the camera device encountered a fatal error.”

This means users of affected devices are unable to use third-party apps that access the camera, such as Instagram, Snapchat, or Camera+. Unfortunately, a reboot, or even a factory reset, doesn’t fix the issue: after a reboot the camera works fine, but only until a third-party camera app is launched again.

According to reports, Google will sometimes acknowledge the issue when pressed by customers, but is refusing to replace affected devices. Instead, users are being told to wait for a software update – and at this point there is no ETA for one.

DPReview has contacted Google and will update this article if we receive a response.

Is your Pixel 3 camera working as it should? Let us know in the comments.


Google promises software fix for Pixel 3 image saving issues

25 Oct

It’s not unusual to see one or two software bugs on a newly released smartphone, but it looks like some Google Pixel 3 and 3 XL units are affected by a particularly nasty camera-related problem. Many early adopters have reported a bug that occasionally prevents photos from saving after capture in the camera app.

The technical details behind the problem are not quite clear, but discussions on Reddit suggest that older Pixel phones, and even Nexus devices, have had similar issues in the past.

The good news is that Google is now reacting and addressing the issue. A spokesperson talked to Android Police and provided the following statement:

“We will be rolling out a software update in the coming weeks to address the rare case of a photo not properly saving.”

The company has also confirmed that the bug will not only be fixed on the latest Pixel 3 devices but also on older Google Pixel 1 and 2 generation phones that are affected.

Even if it only happens rarely, a lost photo at an important moment is every photographer’s nightmare, so it’s good to see Google taking steps to fix the issue across all affected models.


Google Pixel 3 XL sample gallery

19 Oct


The Pixel 3 represents another step forward in computational photography for Google’s smartphone line. Features like super-resolution digital zoom, a synthetic fill-flash effect and learning-based Portrait Mode improvements are just a few of the ways the company is making the most of a single main camera. We’ve just started testing the Pixel 3 XL, but in the meantime take a look at some sample images.

We’ve included some Raw conversions and made Raws available for download where possible; however, please note that Raw support appears to be preliminary. Default conversions are very flat and require significant post-processing. We expect this to be remedied soon with proper profiles.


How Google developed the Pixel 3’s Super Res Zoom technology

18 Oct

In a post on the Google AI Blog, Google engineers have laid out how they created the new Super Res Zoom technology inside the Pixel 3 and Pixel 3 XL.

Over the past year or so, several smartphone manufacturers have added multiple cameras to their phones with 2x or even 3x optical zoom lenses. Google, however, has taken a different path, deciding instead to stick with a single main camera in its new Pixel 3 models and implementing a new feature it is calling Super Res Zoom.

Unlike conventional digital zoom, Super Res Zoom technology isn’t simply upscaling a crop from a single image. Instead, the technology merges many slightly offset frames to create a higher resolution image. Google claims the end results are roughly on par with 2x optical zoom lenses on other smartphones.

Compared to the standard demosaicing pipeline that needs to interpolate missing colors due to the Bayer color filter array (top), gaps can be filled by shifting multiple images one pixel horizontally or vertically. Some dedicated cameras implement this by physically shifting the sensor in one pixel increments, but the Pixel 3 does it cleverly by essentially finding the correct alignment in software after collecting multiple, randomly shifted samples. Illustration: Google

The Google engineers are using the photographer’s hand motion – and the resulting movement between individual frames of a burst – to their advantage. Google says this natural hand tremor occurs for everyone, even those users with “steady hands”, and has a magnitude of just a few pixels when shooting with a high-resolution sensor.

The pictures in a burst are aligned by choosing a reference frame and then aligning all other frames to it with sub-pixel precision in software. When the device is mounted on a tripod or otherwise stabilized, natural hand motion is simulated by slightly moving the camera’s OIS module between shots.

As a bonus, there’s no more need to demosaic, resulting in even more image detail. With enough frames in a burst, any scene element will have fallen on a red, green, and blue pixel on the image sensor. After alignment, R, G, and B information is available for every scene element, removing the need for demosaicing.
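The no-demosaic claim is easy to verify in an idealized setting. The sketch below is ours, not Google’s code: it assumes exactly known one-pixel shifts and ignores noise and optics (which the real pipeline cannot). It samples a scene through an RGGB Bayer mosaic four times, shifted by one pixel each way, and recovers full color at every position with no interpolation:

```python
import numpy as np

rng = np.random.default_rng(2)
scene = rng.random((6, 6, 3))  # ground-truth RGB scene

# RGGB Bayer mosaic: each sensor pixel sees only one color channel.
channel = np.zeros((6, 6), dtype=int)  # 0=R, 1=G, 2=B
channel[0::2, 1::2] = 1
channel[1::2, 0::2] = 1
channel[1::2, 1::2] = 2

recon = np.full_like(scene, np.nan)
for dy, dx in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    # Camera shifted by (dy, dx): sensor pixel (y, x) sees scene (y+dy, x+dx).
    view = np.roll(scene, (-dy, -dx), axis=(0, 1))
    sample = np.take_along_axis(view, channel[..., None], axis=2)[..., 0]
    # Align the capture back and drop each sample into its color channel.
    aligned = np.roll(sample, (dy, dx), axis=(0, 1))
    ch = np.roll(channel, (dy, dx), axis=(0, 1))
    rows, cols = np.indices(channel.shape)
    recon[rows, cols, ch] = aligned

print(np.allclose(recon, scene))  # True: full RGB everywhere, no demosaicing
```

The four parity-flipping shifts guarantee every scene position was sampled through an R, a G, and a B filter at some point in the burst, which is exactly the argument made above.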

For full technical detail of Google’s Super Res Zoom technology head over to the Google Blog. More information on the Pixel 3’s computational imaging features can be found here.


Google Pixel 3 interview: technical deep dive with the camera team

11 Oct

Recently, Science Editor Rishi Sanyal had the chance to sit down with two of Google’s most prominent imaging engineers and pick their brains about the software advances in the Pixel 3 and Pixel 3 XL. Isaac Reynolds is the Product Manager for Camera on Pixel and Marc Levoy is a Distinguished Engineer and is the Computational Photography Lead at Google. From computational Raw to learning-based auto white balance, they gave us an overview of some key new camera features and an explanation of the tech that makes them tick.

Features covered in this video include the wide-angle selfie camera, Synthetic Fill Flash, Night Sight, Super Resolution Zoom, computational Raw, Top Shot and the method behind improving depth maps in Portrait Mode.

These features are also covered in written form in a previously published article here.


Five ways Google Pixel 3 pushes the boundaries of computational photography

11 Oct

With the launch of the Google Pixel 3, smartphone cameras have taken yet another leap in capability. I had the opportunity to sit down with Isaac Reynolds, Product Manager for Camera on Pixel, and Marc Levoy, Distinguished Engineer and Computational Photography Lead at Google, to learn more about the technology behind the new camera in the Pixel 3.

One of the first things you might notice about the Pixel 3 is the single rear camera. At a time when we’re seeing companies add dual, triple, even quad-camera setups, one main camera seems at first an odd choice.

But after speaking to Marc and Isaac I think that the Pixel camera team is taking the correct approach – at least for now. Any technology that makes a single camera better will make multiple cameras in future models that much better, and we’ve seen in the past that a single camera approach can outperform a dual camera approach in Portrait Mode, particularly when the telephoto camera module has a smaller sensor and slower lens, or lacks reliable autofocus.

Let’s take a closer look at some of the Pixel 3’s core technologies.

1. Super Res Zoom

Last year the Pixel 2 showed us what was possible with burst photography. HDR+ was its secret sauce, and it worked by constantly buffering nine frames in memory. When you press the shutter, the camera essentially goes back in time to those last nine frames1, breaks each of them up into thousands of ’tiles’, aligns them all, and then averages them.

Breaking each image into small tiles allows for advanced alignment even when the photographer or subject introduces movement. Blurred elements in some shots can be discarded, or subjects that have moved from frame to frame can be realigned. Averaging simulates the effects of shooting with a larger sensor by ‘evening out’ noise. And going back in time to the last 9 frames captured right before you hit the shutter button means there’s zero shutter lag.
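A drastically simplified version of that merge can be sketched as follows. This is our illustration, not Google’s code: real HDR+ aligns per tile at sub-pixel precision on Raw data and robustly rejects inconsistent pixels, while this toy aligns whole frames by brute-force integer shift and averages.

```python
import numpy as np

def align_and_merge(frames, search=2):
    """Align each frame of a burst to the first by the integer shift
    (within +/-search pixels) that minimizes squared error, then average."""
    ref = frames[0].astype(float)
    acc = ref.copy()
    shifts = [(dy, dx) for dy in range(-search, search + 1)
                       for dx in range(-search, search + 1)]
    for f in frames[1:]:
        best = min(shifts, key=lambda s: np.sum(
            (np.roll(f, s, axis=(0, 1)) - ref) ** 2))
        acc += np.roll(f, best, axis=(0, 1))
    return acc / len(frames)

rng = np.random.default_rng(0)
clean = rng.random((32, 32))
frames = []
for i in range(9):  # a nine-frame burst with hand shake and noise
    shift = (0, 0) if i == 0 else tuple(rng.integers(-1, 2, size=2))
    frames.append(np.roll(clean, shift, axis=(0, 1))
                  + rng.normal(0, 0.1, clean.shape))

merged = align_and_merge(frames)
print(f"single frame noise: {np.std(frames[0] - clean):.3f}")  # ~0.1
print(f"merged noise:       {np.std(merged - clean):.3f}")     # ~0.033
```

Averaging nine aligned frames cuts the noise by a factor of three – the ‘larger sensor simulation’ described above.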

Like the Pixel 2, HDR+ allows the Pixel 3 to render sharp, low noise images even in high contrast situations. Click image to view the level of detail at 100%. Photo: Google

This year, the Pixel 3 pushes all this further. It uses HDR+ burst photography to buffer up to 15 images2, and then employs super-resolution techniques to increase the resolution of the image beyond what the sensor and lens combination would traditionally achieve3. Subtle shifts from handheld shake and optical image stabilization (OIS) allow scene detail to be localized with sub-pixel precision, since shifts are unlikely to be exact multiples of a pixel.

In fact, I was told the shifts are carefully controlled by the optical image stabilization system. “We can demonstrate the way the optical image stabilization moves very slightly” remarked Marc Levoy. Precise sub-pixel shifts are not necessary at the sensor level though; instead, OIS is used to uniformly distribute a bunch of scene samples across a pixel, and then the images are aligned to sub-pixel precision in software.
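The sample-distribution idea can be shown in miniature. In this idealized sketch (ours, and deliberately simplified: the offsets are known exactly, whereas the Pixel must estimate them from the captured frames), four captures at half-pixel offsets are interleaved back onto a grid with twice the resolution:

```python
import numpy as np

rng = np.random.default_rng(1)
fine = rng.random((64, 64))  # the scene at the target 2x resolution

# Four low-res captures, each sampling the scene at a different
# half-pixel (one fine-grid pixel) offset - stand-ins for OIS shifts.
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]
frames = [fine[dy::2, dx::2] for dy, dx in offsets]  # each 32x32

# Shift-and-add: with the offsets known, interleave the samples
# back onto the fine grid.
recon = np.empty_like(fine)
for (dy, dx), f in zip(offsets, frames):
    recon[dy::2, dx::2] = f

print(np.allclose(recon, fine))  # True: the fine grid is fully sampled
```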

We get a red, green, and blue filter behind every pixel just because of the way we shake the lens, so there’s no more need to demosaic

But Google – and Peyman Milanfar’s research team working on this particular feature – didn’t stop there. “We get a red, green, and blue filter behind every pixel just because of the way we shake the lens, so there’s no more need to demosaic” explains Marc. If you have enough samples, you can expect any scene element to have fallen on a red, green, and blue pixel. After alignment, then, you have R, G, and B information for any given scene element, which removes the need to demosaic. That itself leads to an increase in resolution (since you don’t have to interpolate spatial data from neighboring pixels), and a decrease in noise since the math required for demosaicing is itself a source of noise. The benefits are essentially similar to what you get when shooting pixel shift modes on dedicated cameras.

Normal wide-angle (28mm equiv.) Super Res Zoom

There’s a small catch to all this – at least for now. Super Res only activates at 1.2x zoom or more. Not in the default ‘zoomed out’ 28mm equivalent mode. As expected, the lower your level of zoom, the more impressed you’ll be with the resulting Super Res images, and naturally the resolving power of the lens will be a limitation. But the claim is that you can get “digital zoom roughly competitive with a 2x optical zoom” according to Isaac Reynolds, and it all happens right on the phone.

The results I was shown at Google appeared to be more impressive than the example we were provided above, no doubt at least in part due to the extreme zoom of our example here. We’ll reserve judgement until we’ve had a chance to test the feature for ourselves.

Would the Pixel 3 benefit from a second rear camera? For certain scenarios – still landscapes, for example – probably. But more cameras don’t always mean better capabilities. Quite often ‘second’ cameras have worse low light performance due to a smaller sensor and slower lens, as well as poorer autofocus due to fewer (or no) phase-detect pixels. One huge advantage of the Pixel’s Portrait Mode is that its autofocus doesn’t differ from normal wide-angle shooting: dual pixel AF combined with HDR+ and pixel-binning yields incredible low light performance, even with fast-moving, erratic subjects.

2. Computational Raw

The Pixel 3 introduces ‘computational Raw’ capture in the default camera app. Isaac stressed that when Google decided to enable Raw in its Pixel cameras, they wanted to do it right, taking advantage of the phone’s computational power.

Our Raw file is the result of aligning and merging multiple frames, which makes it look more like the result of a DSLR

“There’s one key difference relative to the rest of the industry. Our DNG is the result of aligning and merging [up to 15] multiple frames… which makes it look more like the result of a DSLR” explains Marc. There’s no exaggeration here: we know very well that image quality tends to scale with sensor size thanks to a greater amount of total light collected per exposure, which reduces the impact of the most dominant source of noise in images: photon shot, or statistical, noise.

The Pixel cameras can effectively make up for their small sensor sizes by capturing more total light through multiple exposures, while aligning moving objects from frame to frame so they can still be averaged to decrease noise. That means better low light performance and higher dynamic range than what you’d expect from such a small sensor.
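The noise benefit of averaging is easy to simulate: the standard deviation of a mean of N independent frames falls by roughly the square root of N. A minimal pure-Python sketch, modelling shot noise as Gaussian (signal level, frame count and trial count are arbitrary illustrative numbers):

```python
import random
import statistics

random.seed(0)
SIGNAL, SHOT_SIGMA, FRAMES, TRIALS = 100.0, 10.0, 15, 2000

def capture():
    # One noisy exposure: shot noise approximated as Gaussian
    # with sigma = sqrt(signal level).
    return random.gauss(SIGNAL, SHOT_SIGMA)

single = [capture() for _ in range(TRIALS)]
merged = [statistics.fmean(capture() for _ in range(FRAMES))
          for _ in range(TRIALS)]

print(round(statistics.stdev(single), 1))  # ~10: noise of one frame
print(round(statistics.stdev(merged), 1))  # ~10 / sqrt(15), i.e. ~2.6
```

A 15-frame merge therefore buys close to a 4x reduction in noise, which is the mechanism behind the “better low light performance and higher dynamic range” claim.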

Shooting Raw lets you take advantage of that extra range: you can pull back blown highlights, raise shadows that would otherwise be clipped to black in the JPEG, and adjust white balance freely in post, since the color channels aren’t scaled before the Raw file is written.

Pixel 3 introduces in-camera computational Raw capture.

Such ‘merged’ Raw files represent a major threat to traditional cameras. Based on sensor size alone, the math suggests that 15 averaged frames from the Pixel 3 sensor should compete with APS-C sized sensors in terms of noise levels. There are more factors at play, including fill factor, quantum efficiency and microlens design, but needless to say we’re very excited to get the Pixel 3 into our studio scene and compare it with dedicated cameras in Raw mode, where the effects of the JPEG engine can be decoupled from Raw performance.
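As a back-of-the-envelope check of that claim – the sensor dimensions below are approximate figures we’ve assumed for a ~1/2.55" smartphone sensor and a typical APS-C chip, not official specs:

```python
# Assumed, approximate sensor dimensions in millimetres:
pixel3_area = 5.6 * 4.2     # ~23.5 mm^2 (typical 1/2.55"-class sensor)
apsc_area = 23.5 * 15.6     # ~366.6 mm^2 (typical APS-C sensor)
frames = 15

# Area ratio between the two sensors:
print(round(apsc_area / pixel3_area, 1))   # ~15.6

# Total sensor-area-seconds of light gathered by a 15-frame merge:
print(round(frames * pixel3_area, 1))      # ~352.8 mm^2 -> close to one APS-C exposure
```

In other words, the ~15x area deficit and the 15-frame merge roughly cancel out – which is why the comparison with APS-C is plausible, before the other factors mentioned above are taken into account.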

While solutions do exist for aligning and combining multiple Raws from traditional cameras into a single output DNG, having an integrated solution in a smartphone – one that takes advantage of Google’s frankly class-leading tile-based align and merge, with no ghosting artifacts even when objects move within the frame – is incredibly exciting. This feature should prove highly beneficial to enthusiast photographers. What’s more, Raws are automatically uploaded to Google Photos, so you don’t have to worry about transferring them as you do with traditional cameras.

3. Synthetic Fill Flash

‘Synthetic Fill Flash’ adds a glow to human subjects, as if a reflector were held out in front of them. Photo: Google

Often a photographer will use a reflector to light the faces of backlit subjects. Pixel 3 does this computationally. The same machine-learning based segmentation algorithm that the Pixel camera uses in Portrait Mode is used to identify human subjects and add a warm glow to them.

If you’ve used the front facing camera on the Pixel 2 for Portrait Mode selfies, you’ve probably noticed how well it detects and masks human subjects using only segmentation. By using that same segmentation method for synthetic fill flash, the Pixel 3 is able to relight human subjects very effectively, with believable results that don’t confuse and relight other objects in the frame.
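Conceptually, the relight step reduces to applying extra gain inside the segmentation mask. Here is a toy sketch with a hand-made mask standing in for the ML-derived one; the `gain` and `warmth` values are arbitrary illustrative numbers, not anything Google has published:

```python
def fill_flash(pixels, mask, gain=1.3, warmth=1.1):
    """Brighten masked (subject) pixels, with extra gain on red for warmth.
    pixels: rows of (r, g, b) tuples; mask: rows of booleans."""
    out = []
    for row_px, row_m in zip(pixels, mask):
        out.append([
            (min(255, int(r * gain * warmth)),  # extra red gain = warm glow
             min(255, int(g * gain)),
             min(255, int(b * gain))) if m else (r, g, b)
            for (r, g, b), m in zip(row_px, row_m)
        ])
    return out

image = [[(100, 100, 100), (50, 50, 50)]]
mask = [[True, False]]   # only the first pixel is "subject"
print(fill_flash(image, mask))  # [[(143, 130, 130), (50, 50, 50)]]
```

The point of the sketch is that the hard part is the mask, not the relight – which is why reusing the Portrait Mode segmentation makes the feature cheap to add.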

Interestingly, the same segmentation methods used to identify human subjects are also used for front-facing video image stabilization, which is great news for vloggers. If you’re vlogging, you typically want yourself, not the background, to be stabilized. That’s impossible with typical gyro-based optical image stabilization. The Pixel 3 analyzes each frame of the video feed and uses digital stabilization to steady you in the frame. There’s a small crop penalty to enabling this mode, but it allows for very steady video of the person holding the camera.
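The idea can be sketched as choosing, per frame, a crop window that keeps the subject’s centroid centred. This is a minimal illustration assuming per-frame subject centroids are already available from segmentation; the frame and crop sizes are arbitrary:

```python
def stabilizing_crops(centroids, frame_w, frame_h, crop_w, crop_h):
    """Per-frame top-left corners of a crop that keeps the subject centred,
    clamped so the crop never leaves the frame (hence the crop penalty)."""
    crops = []
    for cx, cy in centroids:
        x = min(max(int(cx - crop_w / 2), 0), frame_w - crop_w)
        y = min(max(int(cy - crop_h / 2), 0), frame_h - crop_h)
        crops.append((x, y))
    return crops

# Subject drifts right across a 1920x1080 frame; an 80% crop follows it,
# so the subject stays put in the output video.
print(stabilizing_crops([(900, 540), (960, 540), (1020, 540)],
                        1920, 1080, 1536, 864))
# [(132, 108), (192, 108), (252, 108)]
```

Note how the crop corner moves with the subject: the background shifts from frame to frame, but the person stays fixed in the output, which is the opposite trade-off to gyro-based stabilization.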

4. Learning-based Portrait Mode

The Pixel 2 had one of the best Portrait Modes we’ve tested, despite having only one lens. This was due to its clever use of split pixels to sample a stereo pair of images behind the lens, combined with machine-learning based segmentation to understand human vs. non-human objects in the scene (for an in-depth explanation, watch my video here). Furthermore, dual pixel AF meant robust performance even with moving subjects in low light – great for constantly moving toddlers. The Pixel 3 brings some significant improvements, despite still lacking a second rear lens.

According to computational lead Marc Levoy, “Where we used to compute stereo from the dual pixels, we now use a learning-based pipeline. It still utilizes the dual pixels, but it’s not a conventional algorithm, it’s learning based”. What this means is improved results: more uniformly defocused backgrounds and fewer depth map errors. Have a look at the improved results with complex objects, where many approaches are unable to reliably blur backgrounds ‘seen through’ holes in foreground objects:

Learned result: background objects, especially those seen through the toy, are consistently blurred, as are objects around the peripheries of the image. Learned depth map: note how objects in the background (blue) aren’t confused as being closer to the foreground (yellow), as they are in the stereo-only depth map below.
Stereo-only result: background objects, especially those seen through the toy, aren’t consistently blurred. Stereo-only depth map from dual pixels: note how some elements in the background appear to be closer to the foreground than they really are.

Interestingly, this learning-based approach also yields better results with mid-distance shots where a person is further away. Typically, the further away your subject is, the less difference in stereo disparity between your subject and background, making accurate depth maps difficult to compute given the small 1mm baseline of the split pixels. Take a look at the Portrait Mode comparison below, with the new algorithm on the left vs. the old on the right.

Learned result. The background is uniformly defocused, and the ground shows a smooth, gradual blur. Stereo-only result. Note the sharp railing in the background, and the harsh transition from in-focus to out-of-focus in the ground.
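A toy pinhole-stereo model shows why disparity collapses at distance. The baseline is the ~1mm figure mentioned above; the focal length and pixel pitch below are assumed, illustrative values for a smartphone camera, not confirmed Pixel 3 specs:

```python
BASELINE_MM = 1.0      # split-pixel baseline (from the text above)
FOCAL_MM = 4.4         # assumed smartphone focal length
PITCH_MM = 0.0014      # assumed pixel pitch (~1.4 micron)

def disparity_px(depth_mm):
    """Pinhole stereo: disparity = baseline * focal length / depth,
    converted from millimetres on the sensor to pixels."""
    return BASELINE_MM * FOCAL_MM / depth_mm / PITCH_MM

for metres in (1, 3, 5):
    print(metres, round(disparity_px(metres * 1000), 2))
# 1 3.14
# 3 1.05
# 5 0.63
```

Under these assumptions, a subject at 5m produces well under a pixel of disparity between the split-pixel views – little signal for a conventional stereo algorithm, and exactly the regime where the learned pipeline pulls ahead.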

5. Night Sight

Rather than simply relying on long exposures for low light photography, ‘Night Sight’ utilizes HDR+ burst photography to take usable photos in very dark situations. Previously, the Pixel 2 would never drop below a 1/15s shutter speed, because it needed faster shutter speeds to maintain its 9-frame buffer with zero shutter lag. That means even the Pixel 2 could, in very low light, effectively sample 0.6 seconds (9 x 1/15s), but sometimes even that isn’t enough to get a usable photo in extremely dark situations.
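Zero shutter lag amounts to a ring buffer of recent preview frames: the camera captures continuously, and pressing the shutter simply freezes the most recent frames for merging. A minimal sketch using the 9-frame depth described above (frame contents are just integers here):

```python
from collections import deque

class ZslBuffer:
    """Zero-shutter-lag buffer: old frames fall off automatically
    as new preview frames stream in."""
    def __init__(self, depth=9):
        self.frames = deque(maxlen=depth)

    def on_preview_frame(self, frame):
        self.frames.append(frame)

    def on_shutter(self):
        # Return frames captured *before* the button press.
        return list(self.frames)

buf = ZslBuffer()
for t in range(20):          # 20 preview frames stream in
    buf.on_preview_frame(t)
print(buf.on_shutter())      # [11, 12, 13, 14, 15, 16, 17, 18, 19]
```

Keeping the buffer fresh is why shutter speeds have a floor in ZSL mode – and why Night Sight, which abandons ZSL, can afford much longer per-frame exposures.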

The camera will merge up to 15 frames… to get you an image equivalent to a 5 second exposure

The Pixel 3 now has a ‘Night Sight’ mode which sacrifices the zero shutter lag and expects you to hold the camera steady after you’ve pressed the shutter button. When you do so, the camera will merge up to 15 frames, each with shutter speeds as low as, say, 1/3s, to get you an image equivalent to a 5 second exposure – but without the motion blur that would inevitably result from an actual exposure of that length.

Put simply: even though there might be subject or handheld movement over the entire 5s span of the 15-frame burst, many of the 1/3s ‘snapshots’ within that burst are likely to still be sharp, albeit possibly displaced relative to one another. The tile-based alignment of Google’s ‘robust merge’ technology, however, can handle inter-frame movement by aligning objects that have moved and discarding tiles of any frame that show too much motion blur.
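The rejection logic can be sketched in miniature: average only the tiles that agree with a base frame, and drop outliers rather than letting them ghost. This is a heavily simplified stand-in for Google’s actual align-and-merge – one number per tile, global alignment assumed, and an arbitrary agreement threshold:

```python
def merge_tiles(frames, threshold=10):
    """Average corresponding tiles across frames, skipping any tile
    that deviates too far from the base frame (motion blur, moved subject)."""
    base = frames[0]
    merged = []
    for i, ref in enumerate(base):
        usable = [f[i] for f in frames if abs(f[i] - ref) <= threshold]
        merged.append(sum(usable) / len(usable))
    return merged

frames = [
    [100, 200, 50],   # base frame
    [102, 198, 50],   # consistent frame: all tiles contribute
    [101, 280, 51],   # tile 1 moved drastically -> rejected for that tile only
]
print(merge_tiles(frames))  # [101.0, 199.0, ~50.33]
```

Note that the outlier frame still contributes its sharp tiles – only the offending tile is dropped, so most of the burst’s light is kept even when something moves.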

Have a look at the results below, which also shows you the benefit of the wider-angle, second front-facing ‘groupie’ camera:

Normal front-camera ‘selfie’ Night Sight ‘groupie’ with wide-angle front-facing lens

Furthermore, Night Sight mode takes a machine-learning based approach to auto white balance. It’s often very difficult to determine the dominant light source in such dark environments, so Google has opted to use learning-based AWB to yield natural looking images.

Final thoughts: simpler photography

The philosophy behind the Pixel camera – and for that matter the philosophy behind many smartphone cameras today – is one-button photography. A seamless experience without the need to activate various modes or features.

This is possible thanks to the computational approaches these devices embrace. The Pixel camera and software are designed to give you pleasing results without requiring you to think much about camera settings. Synthetic fill flash activates automatically for backlit human subjects, and Super Res Zoom automatically kicks in as you zoom.

At their best, these technologies allow you to focus on the moment

Motion Photos turns on automatically when the camera detects interesting activity, and Top Shot now uses AI to automatically suggest the best photo of the bunch – even if it’s a moment that occurred before you pressed the shutter button. Autofocus typically locks onto human subjects very reliably, and when you need to specify a subject yourself, just tap on it and ‘Motion Autofocus’ will continue to track and focus on it. Perfect for your toddler or pet.

At their best, these technologies allow you to focus on the moment, perhaps even enjoy it, and sometimes even help you to capture memories you might have otherwise missed.

We’ll be putting the Pixel 3 through its paces soon, so stay tuned. In the meantime, let us know in the comments below what your favorite features are, and what you’d like to see tested.


1In good light, these last 9 frames typically span the last 150ms before you pressed the shutter button. In very low light, it can span up to the last 0.6s.

2We were only told ‘say, maybe 15 images’ in conversation about the number of images in the buffer for Super Res Zoom and Night Sight. It may be more, it could be less, but we were at least told that it is more than 9 frames. One thing to keep in mind is that even if you have a 15-frame buffer, not all frames are guaranteed to be usable. For example, if in Night Sight one or more of these frames have too much subject motion blur, they’re discarded.

3You can achieve a similar super-resolution effect manually with traditional cameras, and we describe the process here.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Five ways Google Pixel 3 pushes the boundaries of computational photography
