How Google developed the Pixel 3’s Super Res Zoom technology

18 Oct

In a post on the Google AI Blog, Google engineers have laid out how they created the new Super Res Zoom technology inside the Pixel 3 and Pixel 3 XL.

Over the past year or so, several smartphone manufacturers have added multiple cameras to their phones with 2x or even 3x optical zoom lenses. Google, however, has taken a different path, deciding instead to stick with a single main camera in its new Pixel 3 models and implementing a new feature it is calling Super Res Zoom.

Unlike conventional digital zoom, Super Res Zoom technology isn’t simply upscaling a crop from a single image. Instead, the technology merges many slightly offset frames to create a higher resolution image. Google claims the end results are roughly on par with 2x optical zoom lenses on other smartphones.
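The merging idea can be illustrated with a toy sketch (this is not Google's actual pipeline, which handles motion, occlusion, and noise far more carefully): each low-resolution sample is dropped onto a finer grid at its known sub-pixel offset, and cells are averaged where samples overlap. The `merge_frames` function and its parameters are hypothetical, for illustration only.

```python
import numpy as np

def merge_frames(frames, shifts, scale=2):
    """Toy multi-frame merge: place each low-res sample onto a finer
    grid at its (known) sub-pixel offset, then average per cell.
    Not Google's algorithm - just the basic accumulation idea."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        # the sub-pixel shift maps each sample to a distinct high-res cell
        hy = (ys * scale + round(dy * scale)) % (h * scale)
        hx = (xs * scale + round(dx * scale)) % (w * scale)
        acc[hy, hx] += frame
        cnt[hy, hx] += 1
    cnt[cnt == 0] = 1  # leave unsampled cells at zero instead of dividing by zero
    return acc / cnt
```

If the frames happen to sample the scene on complementary half-pixel offsets, this toy merge recovers the full-resolution grid exactly; real bursts have irregular offsets, so the true pipeline must interpolate and weight samples.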

The standard demosaicing pipeline must interpolate the missing colors introduced by the Bayer color filter array (top). Those gaps can instead be filled by capturing multiple images shifted one pixel horizontally or vertically. Some dedicated cameras implement this by physically shifting the sensor in one-pixel increments, but the Pixel 3 does it cleverly by finding the correct alignment in software after collecting multiple, randomly shifted samples. Illustration: Google

The Google engineers are using the photographer’s hand motion – and the resulting movement between individual frames of a burst – to their advantage. Google says this natural hand tremor occurs for everyone, even those users with “steady hands”, and has a magnitude of just a few pixels when shooting with a high-resolution sensor.

The pictures in a burst are aligned by choosing a reference frame and then aligning all other frames relative to it, to sub-pixel precision, in software. When the device is mounted on a tripod or otherwise stabilized, natural hand motion is simulated by slightly moving the camera’s OIS (optical image stabilization) module between shots.
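A classic way to estimate the shift between a reference frame and another frame in the burst is phase correlation; the sketch below recovers only whole-pixel translations, whereas Google's alignment works to sub-pixel precision and is considerably more sophisticated. The function name is an assumption for illustration.

```python
import numpy as np

def estimate_shift(ref, frame):
    """Estimate the (dy, dx) translation of `frame` relative to `ref`
    via phase correlation: the peak of the inverse FFT of the
    normalized cross-power spectrum marks the shift.
    Integer-pixel only - a sketch, not Google's sub-pixel aligner."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(frame)
    cross /= np.abs(cross) + 1e-12  # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:  # wrap to a signed shift
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```

In a burst pipeline, each frame's estimated shift relative to the reference frame tells the merge step where that frame's samples belong on the output grid.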

As a bonus, there’s no longer any need to demosaic, resulting in even more image detail. With enough frames in a burst, any scene element will have fallen on a red, a green, and a blue pixel on the image sensor, so after alignment full R, G, and B information is available for every scene element.
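The coverage argument is easy to check in code. The sketch below assumes fixed one-pixel offsets for clarity (hand tremor gives random offsets, but the principle is the same): with four frames shifted by one pixel, every scene position is sampled through a red, a green, and a blue filter of an RGGB Bayer mosaic.

```python
# RGGB Bayer pattern: which color the sensor pixel at (y, x) records
def bayer_channel(y, x):
    if y % 2 == 0 and x % 2 == 0:
        return "R"
    if y % 2 == 1 and x % 2 == 1:
        return "B"
    return "G"  # greens sit on the two mixed-parity positions

# Four frames offset by one pixel horizontally and/or vertically
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]

def colors_seen(py, px):
    """Colors sampled at scene position (py, px) across the shifted frames."""
    return {bayer_channel(py + dy, px + dx) for dy, dx in shifts}
```

Since the Bayer pattern repeats every two pixels, checking the four position parities confirms that every pixel receives all three colors, which is why no interpolation-based demosaicing is needed after alignment.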

For full technical detail on Google’s Super Res Zoom technology, head over to the Google AI Blog. More information on the Pixel 3’s computational imaging features can be found here.

Articles: Digital Photography Review (dpreview.com)
