Portrait modes that simulate the shallow depth of field of a large sensor camera and fast lens have been around on smartphones for a long time. The Pixel 2 was the first Google phone to offer the feature. Being a single-lens camera, the Pixel 2 used its dual-pixel autofocus system to estimate parallax and thus depth. The Pixel 3 still relied on dual-pixels, but the system was improved using machine learning.
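To get a feel for how parallax between the two dual-pixel views can be turned into a depth cue, here is a minimal block-matching sketch in Python. It assumes the two half-pixel views are available as grayscale NumPy arrays; the patch size and search range are illustrative choices, not Google's actual parameters.

```python
import numpy as np

def block_matching_disparity(left, right, patch=8, max_shift=4):
    """Estimate per-patch disparity between two dual-pixel views.

    Dual-pixel baselines are tiny, so the horizontal search range
    (max_shift) is only a few pixels. Returns one disparity value
    per patch.
    """
    h, w = left.shape
    disp = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            ref = left[i:i + patch, j:j + patch].astype(np.float32)
            best, best_err = 0, np.inf
            # Slide a candidate window horizontally over the other view.
            for s in range(-max_shift, max_shift + 1):
                if j + s < 0 or j + s + patch > w:
                    continue
                cand = right[i:i + patch, j + s:j + s + patch].astype(np.float32)
                err = np.sum((ref - cand) ** 2)  # sum of squared differences
                if err < best_err:
                    best, best_err = s, err
            disp[i // patch, j // patch] = best
    return disp
```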
The Pixel 4 is the first Google phone to use dual-pixel AF and dual cameras combined for depth estimation.
The latest Google flagship, the Pixel 4, is the first phone in the Pixel line to feature dual cameras. This allows for even better depth estimation by leveraging both the dual-camera and dual-pixel autofocus systems. In addition, Google has improved the appearance of the bokeh, making it more closely match that of a large-sensor DSLR or mirrorless camera.
With dual-pixel autofocus, the distance between the two focus pixels is very small, which makes it difficult to estimate the depth of objects further away from the camera. The Pixel 4's dual cameras are 13 mm apart, providing a much larger parallax and making it easier to estimate the depth of objects at a distance.
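The underlying geometry is standard stereo triangulation: depth = focal length × baseline / disparity, so for a given depth the disparity on the sensor scales linearly with the baseline. A small sketch, using the 13 mm baseline from the article; the focal length and pixel pitch are placeholder values, not confirmed Pixel 4 specs.

```python
def depth_from_disparity(disparity_px, baseline_mm=13.0,
                         focal_length_mm=4.4, pixel_pitch_mm=0.0014):
    """Stereo triangulation: depth Z = f * B / d.

    baseline_mm=13 is the Pixel 4 camera spacing; focal_length_mm
    and pixel_pitch_mm are illustrative placeholders. Zero disparity
    corresponds to an object at infinity.
    """
    d_mm = disparity_px * pixel_pitch_mm  # disparity measured on the sensor
    return float('inf') if d_mm == 0 else focal_length_mm * baseline_mm / d_mm

# A ~1 mm dual-pixel baseline produces roughly 1/13th the disparity
# of the 13 mm dual-camera baseline at the same depth, which is why
# distant depth is much harder to resolve from dual-pixels alone.
print(depth_from_disparity(2.0))  # 2 px of disparity -> ~20,000 mm (~20 m)
```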
As Google's blog post puts it: "…dual-pixels provide better depth information in the occluded regions between the arm and torso, while the large-baseline dual cameras provide better depth information in the background and on the ground."
Google still uses the information collected by the dual-pixels, though, as it helps refine depth estimation around the foreground subject. In addition, machine learning is used to estimate depth from both the dual-pixels and the dual cameras: a neural network first processes the two inputs separately into intermediate representations, and a final depth map is then computed from them in a second step.
This image shows depth maps generated from dual-pixel AF, the dual cameras, and both combined. Dual-pixels provide better depth in areas visible to only one camera, while the dual cameras provide better depth in the background and on the ground. (Photo: Mike Milne/Google)
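A rough sketch of that two-step structure in PyTorch, purely for illustration: Google has not published the exact architecture, so the layer sizes and channel counts here are invented.

```python
import torch
import torch.nn as nn

class TwoBranchDepthNet(nn.Module):
    """Two-step fusion: each depth cue is encoded separately into an
    intermediate representation, then a shared decoder predicts the
    final depth map."""

    def __init__(self):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            )
        self.dp_encoder = encoder()      # two dual-pixel half-images
        self.stereo_encoder = encoder()  # two aligned camera views
        self.decoder = nn.Sequential(    # fuses the two representations
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),  # single-channel depth map
        )

    def forward(self, dual_pixel, dual_camera):
        feats = torch.cat([self.dp_encoder(dual_pixel),
                           self.stereo_encoder(dual_camera)], dim=1)
        return self.decoder(feats)

net = TwoBranchDepthNet()
dp = torch.randn(1, 2, 64, 64)    # dual-pixel input
cams = torch.randn(1, 2, 64, 64)  # dual-camera input
depth = net(dp, cams)             # -> (1, 1, 64, 64) depth map
```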
In addition to the improved depth estimation, spotlights in the background are now rendered with more contrast, making for more natural-looking results. This is achieved by blurring the merged raw image produced by the HDR+ pipeline and only then applying tone mapping.
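The order of operations is what matters: blurring in linear raw space preserves the true intensity of bright spotlights, so they spread into bright, saturated bokeh discs, whereas blurring after tone mapping averages already-compressed values and washes the discs out. A minimal sketch, using a Reinhard-style curve as a stand-in for HDR+'s actual tone mapping and a uniform Gaussian blur in place of the real depth-dependent blur:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tone_map(x):
    """Simple global tone curve (stand-in for HDR+ tone mapping)."""
    return x / (1.0 + x)  # Reinhard-style compression

def render_bokeh(linear_raw, sigma=8.0):
    """Blur in linear space *before* tone mapping.

    A spotlight can be 100x brighter than its surroundings in linear
    space; blurring it first spreads that energy into a bright disc
    that survives tone mapping with its contrast intact.
    """
    return tone_map(gaussian_filter(linear_raw, sigma=sigma))

def render_bokeh_wrong(linear_raw, sigma=8.0):
    """The wrong order, for comparison: tone mapping flattens the
    highlight first, so its bokeh disc comes out dim and grey."""
    return gaussian_filter(tone_map(linear_raw), sigma=sigma)
```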
Additional depth map samples can be found here; head over to the Google Blog for the full article.