
Posts Tagged ‘Smartphone’

Why smartphone cameras are blowing our minds

28 Apr

A modified version of this article was originally published February 20, 2018.

There’s no getting around physics – smartphone cameras, and therefore sensors, are tiny. And since we all (now) know that, generally speaking, it’s the amount of light you capture that determines image quality, smartphones have a serious disadvantage to deal with: they don’t capture enough light.

But that’s where computational photography comes in. By combining machine learning, computer vision, and computer graphics with traditional optical processes, computational photography aims to enhance what is achievable with traditional methods. Here’s a rundown of some recent developments in smartphone imaging – and why we think they’re a big deal.

Intelligent exposure and processing? Press. Here.

One of the defining characteristics of smartphone photography is the idea that you can get a great image with one button press, and nothing more. No exposure decision, no tapping on the screen to set your exposure, no exposure compensation, and no post-processing. Just take a look at what the Google Pixel 2 XL did with this huge dynamic range sunrise at Banff National Park in Canada:

Sunrise at Banff, with Mt. Rundle in the background. Shot on Pixel 2 with one button press. I also shot this with my Sony a7R II full-frame camera, but that required a 4-stop reverse graduated neutral density (‘Daryl Benson’) filter, and a dynamic range compensation mode (DRO Lv5) to get a usable image. While the resulting image from the Sony was head-and-shoulders above this one at 100%, I got this image from a device in my pocket by just pointing and shooting.

The Pixel 2 was able to achieve the image above by first determining the correct focal plane exposure required to not blow large bright (non-specular) areas (an approach known as ETTR or ‘expose-to-the-right’). When you press the shutter button, the Pixel 2 goes back in time 9 frames, aligning and averaging them to give you a final image with quality similar to what you might expect from a sensor with 9x as much surface area. While it’s not quite that simple – sensor efficiency and the number of usable frames for averaging can vary – it’s not far off: consider that the Pixel 2 can hold its own against the RX100’s 5x larger sensor when given the same amount of total light per exposure.

When you press the shutter button, the Pixel 2 goes back in time 9 frames

How does it do that? It’s constantly keeping the last 9 frames it shot in memory, so when you press the shutter it can grab them, break each into many square ’tiles’, align them all, and then average them. Breaking each image into small tiles allows for alignment despite photographer or subject movement by ignoring moving elements, discarding blurred elements in some shots, or re-aligning subjects that have moved from frame to frame. Averaging simulates the effects of shooting with a larger sensor by ‘evening out’ noise.
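To make the align-and-average idea concrete, here’s a minimal sketch of tile-based frame merging. It’s a toy version under stated assumptions: the synthetic scene is static, so ‘alignment’ is a no-op, whereas the real HDR+ pipeline estimates a per-tile shift before merging. All names and values are illustrative, not Google’s actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# A static 'scene' plus 9 noisy frames, standing in for the ring buffer
# of frames the camera keeps in memory before the shutter press.
scene = rng.uniform(100, 200, size=(64, 64))
frames = [scene + rng.normal(0, 10, scene.shape) for _ in range(9)]

def average_frames(frames, tile=16):
    """Merge frames tile by tile. Alignment is a no-op here because the
    scene is static; a real pipeline shifts each tile to match first."""
    out = np.zeros_like(frames[0])
    h, w = out.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            stack = np.stack([f[y:y + tile, x:x + tile] for f in frames])
            out[y:y + tile, x:x + tile] = stack.mean(axis=0)
    return out

merged = average_frames(frames)
noise_before = np.std(frames[0] - scene)  # single-frame noise, ~10
noise_after = np.std(merged - scene)      # roughly 10 / sqrt(9)
```

Averaging 9 frames cuts random noise by roughly √9 = 3x, which is why the merged result behaves like a capture from a sensor with about 9x the light-gathering area.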

That’s what allows the Pixel 2 to capture such a wide dynamic range scene: expose for the bright regions, while reducing noise in static elements of the scene by image averaging, while not blurring moving (water) elements of the scene by making intelligent decisions about what to do with elements that shift from frame to frame. Sure, moving elements have more noise to them (since they couldn’t have as many of the 9 frames dedicated to them for averaging), but overall, do you see anything but a pleasing image?

Autofocus

Improvements in autofocus, combined with the extended depth-of-field inherent to smaller sensors, are bringing focus performance of smartphones nearer and nearer to that of high performance dedicated cameras. Dual Pixel AF on the Google Pixel 2 uses nearly the entire sensor for autofocus (binning the high-resolution sensor into a low-resolution mode to decrease noise), while also using HDR+ and its 9-frame image averaging to further decrease noise and have a usable signal to make AF calculations from.

Google Pixel 2 can focus lightning fast even in indoor artificial light, thanks to Dual Pixel AF, allowing me to snap this candid before it was over in a split second. Technologies like ‘Dual PDAF’ autofocus – used by recent iPhones – don’t quite offer this level of performance (the iPhone X lagged and caught a less interesting moment seconds later when it eventually achieved focus), but offer potential image quality benefits.

And despite the left and right perspectives the split pixels in the Pixel 2 sensor ‘see’ having less than 1mm stereo disparity, an impressive depth map can be built, rendering an optically accurate lens blur. This isn’t just a matter of masking the foreground and blurring the background, it’s an actual progressive blur based on depth.
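A depth-dependent progressive blur of this sort can be illustrated with a deliberately simple sketch: each pixel is blurred with a radius proportional to its depth-map distance from the focal plane. This is a hedged toy (a box blur, no occlusion handling), not the actual rendering Google uses, which models a proper lens-shaped kernel.

```python
import numpy as np

def progressive_blur(image, depth, focus_depth, max_radius=8):
    """Blur each pixel with a radius proportional to its depth-map
    distance from the focal plane -- a toy depth-based lens blur."""
    h, w = image.shape
    out = np.empty_like(image, dtype=float)
    radius = (np.abs(depth - focus_depth) * max_radius).astype(int)
    for y in range(h):
        for x in range(w):
            r = radius[y, x]
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()  # box blur of radius r
    return out

# A sharp vertical edge; the bottom half of the depth map is 'far away',
# so only that half of the edge should soften.
img = np.zeros((32, 32)); img[:, 16:] = 1.0
depth = np.zeros((32, 32)); depth[16:, :] = 1.0
result = progressive_blur(img, depth, focus_depth=0.0)
```

The in-focus top half keeps its hard edge while the ‘distant’ bottom half transitions smoothly, which is the key difference from a simple masked-background blur.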

Instant AF and zero shutter lag allowed me to nail this candid image the instant after my wife and child whirled around to face the camera. A relatively new autofocus technology we’re seeing on recent iPhones is ‘Dual PDAF’, where a 1×2 microlens is placed over a green-blue pixel pair in which the blue color filter has been replaced by a green one. This can offer some benefits over masked pixels, which sacrifice light and can affect image quality, and over dual pixel AF by not requiring as much deep trench isolation as split photodiodes require to prevent color cross-talk.

However, current implementations only utilize this modified microlens structure in 2 pixels out of an 8×8 pixel region, which means only 3% of the pixels are used for ‘Dual PDAF’ AF. That means less light and information available compared to the full-sensor Dual Pixel AF approach which, combined with the lack of the multi-frame noise reduction the Pixel 2 phones benefit from even for AF, meant more misfocus or shots captured after the decisive moment. Like every technology though, we expect generational improvements.

Portrait Lighting

While we’ve been praising the Pixel phones, Apple is leading smartphone photography in a number of ways. First and foremost: color accuracy. Apple displays are all calibrated and profiled to display accurate colors, so no matter which Apple or color-managed device (or print) you’re viewing them on, colors look the same. Android devices are still the Wild West in this regard, but Google is trying to solve this via a proper color management system (CMS) under the hood. It’ll be some time before all devices catch up, and even Google itself is struggling with its current display and CMS implementation.

But let’s talk about Portrait Lighting. Look at the iPhone X ‘Contour Lighting’ shot below, left, vs. what the natural lighting looked like at the right (shot on a Google Pixel 2 with no special lighting features). While the Pixel 2 image is more natural, the iPhone X image is arguably more interesting, as if I’d lit my subject with a light on the spot.

Apple iPhone X, ‘Contour Lighting’ Google Pixel 2

Apple builds a 3D map of a face using trained algorithms, then allows you to re-light your subject using modes such as ‘natural’, ‘studio’ and ‘contour’ lighting. The latter highlights points of the face like the nose, cheeks and chin that would’ve caught the light from an external light source aimed at the subject. This gives the image a dimensionality you could normally only achieve using external lighting solutions or a lot of post-processing.

Sure the photo on the left could be better, but this is the first iteration of the technology. It won’t be long before we see other phones and software packages taking advantage of—and improving on—these computational approaches.

HDR and wide-gamut photography

And then we have HDR. Not the HDR you’re used to thinking about – the kind that creates flat images from large dynamic range scenes. No, we’re talking about the ability of HDR displays – like bright, contrasty OLEDs – to display the wide range of tones and colors cameras can capture these days, rather than sacrificing global contrast just to increase and preserve local contrast, as traditional camera JPEGs do.

iPhone X is the first device ever to support the HDR display of HDR photos. That is: it can capture a wide dynamic range and color gamut but then also display them without clipping tones and colors on its class-leading OLED display, all in an effort to get closer to reproducing the range of tones and colors we see in the real world.

iPhone X is the first device ever to support HDR display of HDR photos

Have a look below at a Portrait Mode image I shot of my daughter that utilizes colors and luminances in the P3 color space. P3 is the color space Hollywood is now using for most of its movies (similar to, though shifted from, Adobe RGB). You’ll only see the extra colors if you have a P3-capable display and a color-managed OS/browser (macOS + Google Chrome, or the newest iPads and iPhones). On a P3 display, switch between ‘P3’ and ‘sRGB’ to see the colors you’re missing with sRGB-only capture.

Or, on any display, hover over ‘Colors in P3 out-of-gamut of sRGB’ to see (in grey) what you’re missing with a sRGB-only capture/display workflow.

iPhone X Portrait Mode, image in P3 color space iPhone X Portrait mode, image in sRGB color space Colors in P3 out-of-gamut of sRGB highlighted in grey
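The size difference between the two gamuts can be quantified from the published CIE xy chromaticities of their primaries. A quick shoelace-area calculation (xy area is only a rough proxy for perceptual gamut size, so treat the number as indicative) shows P3’s triangle covers roughly a third more of the chromaticity diagram:

```python
def triangle_area(p1, p2, p3):
    """Shoelace area of a gamut triangle in the CIE xy chromaticity diagram."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# CIE xy chromaticities of the red, green and blue primaries
srgb = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
p3 = [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)]

ratio = triangle_area(*p3) / triangle_area(*srgb)  # ~1.36
```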

Apple is not only taking advantage of the extra colors of the P3 color space, it’s also encoding its images in the ‘High Efficiency Image Format’ (HEIF), an advanced format aimed at replacing JPEG. HEIF is more efficient, allows for 10-bit color encoding (more colors without banding), and supports HDR encoding so a larger range of tones can be shown on HDR displays.
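The banding benefit of 10-bit encoding comes down to simple arithmetic: two extra bits per channel quadruple the number of tonal steps available across the same brightness range. A quick check with a synthetic gradient:

```python
import numpy as np

# Quantize a smooth 0..1 gradient at two bit depths; fewer, coarser
# steps are what show up visually as banding in skies and other gradients.
gradient = np.linspace(0.0, 1.0, 100_000)

def quantize(signal, bits):
    levels = 2 ** bits - 1
    return np.round(signal * levels) / levels

steps_8 = len(np.unique(quantize(gradient, 8)))    # 256 distinct levels
steps_10 = len(np.unique(quantize(gradient, 10)))  # 1024 distinct levels
```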

But will smartphones replace traditional cameras?

For many, yes, absolutely. Autofocus speeds on the Pixel 2 are phenomenal, assisted by not only dual pixel AF but also laser AF. HDR+-like image stacking algorithms will only get better with time, averaging more frames or frames of various time intervals. The Huawei P20 can do exactly this, and the results are impressive. The P20 can also combine information from both color and higher-sensitivity monochrome sensors to yield impressive noise – and resolution – performance. Dual (or even triple) lens units give you the focal lengths of a camera body and two or more primes, and we’ve seen the ability to selectively blur backgrounds and isolate subjects like the pros do. Folded optics can give you far-reaching zoom.

Below is a shot from the Pixel 2 vs. a shot from a $4,000 full-frame body and 55mm F1.8 lens combo – which is which?

Full Frame or Pixel 2? Pixel 2 or Full Frame?

Yes, the trained—myself included—can pick out which is the smartphone image. But when is the smartphone image good enough?

Smartphone cameras are not only catching up with traditional cameras, they’re actually exceeding them in many ways. Take for example…

Creative control…

The image below exemplifies an interesting use of computational blur. The camera has chosen to keep much of the subject – like the front speaker cone, which has significant depth to it – in focus, while blurring the rest of the scene significantly. In fact, if you look at the upper right front of the speaker cabinet, you’ll see a good portion of it in focus. Past a certain point, the cabinet falls off abruptly, yet smoothly, into significant blur.

The camera and software have chosen to keep a significant depth-of-focus around the focus plane before significantly blurring objects far enough away from it. That’s the beauty of computational approaches: while F1.2 lenses can usually only keep one eye in focus – much less the nose or the ear – computational approaches allow you to choose how much you wish to keep in focus, even while blurring the rest of the scene to a degree where traditional optics wouldn’t allow much of your subject to remain sharp.

B&W speakers at sunrise. Take a look at the depth-of-focus vs. depth-of-field in this image. If you look closely, the entire speaker cone and a large front portion of the black cabinet is in focus. There is then a sudden, yet gradual blur to very shallow depth-of-field. That’s the beauty of computational approaches: one can choose extended (say, F5.6 equivalent) depth-of-focus near the focus plane, but then gradually transition to far shallower – say F2.0 – depth-of-field outside of the focus plane. This allows one to keep much of the subject in focus, but achieve the subject isolation of a much faster lens.
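That behavior can be summarized as a blur-radius profile with a flat in-focus zone around the focal plane and a ramp outside it – unlike a physical lens, whose blur grows continuously from the exact plane of focus. The zone width and maximum radius below are made-up illustrative parameters, not anything a phone maker has published:

```python
def blur_radius(depth, focus, in_focus_zone=0.2, max_radius=10.0):
    """Piecewise blur profile: zero blur within a chosen zone around the
    focal plane (the extended 'depth-of-focus'), then a linear ramp out
    to the maximum blur (the shallow 'depth-of-field' look)."""
    d = abs(depth - focus)
    if d <= in_focus_zone:
        return 0.0
    return min(max_radius, (d - in_focus_zone) * max_radius / (1.0 - in_focus_zone))

# Everything within 0.2 depth units of the focal plane stays tack sharp;
# beyond that, blur ramps toward the look of a much faster lens.
sharp = blur_radius(0.15, 0.0)   # inside the in-focus zone
soft = blur_radius(0.5, 0.0)     # partway up the ramp
```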

Surprise and delight…

Digital assistants. Love them or hate them, they will be a part of your future, and they’re another way in which smartphone photography augments and exceeds traditional photography approaches. My smartphone is always on me, and when I have my full-frame Sony a7R III with me, I often transfer JPEGs from it to my smartphone. Those images (and 720p video proxies) automatically upload to my Google Photos account. From there, any image or video that has my or my daughter’s face in it automatically gets shared with my wife without me so much as lifting a finger.

Better yet? Often I get a notification that Google Assistant has pulled a cute animated GIF it thinks is interesting from one of my movies. And more often than not, the animations are adorable:

Splash splash! in Xcaret, Quintana Roo, Mexico. Animated GIF auto-generated from a movie shot on the Pixel 2.

Machine learning allowed Google Assistant to automatically guess that this clip from a much longer video was an interesting moment I might wish to revisit and preserve. And it was right. Just as it was right in picking the moment below, where my daughter is clapping in response to her cousin clapping at successfully feeding her… after which my wife claps as well.

Claps all around!

Google Assistant is impressive in its ability to pick out meaningful moments from photos and videos. Apple takes a similar approach in compiling ‘Memories’.

But animated GIFs aren’t the only way Google Assistant helps me curate and find the important moments in my life. It also auto-curates videos that pull together photos and clips from my videos—be it from my smartphone or media I’ve imported from my camera—into emotionally moving ‘Auto Awesome’ compilations:

At any time I can hand-select the photos and videos I want in a compilation – down to the portions of each video – using an editing interface far simpler than Final Cut Pro or Adobe Premiere. I can even edit the auto-compilations Google Assistant generates, choosing my favorite photos, clips and music. And did you notice that the video clips and photos are cut to the beat of the music?

This is a perfect example of where smartphone photography exceeds traditional cameras, especially for us time-starved souls who hardly have the time to download our assets to a hard drive (not to mention back up said assets). And it’s a reminder that traditional cameras that don’t play well with automated services like Google and Apple Photos will be left behind by simpler services that surprise and delight the majority of us.

The future is bright

This is just the beginning. The computational approaches Apple, Google, Samsung and many others are taking are revolutionizing what we can expect from devices we have in our pockets, devices we always have on us.

Are they going to defy physics and replace traditional cameras tomorrow? Not necessarily, not yet, but for many purposes and people, they will offer pros that are well-worth the cons. In some cases they offer more than we’ve come to expect of traditional cameras, which will have to continue to innovate—perhaps taking advantage of the very computational techniques smartphones and other innovative computational devices are leveraging—to stay ahead of the curve.

But as techniques like HDR+ and Portrait Mode and Portrait Lighting have shown us, we can’t just look at past technologies to predict what’s to come. Computational photography will make things you’ve never imagined a reality. And that’s incredibly exciting.

If you’d rather digest this article in video form, watch my segment on the TWiT Network (named after its flagship show, This Week in Tech) show ‘The New Screen Savers’ below. And don’t forget to catch our recent smartphone galleries after the video.


Appendix: Studio Scene

We’ve added the Google Pixel 2 and Apple iPhone X to our studio scene widget. You can compare the Daylight and Low Light scenes below to any camera of your choosing, keeping in mind that we shot the smartphones in their default camera apps without controlling exposure to see how they would perform in these light levels (10 and 3 EV, respectively, for Daylight and Low Light).


Note that we introduced some motion into the Low Light scene to simulate what the iPhone does when there’s movement in the scene. Hence, the ISO 640, 1/30s iPhone X image is more reflective of low light image quality for scenes that can’t be shot at the 1/4s shutter speed (ISO 125) the iPhone X will tend to drop to for completely static (tripod-based) low light scenes.

The Pixel 2 rarely drops to shutter speeds slower than 1/30s in low light, yet impressively almost matches the performance of a 1″-type sensor at these shutter speeds in low light (though the ‘i’ tab shows the RX100 shot at 1/6s F4, you’d get an equivalent exposure at 1/30s were you to shoot the Sony at F1.8 like the Pixel 2).
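The equivalence claimed in that parenthetical checks out with the standard exposure-value formula, EV = log2(N²/t) at a fixed ISO (the function below is our own illustration, not DPReview tooling): the two settings pass nearly identical light per unit area.

```python
import math

def exposure_value(f_number, shutter_s):
    """EV at a fixed ISO: EV = log2(N^2 / t). Equal EV means equal
    light per unit sensor area reaches the sensor."""
    return math.log2(f_number ** 2 / shutter_s)

ev_as_shot = exposure_value(4.0, 1 / 6)   # RX100 as shot: F4, 1/6s
ev_fast = exposure_value(1.8, 1 / 30)     # same camera at F1.8, 1/30s
diff_stops = ev_fast - ev_as_shot         # ~0.02 stops: effectively equal
```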

Articles: Digital Photography Review (dpreview.com)

 

Meizu unveils the 15 Plus smartphone with stabilized tele-camera

24 Apr

Chinese smartphone manufacturer Meizu has launched a new high-end model, the Meizu 15 Plus. And based on specs alone, the 15 Plus could be well-worth a closer look for mobile photographers who are open to the idea of buying from a less established brand.

In the camera department, a 12MP 1/2.3″ main sensor is combined with a 20MP secondary camera that features a 2x zoom factor. On the main camera, light is captured through a F1.8 lens while the tele-lens has to make do with a slower F2.8 aperture. Both lenses are equipped with optical image stabilization, though.

As with most similar systems, the optical zoom is enhanced with computational methods: Meizu promises a 3x “lossless” zoom, and the cameras feature multi-frame noise reduction and HDR as well. To view and edit those images, the phone is equipped with a 5.95-inch AMOLED screen with QHD resolution and a notch-less 16:9 aspect ratio.

In the processing department, the Meizu deploys the Exynos 8895 chipset from last year’s Samsung flagship models, and users can choose between 64 or 128GB storage—unfortunately, there is no expansion slot. All components are housed in a body made of a stainless steel aluminum composite material.

The Meizu 15 Plus costs CNY 3,000 (approximately US$475) for the 64GB version and CNY 3,300 (approximately US$525) for the 128GB model. This sounds like a very decent deal for a tele-camera equipped device with high-end specs, but unfortunately, no pricing information for markets outside China has been provided as of yet.


Video: $7000 superzoom lens + DSLR compared to $35 smartphone lens

13 Apr

NYC-based filmmaker Casey Neistat recently compared a $35 clip-on smartphone lens with a $7,000 DSLR (the Canon EOS-1D X Mark II to be precise) and superzoom lens. The results, to no one’s surprise, were not in the smartphone lens’ favor. However, Neistat expresses surprise at the (admittedly very minor) capabilities of the cheap lens, saying, “So, I very gently, very reluctantly, recommend this total piece of sh*t $35 lens because it sort of almost works.”

That recommendation is given to potential buyers who need something to use with a smartphone. If a more capable lens and camera are within budget, the resulting content will benefit greatly from them, as the comparison screenshot below pretty clearly demonstrates:

Via: PetaPixel


NAB 2018: This adapter lets you attach huge cinema lenses to your smartphone

12 Apr

A small Shenzhen-based company called Cinematics International Company Ltd. recently caught the eye of No Film School at NAB 2018, and in a second you’ll understand why. The company is showcasing a smartphone DOF lens adapter that enables full-size lenses to be used with an iPhone or Android handset.

Unfortunately, many key details about the adapter – including the product’s name – aren’t provided, but the company representative said Cinematics’ adapter supports just about any lens the user may want to attach to their phone. The product also features a pair of metal handles and what looks like a viewfinder.

When asked whether the handles on the adapter are sufficient to support such a large lens, Cinematics’ rep indicated the company has an additional hardware solution for that, one not shown in the video.

It isn’t clear whether Cinematics International Company Ltd. has any immediate plans to sell the adapter – although why would the company bring it to NAB if it didn’t? – but assuming it does, the product will probably appear first in Cinematics’ eBay store.


Moment launches anamorphic lens and other gear for smartphone filmmakers

30 Mar

Many photographers consider Moment lenses to be the best quality smartphone accessory lenses out there. Now the company returns to crowdfunding platform Kickstarter with a new product line, this time for video shooters and filmmakers rather than stills photographers.

The headline product is an anamorphic add-on lens which allows you to achieve the same super-wide look and lens flare we are used to from Hollywood movies. The lens alters the field of view of the built-in camera on your iPhone, Samsung Galaxy or Google Pixel device and locks onto a specific lens case.

The lens currently works with a range of video-specific apps, including Filmic Pro, but Moment still has to update its own app in order to achieve an undistorted live preview of the footage captured with the lens.

The lens is not the only new accessory for filmmakers, however. There is also a battery case for the iPhone X which can charge via the Lightning port or Qi wireless charging and comes with a dedicated camera button. Of course, it’s designed to attach Moment lenses as well.

Finally, Moment also debuted a counterweight for gimbals like the DJI Osmo Mobile 2, and a lens filter mount that can be used to attach ND and other filters to any of Moment’s lenses.

All the new items can be ‘pre-ordered’ on Kickstarter, where the campaign has quickly reached astronomical levels of funding – with 21 days to go, Moment has raised over $800,000 against a goal of just $50,000. All this funding despite each item being pretty low-cost: the filter holder is yours for a $29 pledge, the battery case will set you back $79, and the lens is priced at $119. Shipping is scheduled for June.

To learn more about these products or pick one up for yourself, head over to the Kickstarter page.


The NiSi Prosories P1 Kit lets you attach square filters to your smartphone

30 Mar


Chinese accessory company NiSi has started taking preorders for its new Prosories P1 Smartphone Filter Kit, a camera filter system for smartphones. The P1 Kit includes a phone clip, medium graduated neutral density filter, polarizer, pouch, and holder. As demonstrated in the video below, the system involves attaching a mount over the phone’s camera, then sliding a square filter into that mount.

The P1 Kit’s filters are made from optical glass with a nano-coating, according to NiSi, which says its clip is compatible with most smartphone models. Users can rotate the filter within the mount to adjust its angle, and also use a polarizer with the filter when necessary.

The company doesn’t provide the P1 Kit’s filter size, making it unclear whether any of its other filter products are compatible with the mount.

The NiSi Prosories P1 Kit is available now for $40. To learn more or order yours, head over to the NiSi website.


Vivo V9 smartphone packs a 24MP front-facing camera and AI selfie software

27 Mar

Chinese phone manufacturer Vivo has launched a new smartphone called the V9. This mid-tier model sports a design clearly inspired by the iPhone X, as well as one other very notable feature: a 24MP F2.0 front-facing camera. Whereas many smartphones still feature a low-resolution front camera, Vivo elected to put its higher-resolution camera on the front and pair it with its Face Access 2.0 security feature and AI-based Face Beauty selfie software.

As with previous Vivo models (and in case the front-facing camera resolution isn’t evidence enough), the V9 focuses on high-end selfies as a selling point. In this case, Vivo offers a feature called AI Face Beauty that is said to use machine learning to determine things about the person featured in the selfie, such as age and skin tone. That feature will ensure selfies “truly represent” the user’s beauty, according to The Verge.

The user will also have access to AR Stickers and will be able to unlock the phone using the front camera with Vivo’s Face Access feature.

On the back are dual 16MP + 5MP cameras, and inside there is a Qualcomm Snapdragon 626 SoC, 4GB of RAM, and 64GB of storage. Finally, you’ll use the phone through its 6.3-inch 2280 x 1080 19:9 FullView 2 display, complete with the much-maligned iPhone X-like notch.

The phone has launched in India, where it is priced at Rs 22,990 (approximately $355 / €284). Availability and cost in other markets aren’t clear at this time.


DPReview on TWiT: tech trends in smartphone cameras

20 Feb

As part of our regular appearances on the TWiT Network (named after its flagship show, This Week in Tech) show ‘The New Screen Savers’, our Science Editor Rishi Sanyal joined host Leo Laporte and co-host Megan Morrone to talk about how smartphone cameras are revolutionizing photography. Watch the segment above, then catch the full episode here.

Rishi has also expounded upon some of the topics covered in the segment below, with detailed examples that clarify some of the points covered. Have a read after the fold once you’ve watched the segment.

You can watch The New Screen Savers live every Saturday at 3pm Pacific Time (23:00 UTC), on demand through our articles, the TWiT website, or YouTube, as well as through most podcasting apps.


So who wins? iPhone X or Pixel 2?

Not so fast. Neither.

Each has its strengths, which we hope to tell you about in our video segment above and in our examples below. Google and Apple take different approaches, and each has its pros and cons, but there are common overlapping practices and themes as well. And that’s before we begin discussing video, where the iPhone’s 4K/60p HEVC video borders on professional quality while Google’s stabilization may make you want to chuck your gimbal.

Smartphones have to deal with the fact that their cameras, and therefore sensors, are tiny. And since we all (now) know that, generally speaking, it’s the amount of light you capture that determines image quality, smartphones have a serious disadvantage to deal with: they don’t capture enough light. But that’s where computational photography comes in. By combining machine learning, computer vision, and computer graphics with traditional optical processes, computational photography aims to enhance what is achievable with traditional methods.

Intelligent exposure and processing? Press. Here.

One of the defining characteristics of smartphone photography is the idea that you can get a great image with one button press, and nothing more. No exposure decision, no tapping on the screen to set your exposure, no exposure compensation, and no post-processing. Just take a look at what the Google Pixel 2 XL did with this huge dynamic range sunrise at Banff National Park in Canada:

Sunrise at Banff, with Mt. Rundle in the background. Shot on Pixel 2 with one button press. I also shot this with my Sony a7R II full-frame camera, but that required a 4-stop reverse graduated neutral density (‘Daryl Benson’) filter, and a dynamic range compensation mode (DRO Lv5) to get a usable image. While the resulting image from the Sony was head-and-shoulders above this one at 100%, I got this image from the Pixel 2 by just pointing and shooting.

Apple’s iPhones try to achieve similar results by combining multiple exposures if the scene has enough contrast to warrant it. But iPhones can’t achieve these results (yet) since they don’t average as many ‘samples’ as the Google Pixel 2. Sometimes Apple’s longer exposures can blur subjects, and iPhones tend to overexpose and blow highlights for the sake of exposing the subject properly. Apple is also still fairly conservative about when ‘Auto HDR’ actually engages HDR processing.

The Pixel 2 was able to achieve the image above by first determining the correct focal plane exposure required to not blow large bright (non-specular) areas (an approach known as ETTR or ‘expose-to-the-right’). When you press the shutter button, the Pixel 2 goes back in time 9 frames, aligning and averaging them to give you a final image with quality similar to what you might expect from a sensor with 9x as much surface area.

How does it do that? It’s constantly keeping the last 9 frames it shot in memory, so when you press the shutter it can grab them, break each into many square ’tiles’, align them all, and then average them. Breaking each image into small tiles allows for alignment despite photographer or subject movement by ignoring moving elements, discarding blurred elements in some shots, or re-aligning subjects that have moved from frame to frame. Averaging simulates the effects of shooting with a larger sensor by ‘evening out’ noise.

That’s what allows the Pixel 2 to capture such a wide dynamic range scene: expose for the bright regions, while reducing noise in static elements of the scene by image averaging, while not blurring moving (water) elements of the scene by making intelligent decisions about what to do with elements that shift from frame to frame. Sure, moving elements have more noise to them (since they couldn’t have as many of the 9 frames dedicated to them for averaging), but overall, do you see anything but a pleasing image?

Autofocus

Who focuses better? Google Pixel 2, hands down. Its dual pixel AF uses nearly the entire sensor for autofocus (binning the high-resolution sensor into a low-resolution mode to decrease noise), while also using HDR+ and its 9-frame image averaging to further decrease noise and have a usable signal to make AF calculations from.

Google Pixel 2 can focus lightning fast even in indoor artificial light, which allowed me to snap this candid before it was over in a split second. The iPhone X captured a far less interesting moment seconds later when it finally achieved focus, missing the candid moment.

And despite the fact that the left and right perspectives the split pixels in the Pixel 2 sensor ‘see’ have less than 1mm of stereo disparity, an impressive depth map can be built, rendering an optically accurate lens blur. This isn’t just a matter of masking the foreground and blurring the background; it’s an actual progressive blur based on depth.
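Once a depth map exists, a progressive (rather than binary mask-based) blur is conceptually straightforward to render. Here’s a hedged sketch: the `synthetic_bokeh` function and its banding scheme are invented for illustration, not how Google actually composites its lens blur.

```python
import numpy as np

def box_blur(img, radius):
    """Separable box blur with edge padding; radius 0 is a no-op."""
    if radius == 0:
        return img.astype(np.float32)
    k = 2 * radius + 1
    h, w = img.shape
    pad = np.pad(img.astype(np.float32), radius, mode='edge')
    tmp = np.zeros((h, w + 2 * radius), dtype=np.float32)
    for i in range(k):                 # vertical pass
        tmp += pad[i:i + h, :]
    out = np.zeros((h, w), dtype=np.float32)
    for i in range(k):                 # horizontal pass
        out += tmp[:, i:i + w]
    return out / (k * k)

def synthetic_bokeh(img, depth, focus_depth, dof=0.2, max_radius=4):
    """Progressive, depth-dependent blur: pixels within `dof` of the focal
    plane stay sharp; beyond that, blur grows with distance from the plane.
    The banding scheme here is invented purely for illustration."""
    imgf = img.astype(np.float32)
    out = imgf.copy()
    dist = np.abs(depth - focus_depth)
    for r in range(1, max_radius + 1):
        blurred = box_blur(imgf, r)
        lo = dof + (r - 1) * (1.0 - dof) / max_radius   # threshold for this band
        out[dist >= lo] = blurred[dist >= lo]
    return out
```

The key point: blur strength is a function of the depth value at each pixel, so the falloff is gradual, just as it would be with real optics.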

That’s what allowed me to nail this candid image the instant after my wife and child whirled around to face the camera. Nearly all my iPhone X images of this scene were either out-of-focus or captured a less interesting, non-candid moment because of the shutter lag required to focus. The iPhone X only uses approximately 3% of its pixels for its ‘Dual PDAF’ autofocus, as opposed to the Pixel 2’s use of its entire sensor combined with multi-frame noise reduction, not just for image capture but also for focus.
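The principle behind phase-detect AF from split pixels can be illustrated in one dimension: estimate the offset between the ‘left’ and ‘right’ views, and that offset tells you which direction, and how far, the lens is out of focus. A toy sketch, not Google’s implementation:

```python
import numpy as np

def pdaf_disparity(left, right, max_shift=8):
    """Estimate the offset between 'left' and 'right' sub-pixel signals by
    minimizing the sum of absolute differences over candidate shifts. The
    sign and magnitude of the winning shift tell an AF system which way,
    and how far, to drive the lens. A 1-D toy sketch."""
    n = len(left)
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        lo, hi = max(0, s), min(n, n + s)
        # compare left shifted by s against right over the overlapping region
        err = np.abs(left[lo - s:hi - s] - right[lo:hi]).mean()
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift
```

This is also where multi-frame noise reduction pays off for focus: averaging the burst first gives this correlation a cleaner signal to work with.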

Portrait Lighting

While we’ve been praising the Pixel phones, Apple is leading smartphone photography in a number of ways. First and foremost: color accuracy. Apple displays are all calibrated and profiled to display accurate colors, so no matter which Apple or color-managed device (or print) you view an image on, colors look the same. Android devices are still the Wild West in this regard, though Google is trying to solve this with a proper color management system (CMS) under the hood. It’ll be some time before all devices catch up, and even Google itself is struggling with its current display and CMS implementation.

But let’s talk about Portrait Lighting. Look at the iPhone X ‘Contour Lighting’ shot below, left, vs. what the natural lighting looked like at right (shot on a Google Pixel 2 with no special lighting features). While the Pixel 2 image is more natural, the iPhone X image is far more interesting, as if I’d lit my subject with a dedicated light right there on the spot.

Apple iPhone X, ‘Contour Lighting’ Google Pixel 2

Apple builds a 3D map of a face using trained algorithms, then allows you to re-light your subject using modes such as ‘natural’, ‘studio’ and ‘contour’ lighting. The latter highlights points of the face like the nose, cheeks and chin that would’ve caught the light from an external light source aimed at the subject. This gives the image a dimensionality you could normally only achieve using external lighting solutions or a lot of post-processing.

Currently, the Pixel 2 has no such feature, so we get the flat lighting the scene actually had on the right. But, as you can imagine, it won’t be long before we see other phones and software packages taking advantage of—and even improving on—these computational approaches.

HDR and wide-gamut photography

And then we have HDR. Not the HDR you’re used to thinking about, which creates flat images from high dynamic range scenes. No, we’re talking about the ability of HDR displays (bright, contrasty OLEDs, for example) to display the wide range of tones and colors cameras can capture these days, rather than sacrificing global contrast just to increase and preserve local contrast, as traditional camera JPEGs do.

iPhone X is the first device ever to support the HDR display of HDR photos. That is: it can capture a wide dynamic range and color gamut but then also display them without clipping tones and colors on its class-leading OLED display, all in an effort to get closer to reproducing the range of tones and colors we see in the real world.

iPhone X is the first device ever to support HDR display of HDR photos

Have a look below at a Portrait Mode image I shot of my daughter that utilizes colors and luminances in the P3 color space. P3 is the color space Hollywood now uses for most of its movies (it’s similar to, though shifted relative to, Adobe RGB). You’ll only see the extra colors if you have a P3-capable display and a color-managed OS/browser (macOS + Google Chrome, or the newest iPads and iPhones). On a P3 display, switch between ‘P3’ and ‘sRGB’ to see the colors you’re missing with sRGB-only capture.

Or, on any display, hover over ‘Colors in P3 out-of-gamut of sRGB’ to see (in grey) what you’re missing with a sRGB-only capture/display workflow.

iPhone X Portrait Mode, image in P3 color space iPhone X Portrait mode, image in sRGB color space Colors in P3 out-of-gamut of sRGB highlighted in grey
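If you’re curious how software decides that a color is ‘out of gamut’, here’s a minimal sketch using the standard published D65 RGB-to-XYZ matrices for sRGB and Display P3 (gamma encoding is ignored; we operate on linear values):

```python
import numpy as np

# Linear RGB -> XYZ matrices (D65 white point), standard published values.
M_SRGB = np.array([[0.4124, 0.3576, 0.1805],
                   [0.2126, 0.7152, 0.0722],
                   [0.0193, 0.1192, 0.9505]])
M_P3 = np.array([[0.4866, 0.2657, 0.1982],
                 [0.2290, 0.6917, 0.0793],
                 [0.0000, 0.0451, 1.0439]])

def p3_to_srgb_linear(rgb):
    """Convert a linear Display P3 color to linear sRGB by way of XYZ."""
    return np.linalg.inv(M_SRGB) @ (M_P3 @ np.asarray(rgb, dtype=float))

def out_of_srgb_gamut(p3_rgb, eps=1e-4):
    """True if a linear P3 color falls outside what sRGB can represent,
    i.e. any converted component lands below 0 or above 1."""
    srgb = p3_to_srgb_linear(p3_rgb)
    return bool(np.any(srgb < -eps) or np.any(srgb > 1 + eps))
```

A fully saturated P3 red converts to an sRGB red channel greater than 1.0, which is exactly the color an sRGB-only workflow would clip; that clipped region is what the grey overlay in the comparison above visualizes.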

Apple is not only taking advantage of the extra colors of the P3 color space, it’s also encoding its images in the High Efficiency Image Format (HEIF), a modern format intended to replace JPEG. HEIF is more efficient, allows for 10-bit color encoding (more colors without banding), and supports HDR encoding so a larger range of tones can be shown on HDR displays.
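The banding benefit of 10-bit encoding is easy to demonstrate: quantize a subtle shadow gradient at both bit depths and count how many distinct levels survive. A small illustrative sketch:

```python
import numpy as np

def quantize(signal, bits):
    """Quantize a [0, 1] signal to the given bit depth and back."""
    levels = 2 ** bits - 1
    return np.round(signal * levels) / levels

# A subtle near-black gradient: exactly where banding is most visible.
ramp = np.linspace(0.0, 0.1, 1000)
steps_8bit = len(np.unique(quantize(ramp, 8)))    # 27 distinct levels
steps_10bit = len(np.unique(quantize(ramp, 10)))  # 103 distinct levels, ~4x finer
```

Four times as many steps across the same tonal range means the transitions between them are far less likely to show up as visible bands.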

But will smartphones replace traditional cameras?

For many, yes, absolutely. You’ve seen the autofocus speeds of the Pixel 2, assisted by not only dual pixel AF but also laser AF. You’ve seen the results of HDR+ image stacking, which will only get better with time. We’ve seen dual lens units that give you the focal lengths of a camera body and two primes, and we’ve seen the ability to selectively blur backgrounds and isolate subjects like the pros do.

Below is a shot from the Pixel 2 vs. a shot from a $4,000 full-frame body and 55mm F1.8 lens combo. Which is which?

Full Frame or Pixel 2? Pixel 2 or Full Frame?

Yes, the trained eye (mine included) can pick out which is the smartphone image. But when is the smartphone image good enough?

Smartphone cameras are not only catching up with traditional cameras, they’re actually exceeding them in many ways. Take for example…

Creative control…

The image below exemplifies an interesting use of computational blur. The camera has chosen to keep much of the subject—like the front speaker cone, which has significant depth to it—in focus, while blurring the rest of the scene significantly. In fact, if you look at the upper right front of the speaker cabinet, you’ll see a good portion of it in focus. After a certain point, the cabinet suddenly-yet-gradually blurs significantly.

The camera and software have chosen to keep a significant depth-of-focus around the focal plane, and only significantly blur objects far enough away from it. That’s the beauty of computational approaches: while F1.2 lenses can usually keep only one eye in focus (much less the nose or the ear), computational approaches let you choose how much you wish to keep in focus, while still blurring the rest of the scene to a degree where traditional optics wouldn’t allow much of your subject to remain sharp.

B&W speakers at sunrise. Take a look at the depth-of-focus vs. depth-of-field in this image. If you look closely, the entire speaker cone and a large front portion of the black cabinet are in focus. There is then a sudden, yet gradual blur to very shallow depth-of-field. That’s the beauty of computational approaches: one can choose extended (say, F5.6 equivalent) depth-of-focus near the focus plane, but then gradually transition to far shallower – say F2.0 – depth-of-field outside of the focus plane. This allows one to keep much of the subject in focus, but achieve the subject isolation of a much faster lens.
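The depth-to-blur profile described above (a flat, sharp zone around the focal plane, then a ramp to heavy blur) can be written as a tiny function. All constants here are illustrative, not anything a shipping camera uses:

```python
def blur_radius(depth, focus, dof=0.5, ramp=4.0, max_blur=8.0):
    """Blur radius (in pixels) as a function of scene depth, arbitrary units.
    Inside the +/- dof band around the focal plane everything stays sharp,
    like a lens stopped down to F5.6; beyond it, blur ramps quickly toward
    max_blur, like a fast F2.0 lens. Constants are illustrative."""
    d = abs(depth - focus)
    if d <= dof:
        return 0.0                       # extended depth-of-focus: stay sharp
    return min(max_blur, (d - dof) * ramp)  # then ramp up, capped at max_blur
```

No physical lens can produce this profile: real defocus grows continuously away from the focal plane, with no flat sharp zone followed by a fast falloff.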

Surprise and delight…

Digital assistants. Love them or hate them, they will be a part of your future, and they’re another way in which smartphone photography augments and exceeds traditional photography approaches. My smartphone is always on me, and when I have my full-frame Sony a7R III with me, I often transfer JPEGs from it to my smartphone. Those images (and 720p video proxies) automatically upload to my Google Photos account. From there any image or video that has my or my daughter’s face in it automatically gets shared with my wife without my so much as lifting a finger.

Better yet? Often I get a notification that Google Assistant has pulled a cute animated GIF from my movie it thinks is interesting. And more often than not, the animations are adorable:

Splash splash! in Xcaret, Quintana Roo, Mexico. Animated GIF auto-generated from a movie shot on the Pixel 2.

Machine learning allowed Google Assistant to automatically guess that this clip from a much longer video was an interesting moment I might wish to revisit and preserve. And it was right. Just as it was right in picking the moment below, where my daughter is clapping in response to her cousin clapping at successfully feeding her… after which my wife claps as well.

Claps all around!

Google Assistant is impressive in its ability to pick out meaningful moments from photos and videos. Apple takes a similar approach in compiling ‘Memories’.

But animated GIFs aren’t the only way Google Assistant helps me curate and find the important moments in my life. It also auto-curates videos that pull together photos and clips from my videos—be it from my smartphone or media I’ve imported from my camera—into emotionally moving ‘Auto Awesome’ compilations:

At any time I can hand-select the photos and videos, down to the portions of each video, I want in a compilation—using an editing interface far simpler than Final Cut Pro or Adobe Premiere. I can even edit the auto-compilations Google Assistant generates, choosing my favorite photos, clips and music. And did you notice that the video clips and photos are cut down to the beat in the music?

This is a perfect example of where smartphone photography exceeds traditional cameras, especially for us time-starved souls who hardly have the time to download our assets to a hard drive (never mind back up said assets). And it’s a reminder that traditional cameras that don’t play well with automated services like Google and Apple Photos risk being left behind by simpler devices that surprise and delight the majority of us.

The future is bright

This is just the beginning. The computational approaches Apple, Google, Samsung and many others are taking are revolutionizing what we can expect from devices we have in our pockets, devices we always have on us.

Are they going to defy physics and replace traditional cameras tomorrow? Not necessarily, not yet, but for many purposes and people, they will offer pros that are well worth the cons. In some cases they offer more than we’ve come to expect of traditional cameras, which will have to continue to innovate (perhaps by taking advantage of the very computational techniques smartphones and other innovative computational devices are leveraging) to stay ahead of the curve.

But as techniques like HDR+ and Portrait Mode and Portrait Lighting have shown us, we can’t just look at past technologies to predict what’s to come. Computational photography will make things you’ve never imagined a reality. And that’s incredibly exciting.

Hungry for more? We’ve updated our standard studio scene to allow you to compare the Pixel 2 and iPhone X against each other and other cameras in Daylight and Low Light, as well as updated our galleries. Follow the links below:

Articles: Digital Photography Review (dpreview.com)

 

Noa N7 smartphone captures 80MP images with ‘high-resolution mode’

15 Feb

Lesser-known Croatian brand Noa might not be the first manufacturer that springs to mind when you think about mobile photography, but the company will be launching a new mid-range device with some very interesting imaging features at the Mobile World Congress in Barcelona at the end of this month.

The Noa N7 comes with a dual-camera setup that features two 16MP Sony IMX298 1/2.8″ image sensors. At this point, there’s no further detail on how the two cameras play together, but we would assume there will be a shallow depth-of-field simulation mode and some kind of computational merging for better detail and reduced noise.

What the camera will definitely feature, however, is an 80MP high-resolution mode, presumably using image data from both lenses in combination with a pixel-shift technology. Looking at the demo video below, it seems the mode will require a tripod, but that’s still an attractive option for landscape or architectural photographers who require maximum detail.
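Noa hasn’t detailed how its high-resolution mode works, but the basic pixel-shift idea is simple to sketch: several low-resolution frames, each offset by a known sub-pixel shift, are interleaved onto a finer grid. This toy version assumes perfectly known shifts that are exact multiples of a fraction of a pixel, which real hardware can only approximate:

```python
import numpy as np

def pixelshift_upscale(frames, shifts, scale=2):
    """Interleave low-res frames, each captured at a known sub-pixel offset,
    onto a grid `scale` times finer. Purely illustrative: assumes the shifts
    are exact multiples of 1/scale pixel, which hardware only approximates."""
    h, w = frames[0].shape
    hi = np.zeros((h * scale, w * scale), dtype=np.float32)
    count = np.zeros_like(hi)
    for frame, (dy, dx) in zip(frames, shifts):
        oy, ox = int(round(dy * scale)), int(round(dx * scale))
        hi[oy::scale, ox::scale] += frame      # drop samples into their slots
        count[oy::scale, ox::scale] += 1
    count[count == 0] = 1                      # avoid dividing empty slots
    return hi / count
```

The tripod requirement hinted at in the demo video makes sense here: any camera motion between frames would corrupt the assumed shifts and smear the fine grid.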

Main camera aside, the phone will offer a ceramic casing, Face-ID unlocking via the front camera, DTS stereo sound and an octa-core MediaTek MT6750 chipset. Images can be framed and viewed on a 5.7-inch display with 18:9 aspect ratio, and HD+ resolution.

If the 80MP mode has sparked your interest, the Noa N7 might be worth a closer look. Fortunately, the high pixel count won’t come with an expensive price tag: Noa says the N7 will retail for about 250 EUR (roughly $310 US) in Europe. We are looking forward to testing the high-resolution mode at MWC, so stay tuned!

Press Release

Koprivnica, 15th of February 2018

NOA will focus on its latest smartphone at the Mobile World Congress in Barcelona: the NOA N7, with a 5.7″ HD+ screen with an 18:9 aspect ratio. A ceramic case, improved photography, Face ID and Face Beauty functionality, along with an affordable price, are the key selling points of this new model.

The first two things you’ll notice about this model are its design and the wonderful ‘royal blue’ color of the ceramic casing. This smartphone’s subtle elegance will certainly be noticed by everyone around you.

What makes this phone especially noticeable is its photo detail and quality, something you’ll experience when you zoom into a photo and notice the perfectly rendered details. The N7 will feature two 16MP rear Sony cameras with the IMX298 sensor, which enable the creation of photographs of up to 80MP using oversampling technology.

The front 16MP ‘selfie’ camera will support ‘Face ID’ and ‘Face Beauty’ functionality. This means you’ll be able to unlock your smartphone by scanning your face, which adds extra functionality in this price range and rounds out the feature list with attractive and novel technologies. Your selfies will look sharper, more detailed and better overall. Thanks to the ‘Face Beauty’ feature, you’ll always look your best in photos, whether you’re taking them in the morning or evening.

Users who like to listen to music will also enjoy themselves with NOA N7, thanks to the world famous DTS sound technology. DTS Sound is an all-in-one audio solution that offers improved stereo sound quality, internal speaker optimization, and creates a panoramic audio experience while using earbuds.

NOA N7 is based on the octa-core MediaTek MT6750 processor with a 1.5 GHz frequency and a 5.7″ screen. The screen resolution, complete with HD+ technology, is 1440×720 pixels at an 18:9 aspect ratio. NOA N7 will have 4 GB of RAM and 64 GB of storage, expandable to 128 GB with the help of an SD card. NOA N7 comes with a 3,300 mAh battery and will use the latest Android 8.0 as its operating system.

NOA N7 smartphone will be in the price bracket of up to 250 EUR.


LG will introduce a powerful new camera AI for the V30 smartphone at MWC

15 Feb

LG won’t unveil the successor to the G6 at MWC in the final week of February; instead, the new device will be launched at a stand-alone event at a later point in time. However, there will still be some interesting news for mobile photographers from LG at the trade show. Namely, the Korean manufacturer will introduce a suite of AI technologies for its smartphones, with the 2018 version of the flagship V30 being the first device to feature the new tech.

LG’s objective for the new system was to deliver a “unique and more intuitive user experience”, focusing on the camera technology and voice recognition.

“As we communicated last month at CES, the future for LG lies in AI, not just hardware specs and processing speeds,” said Ha Jeung-uk, senior vice president and business unit leader for LG Electronics Mobile Communications Company. “Creating smarter smartphones will be our focus going forward and we are confident that consumers will appreciate the advanced user experience with the enhanced V30 that many have been asking and waiting for.”

The Vision AI component puts the focus on camera usability and performance. It automatically analyzes objects and recommends the best-suited of the following shooting modes:

  • Portrait
  • Food
  • Pet
  • Landscape
  • City
  • Flower
  • Sunrise
  • Sunset

In addition to any detected objects in the frame, the system takes into account the angle of view, color, reflections, backlighting, and saturation levels in order to pick the best mode. For example, framing a plate of pasta will result in food mode being triggered. The final image result will show warmer colors and increased levels of sharpening for a pleasing visual presentation of your lunch.
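Conceptually, the final step of such a system is a mapping from classifier output to camera settings. The sketch below is entirely hypothetical: the mode names follow LG’s published list, but the confidence threshold and tuning values are invented for illustration.

```python
# Hypothetical mapping from a scene classifier's output to camera settings.
# Mode names follow LG's published list; the tuning values are invented.
MODE_PRESETS = {
    "food":      {"wb_shift_kelvin": 300, "sharpen": 1.4},  # warmer, crisper
    "portrait":  {"wb_shift_kelvin": 100, "sharpen": 0.8},
    "landscape": {"wb_shift_kelvin": 0,   "sharpen": 1.2},
    "sunset":    {"wb_shift_kelvin": 500, "sharpen": 1.0},
}

def recommend_mode(scores, threshold=0.6):
    """Pick the highest-scoring detected scene when the classifier is
    confident enough; otherwise fall back to a neutral 'auto' mode."""
    label, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence < threshold or label not in MODE_PRESETS:
        return "auto", {"wb_shift_kelvin": 0, "sharpen": 1.0}
    return label, MODE_PRESETS[label]
```

The pasta example above would correspond to a high-confidence "food" score triggering the warmer, sharper preset.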

A new low-light shooting mode, also on board, automatically brightens images in dim environments by a factor of two.

Like other AI systems—think Google Lens—Vision AI can provide shopping advice through image recognition. When the camera is pointed at an object the software can automatically scan QR codes, initiate an image search or display online shopping options to purchase the item or similar products.

For the development of the object recognition software at the core of Vision AI, LG collaborated with a partner to analyze over 100 million images, and more than a thousand image categories were created for accurate image analysis.

The AI system’s voice component lets you run apps and change settings through voice commands alone, working alongside Google Assistant. LG says that certain AI features won’t be limited to new models only, but will also be available to existing LG smartphones via over-the-air updates, depending on hardware specifications.

Vision AI isn’t the first system of its kind, but there is no doubt AI is part of the future of photography, and it’s good to see almost all large mobile manufacturers working in the field. We’re looking forward to having a closer look at LG’s Vision AI at MWC.

LG TO INTRODUCE OWN SMARTPHONE AI AT MWC 2018

New AI Functionalities Aligned Closely with Needs and Usage Behavior of Today’s Consumers

SEOUL, Feb. 13, 2018 — LG Electronics (LG) will introduce the first of a suite of AI technologies for its smartphones at this year’s Mobile World Congress in Barcelona, Spain. The new technology will be featured in the 2018 version of the LG V30, LG’s most advanced flagship smartphone to date.

LG spent more than a year researching how AI should be implemented in smartphones, long before announcing LG ThinQ at CES 2018. This research focused primarily on making AI-based solutions with the objective to deliver a unique and more intuitive user experience, focusing on the camera and voice recognition. The result is a suite of AI technologies that is aligned closely with the needs and usage behavior of today’s users.

Vision AI: Next Generation Image Recognition
On top of an already impressive list of camera features that include dual lenses, wide-angle low-distortion lenses, and the all-glass Crystal Clear Lens, Vision AI makes the camera of the LG V30 even smarter and easier to use.

Vision AI automatically analyzes objects and recommends the best shooting mode from among eight modes: portrait, food, pet, landscape, city, flower, sunrise, and sunset. The angle of view, color, reflections, backlighting, and saturation levels are all taken into consideration as the phone analyzes images in its database to determine what the camera is focusing on in order to recommend the best setting. For example, pointing the camera at a plate of pasta will result in food mode being invoked, resulting in warmer colors and heightened sharpening for the most appetizing shot possible.

LG collaborated with a partner in image recognition to analyze over 100 million images in order to develop the phone’s image recognition algorithms. Over one thousand unique image categories were created for more accurate image analysis, resulting in better shooting mode recommendations.

Another feature of Vision AI provides shopping advice through smart image recognition. By simply pointing the camera at an object, LG’s Vision AI can automatically scan QR codes, initiate an image search or provide shopping options including where to purchase the item for the lowest price and other similar products that the customer might find of interest.

A new low-light shooting mode automatically brightens images in dim environments by a factor of two. Instead of conventional methods of measuring external light levels, Vision AI instead measures the brightness of the actual image that will be recorded, resulting in brightness levels that are adjusted much more accurately.

Voice AI: LG-Exclusive Voice Commands
Another new feature is Voice AI, which allows users to run apps and change settings through voice commands alone. Working alongside Google Assistant, the 32 LG-exclusive Voice AI commands – up from 23 in 2017 – eliminate the need to search through menu options and allow for direct selection of specific functions.

LG Exclusive Voice Commands for Google Assistant

Feature – voice command (precede each command with “OK Google”):

  1. Wide-angle photo – “Take a picture on a wide angle”
  2. Wide-angle selfie – “Take a selfie on a wide angle”
  3. Wide-angle video – “Record a video on a wide angle”
  4. Wide-angle selfie video – “Take a selfie video on a wide angle”
  5. Cine Video – “Open camera on Cine Video”
  6. Expert Photo Mode – “Open camera on a manual mode”
  7. Expert Video Mode – “Open camera on a manual video”
  8. Cine Video (Romantic) – “Take a romantic Cine Video”
  9. Cine Video (Melodramatic) – “Take a melodramatic Cine Video”
  10. Cine Video (Thriller) – “Take a thriller Cine Video”
  11. Cine Video (Beauty) – “Take a beauty Cine Video”
  12. Cine Video (Blockbuster) – “Take a summer blockbuster Cine Video”
  13. Cine Video (Romantic Comedy) – “Take a romantic comedy Cine Video”
  14. Cine Video (Documentary) – “Take a documentary Cine Video”
  15. Cine Video (Landscape) – “Take a scenery Cine Video”
  16. Cine Video (Drama) – “Take a drama Cine Video”
  17. Cine Video (Historic) – “Take a historical Cine Video”
  18. Cine Video (Mystery) – “Take a mystery Cine Video”
  19. Cine Video (Noir) – “Take a noir Cine Video”
  20. Cine Video (Classic) – “Take a classic Cine Video”
  21. Cine Video (Flashback) – “Take a flashback Cine Video”
  22. Cine Video (Pop Art) – “Take a pop art Cine Video”
  23. Expert Mode (Graphy) – “Open camera with Graphy”

New for 2018 (commands pending):

  24. Panoramic Photo
  25. Food Photo
  26. Time-lapse Photo (Video)
  27. Slow-motion Video
  28. Low-light Photo
  29. AI Cam Photo
  30. Image Search
  31. QR Code Scanning
  32. Shopping Search


Upgrading and Expanding Smartphone AI
LG’s strategy for smartphones is to continue expanding its AI capabilities while also refining existing features to make them more convenient to use. Certain AI features will not be limited to new LG models, but will also come to existing LG smartphones via over-the-air updates, taking into consideration the hardware specifications and stability of each LG smartphone model for maximum user experience.

“As we communicated last month at CES, the future for LG lies in AI, not just hardware specs and processing speeds,” said Ha Jeung-uk, senior vice president and business unit leader for LG Electronics Mobile Communications Company. “Creating smarter smartphones will be our focus going forward and we are confident that consumers will appreciate the advanced user experience with the enhanced V30 that many have been asking and waiting for.”

Attendees of MWC 2018 are encouraged to visit LG’s booth in Hall 3 of Fira Gran Via from February 26 until March 1 for more information.
