
Posts Tagged ‘Deep’

Google shares a deep dive into its new HDR+ with Bracketing technology found in its latest Pixel devices

26 Apr

Google has shared an article on its AI Blog that dives into the intricacies of the HDR capabilities of its most recent Pixel devices. In it, Google explains how its HDR+ with Bracketing technology works to capture the best image quality possible through clever capture and computational editing techniques.

To kick off the article, Google explains how its new ‘under the hood’ HDR+ with Bracketing technology — first launched on the Pixel 4a 5G and Pixel 5 back in October — ‘works by merging images taken with different exposure times to improve image quality (especially in shadows), resulting in more natural colors, improved details and texture, and reduced noise.’

Using bursts to improve image quality. HDR+ starts from a burst of full-resolution raw images (left). Depending on conditions, between 2 and 15 images are aligned and merged into a computational raw image (middle). The merged image has reduced noise and increased dynamic range, leading to a higher quality final result (right). Caption and image via Google.

Before diving into how the behind-the-scenes work is done to capture the HDR+ with Bracketing images, Google explains why high dynamic range (HDR) scenes are difficult to capture, particularly on mobile devices. ‘Because of the physical constraints of image sensors combined with limited signal in the shadows […] We can correctly expose either the shadows or the highlights, but not both at the same time.’

Left: The result of merging 12 short-exposure frames in Night Sight mode. Right: A single frame whose exposure time is 12 times longer than an individual short exposure. The longer exposure has significantly less noise in the shadows but sacrifices the highlights. Caption and image via Google.

Google says one way to combat this is to capture two different exposures and combine them — something ‘Photographers sometimes [do to] work around these limitations.’ While this works fairly well with larger-sensor cameras, and on tablets and laptops whose more capable processors can handle the merging, Google says it’s a challenge to do on mobile devices because it requires ‘Capturing additional long exposure frames while maintaining the fast, predictable capture experience of the Pixel camera’ and ‘Taking advantage of long exposure frames while avoiding ghosting artifacts caused by motion between frames.’

Google was able to mitigate these issues with its original HDR+ technology through prioritizing the highlights in an image and using burst photography to reduce noise in the shadows. Google explains the HDR+ method ‘works well for scenes with moderate dynamic range, but breaks down for HDR scenes.’ As for why, Google breaks down the two different types of noise that get into an image when capturing bursts of photos: shot noise and read noise.

Google explains the differences in detail:

‘One important type of noise is called shot noise, which depends only on the total amount of light captured — the sum of N frames, each with E seconds of exposure time has the same amount of shot noise as a single frame exposed for N × E seconds. If this were the only type of noise present in captured images, burst photography would be as efficient as taking longer exposures. Unfortunately, a second type of noise, read noise, is introduced by the sensor every time a frame is captured. Read noise doesn’t depend on the amount of light captured but instead depends on the number of frames taken — that is, with each frame taken, an additional fixed amount of read noise is added.’

Left: The result of merging 12 short-exposure frames in Night Sight mode. Right: A single frame whose exposure time is 12 times longer than an individual short exposure. The longer exposure has significantly less noise in the shadows but sacrifices the highlights. Caption and image via Google.

As visible in the above image, Google highlights ‘why using burst photography to reduce total noise isn’t as efficient as simply taking longer exposures: taking multiple frames can reduce the effect of shot noise, but will also increase read noise.’
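To make that trade-off concrete, here’s a minimal numerical sketch of the noise model described above; the photon rate, exposure times, read-noise value and function name are illustrative assumptions, not Google’s figures.

```python
import numpy as np

def total_noise(frames, exposure_s, photons_per_s, read_noise_e=2.0):
    """Rough per-pixel noise (in electrons) after merging `frames` equal exposures.

    Shot noise depends only on the total light collected, while a fixed
    amount of read noise is added with every frame that is read out.
    """
    signal = frames * exposure_s * photons_per_s   # total electrons collected
    shot = np.sqrt(signal)                         # Poisson shot noise
    read = read_noise_e * np.sqrt(frames)          # read noise grows with frame count
    return np.sqrt(shot**2 + read**2)

# 12 short frames vs. a single frame exposed 12x longer (same total light)
print(total_noise(frames=12, exposure_s=0.1, photons_per_s=50))  # higher: extra read noise
print(total_noise(frames=1, exposure_s=1.2, photons_per_s=50))   # lower: read noise added once
```

With the same total light, the burst carries roughly √N times the read noise of the single long exposure, which is exactly the inefficiency described above for deep shadows.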

To address this shortcoming, Google explains how it’s managed to use a ‘concentrated effort’ to make the most of recent ‘incremental improvements’ in exposure bracketing, combining the burst photography component of HDR+ with the more traditional HDR method of exposure bracketing to get the best result possible in extreme high dynamic range scenes:

‘To start, adding bracketing to HDR+ required redesigning the capture strategy. Capturing is complicated by zero shutter lag (ZSL), which underpins the fast capture experience on Pixel. With ZSL, the frames displayed in the viewfinder before the shutter press are the frames we use for HDR+ burst merging. For bracketing, we capture an additional long exposure frame after the shutter press, which is not shown in the viewfinder. Note that holding the camera still for half a second after the shutter press to accommodate the long exposure can help improve image quality, even with a typical amount of handshake.’

Google explains how its Night Sight technology has also been improved through the use of its advanced bracketing technology. As visible in the illustration below, the original Night Sight mode captured 15 short exposure frames, which it merged to create the final image. Now, Night Sight with bracketing will capture 12 short and 3 long exposures before merging them, resulting in greater detail in the shadows.

Capture strategy for Night Sight. Top: The original Night Sight captured 15 short exposure frames. Bottom: Night Sight with bracketing captures 12 short and 3 long exposures. Caption and image via Google.

As for the merging process, Google says its technology chooses ‘one of the short frames as the reference frame to avoid potentially clipped highlights and motion blur.’ The remaining frames are then aligned with the reference frame before being merged.

To reduce ghosting artifacts caused by motion, Google says it’s designed a new spatial merge algorithm, similar to that used in its Super Res Zoom technology, ‘that decides per pixel whether image content should be merged or not.’ Unlike Super Res Zoom though, this new algorithm faces additional challenges due to the long exposure shots, which are more difficult to align with the reference frame due to blown out highlights, motion blur and different noise characteristics.

Left: Ghosting artifacts are visible around the silhouette of a moving person, when deghosting is disabled. Right: Robust merging produces a clean image. Caption and image via Google.
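Google hasn’t published the merge algorithm itself, but the per-pixel accept-or-reject idea can be sketched roughly as follows. This is a simplified stand-in rather than the actual spatial merge, and the function name, weighting scheme and threshold are assumptions.

```python
import numpy as np

def merge_with_deghosting(reference, aligned_long, noise_sigma, k=3.0):
    """Blend an aligned long-exposure frame into the reference, per pixel.

    Pixels whose difference from the reference is larger than noise alone
    can explain (more than k sigma) are treated as motion and rejected,
    which is what suppresses ghosting around moving subjects.
    """
    diff = np.abs(aligned_long - reference)
    weight = np.clip(1.0 - diff / (k * noise_sigma), 0.0, 1.0)  # 1 = merge, 0 = reject
    return (reference + weight * aligned_long) / (1.0 + weight)
```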

Google is confident it’s been able to overcome those challenges though, all while merging images even faster than before:

‘Despite those challenges, our algorithm is as robust to these issues as the original HDR+ and Super Res Zoom and doesn’t produce ghosting artifacts. At the same time, it merges images 40% faster than its predecessors. Because it merges RAW images early in the photographic pipeline, we were able to achieve all of those benefits while keeping the rest of processing and the signature HDR+ look unchanged. Furthermore, users who prefer to use computational RAW images can take advantage of those image quality and performance improvements.’

All of this is done behind the scenes without any need for the user to change settings. Google notes ‘depending on the dynamic range of the scene, and the presence of motion, HDR+ with bracketing chooses the best exposures to maximize image quality.’

Google’s HDR+ with Bracketing technology is found on its Pixel 4a 5G and Pixel 5 devices with the default camera app, Night Sight and Portrait modes. Pixel 4 and 4a devices also have it, but it’s limited to Night Sight mode. It’s also safe to assume this and further improvements will be available on Pixel devices going forward.

You can read Google’s entire blog post in detail on its AI blog at the link below:

HDR+ with Bracketing on Pixel Phones


‘Deep Nostalgia’ AI tech animates old photos and brings them to life

01 Mar

The online genealogy company MyHeritage has launched a new AI-powered service, Deep Nostalgia. This new service animates family photos (or other photos, as we’ll see) to allow users to ‘experience your family history like never before.’

Deep Nostalgia uses AI licensed from D-ID to turn still images into animated photos, much like the Live Photos feature in iOS or the moving portraits in the ‘Harry Potter’ films. Deep Nostalgia relies upon videos of facial animations, which the AI then applies to a still image. For example, an old black and white portrait of a man looking off-camera comes to life, with the subject moving his head, blinking and smiling at the camera.

MyHeritage prepared several driver videos for Deep Nostalgia, which are then applied to a face in a still photo. You can animate all the faces in a photo, such as in a family portrait, although a separate animation must be created for each face. The technology automatically selects an animation sequence for a face, but users can select a different sequence as well. The animation sequences are based on genuine human gestures, many of them performed by MyHeritage employees.

To try Deep Nostalgia for yourself, you must sign up for a free MyHeritage account. Once you sign up, you can begin uploading images, which are animated and turned into a GIF. If you don’t complete the full signup process, MyHeritage states that any images you upload will be deleted automatically to protect your privacy. If you upload small or blurry images, MyHeritage’s Photo Enhancer will enhance your photos before the animation is applied, as Deep Nostalgia requires a high-resolution face.

It’s a neat idea to be able to bring old photos back to life. For many, their only connection to family members featured in old photographs is the image itself. They may never have seen them in person. In many cases, including those shared by different users on Twitter, Deep Nostalgia produces pretty impressive results.

As pointed out by The Verge, not everyone is using the service to add life-like qualities to antiquated family photos. Twitter user Flint Dibble opted instead to upload photos of statues from the Acropolis Museum in Athens. If you’ve ever wanted to see a statue of Alexander the Great move and blink, now you can. As Kim Lyons of The Verge asks, ‘I wonder if perhaps there are some photos best left un-animated?’

Jokes aside, Deep Nostalgia is a fascinating technology that can create impressive results. Photographs are the lasting connection we collectively have to our past. When our photos are of lost loved ones, the images take on a much deeper meaning. For some, seeing someone blink and smile again may feel morbid or odd, but it may be a special experience for others.

As MyHeritage writes, ‘Some people love the Deep Nostalgia feature and consider it magical, while others find it creepy and dislike it. Indeed, the results can be controversial, and it’s hard to stay indifferent to this technology.’ To try it for yourself, head over to MyHeritage.


Halide’s deep dive into why the iPhone 12 Pro Max is made for ‘Real Pro Photography’

20 Nov

Sebastiaan de With, Co-founder and Designer of the professional iOS camera app Halide, has shared a deep dive blog post into the photographic capabilities of Apple’s iPhone 12 Pro Max, showing a technical breakdown of all three cameras packed inside the flagship device.

In many of the articles we gathered in our iPhone 12 Pro Max review roundup, reviewers said they didn’t actually notice that big of a difference in image quality between the iPhone 12 and iPhone 12 Pro Max. This came as somewhat of a surprise considering how promising the technology in the iPhone 12 Pro Max looked, but without further information to work with — or a review unit in our hands — that was all we had to go on.

As it turns out though, the cursory first looks and reviews didn’t paint the entire picture of what the iPhone 12 Pro Max is capable of. Thankfully, Sebastiaan took matters into his own hands, providing an incredibly detailed look into why initial reviewers didn’t notice nearly as big a difference as expected, along with a number of examples that showcase what’s actually possible with the new iPhone 12 Pro Max when you use it in a more advanced capacity.

Sebastiaan starts by revisiting the specs that set the iPhone 12 Pro Max apart from all the other iPhone 12 models: a 47% larger sensor, a faster F1.6 lens, improved image stabilization, 87% better high-ISO performance and a new 65mm (full-frame equivalent) telephoto lens. As impressive as those specs are for a smartphone camera, they don’t mean much without context and examples to back them up.

To that point, Sebastiaan shares the above graphic to show just how much larger the new sensor is compared to the one found in the other iPhone 12 models. While the larger sensor should help with noise, Sebastiaan notes the difference is far less noticeable during the day than when the sun starts to set. He uses the below comparison shot to show just how well the iPhone 12 Pro Max (bottom image) handles noise compared to its smaller iPhone 12 Pro (top image) companion. As you can see when viewing the full-size image, it’s clear the photo captured with the iPhone 12 Pro Max (bottom image) holds much better detail in the shadows and doesn’t show nearly as much noise.


Sebastiaan posits that the reason most reviewers didn’t notice the difference in image quality as much is twofold. First, many photos taken by reviewers were done during the day, when high ISO and larger photosites don’t make nearly as big a difference. Second — and arguably even more importantly — most reviewers were using the stock iOS camera app, whose intelligent image processing can soften parts of the image with noise reduction and other artifacts. To see how good the iPhone 12 Pro Max camera is without all of that image processing, Sebastiaan used Halide to capture Raw (DNG) images, which ‘omits steps like multi-exposure combination and noise reduction.’


If you’re wondering just how much of a difference it makes when using the stock iOS Camera app versus a camera app that can capture a Raw image, such as Halide, take a look at the above comparison shot Sebastiaan captured in San Francisco at sunset. Notice the lack of detail in the distant buildings, the muddiness of the windows on nearby apartments and the overall ‘watercolor’ effect that happens when too much noise reduction is applied. Sebastiaan shows multiple other examples that highlight just how much of a difference it can make to use third-party apps capable of capturing Raw images compared to those captured with the stock camera app.

Also tackled in the deep dive is the improvement in image stabilization, which is now sensor-based rather than lens-based, as well as the new 65mm telephoto camera, which offers a slightly longer reach than the 52mm (full-frame equivalent) telephoto of the other iPhone 12 models.
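For a sense of how much tighter that framing is, the diagonal angle of view of the two full-frame-equivalent focal lengths can be worked out directly (43.27mm being the diagonal of a full-frame sensor); the helper below is just a worked example, not anything from Halide’s post.

```python
import math

def diagonal_fov_deg(focal_length_mm, sensor_diagonal_mm=43.27):
    """Diagonal angle of view for a lens on a full-frame (36x24mm) sensor."""
    return math.degrees(2 * math.atan(sensor_diagonal_mm / (2 * focal_length_mm)))

print(round(diagonal_fov_deg(52), 1))  # ~45.2 degrees, other iPhone 12 telephoto cameras
print(round(diagonal_fov_deg(65), 1))  # ~36.8 degrees, iPhone 12 Pro Max telephoto
```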

All in all, Sebastiaan concludes his breakdown by saying that, as the developer of a camera app, he finds the ‘results [are] mind-blowing.’ He summarizes it all, saying:

‘It achieves images previously only seen in dedicated cameras, with sensors four times its size. It allows photographers to get steady and well exposed shots in conditions that weren’t imaginable a year ago. It captures low-light shots beyond anything we’ve seen on an iPhone. By a lot.’

That’s high praise compared to previous reviews, but the data doesn’t lie. To read the in-depth dive (which you absolutely should), head on over to the Halide blog using the link below:

The iPhone 12 Pro Max: Real Pro Photography

You can keep up with the Lux team — Sebastiaan de With, Ben Sandofsky and Rebecca Sloane — on Twitter and download Halide Mark II in the iOS App Store.


Image credits: Photographs/images provided by Halide, used with permission.


The latest iOS 13 developer beta gives us a sneak peek at Apple’s new Deep Fusion mode

05 Oct

Earlier this week, Apple released the first developer beta version of iOS 13 with support for its Deep Fusion technology built-in. Although there’s still plenty to learn about the feature, multiple developers have already taken the camera tech for a spin and shared their thoughts (and results) around the web.

To refresh, below is a brief explainer on what Deep Fusion is from our initial rundown on the feature:

‘Deep Fusion captures up to 9 frames and fuses them into a higher resolution 24MP image. Four short and four secondary frames are constantly buffered in memory, throwing away older frames to make room for newer ones […] After you press the shutter, one long exposure is taken (ostensibly to reduce noise), and subsequently all 9 frames are combined – ‘fused’ – presumably using a super resolution technique with tile-based alignment (described in the previous slide) to produce a blur and ghosting-free high resolution image.’
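As a rough illustration of the buffering scheme described in that quote, here’s a toy zero-shutter-lag-style ring buffer. The class and method names are hypothetical and this is not Apple’s implementation; it only shows how older frames get discarded and how the post-shutter long exposure joins the burst.

```python
from collections import deque

class FrameBuffer:
    """Toy ZSL-style buffer: keep only the most recent preview frames,
    silently dropping older ones to make room for newer ones."""

    def __init__(self, depth=8):
        self.frames = deque(maxlen=depth)       # e.g. 4 short + 4 secondary frames

    def on_preview_frame(self, frame):
        self.frames.append(frame)               # buffered continuously before the shutter press

    def on_shutter_press(self, capture_long_exposure):
        burst = list(self.frames)               # frames captured *before* the press
        burst.append(capture_long_exposure())   # plus one long exposure taken after
        return burst                            # handed off to the fusion step
```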

Although the tests are far from conclusive, we’ve rounded up a few sample images and comparisons shared by Twitter users from around the world. From the commentary shared by those who have tested the feature and from a brief analysis with our own eyes, Deep Fusion appears to work as advertised, bringing out more detail and clarity in images.

In addition to the above comparison, photographer Tyler Stalman also looked at how Deep Fusion compares to the Smart HDR feature.

As noted by Halide co-founder Sebastiaan de With, it seems as though the image files captured with Deep Fusion are roughly twice the size of a standard photo.

Much remains to be seen about what Deep Fusion is actually capable of and how third-party developers can make the most of the technology, but it looks promising. There seems to be some confusion as well regarding whether Deep Fusion will work with Night Mode, but according to Apple guru John Gruber, the two are mutually exclusive, with Deep Fusion being applied to scenes between 10 and 600 lux while Night Mode kicks in at 10 or fewer lux.
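Gruber’s description boils down to a simple lux-based switch, sketched below. The thresholds come from his figures; the fallback above 600 lux (labelled Smart HDR here) is an assumption for illustration rather than something stated in the article.

```python
def choose_processing_mode(scene_lux):
    """Pick a capture pipeline from scene brightness, per John Gruber's description."""
    if scene_lux <= 10:
        return "night_mode"    # Night Mode kicks in at 10 lux or fewer
    if scene_lux <= 600:
        return "deep_fusion"   # Deep Fusion covers roughly 10-600 lux
    return "smart_hdr"         # assumed fallback for brighter scenes

print(choose_processing_mode(5))    # night_mode
print(choose_processing_mode(300))  # deep_fusion
```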

We’ll know more for sure when we have a chance to test the new feature ourselves.


Nikon invests in computer vision and deep learning startup ‘wrnch’

21 Jun

Nikon has announced a $7.5 million investment in Canadian computer vision and deep learning startup wrnch, Inc. Wrnch was founded in 2014 and ‘uses deep learning to develop and provide tools and software development kits (SDKs) that enable computers to see and understand human movement and activity.’

On its website, the company describes itself as ‘Teaching Cameras To Read Human Body Language.’

Nikon says the move is in line with its medium-term management plan, which is designed to expand not only its business-to-consumer but also its business-to-business imaging activities. The company is hoping to create synergies by combining resources with wrnch and ultimately expand the range of its imaging business.

Nikon is aiming to enhance its automatic shooting solutions for the sports market by fusing its optical technologies, automatic tracking shooting technologies from its subsidiary Mark Roberts Motion Control Limited and wrnch’s pose estimation technologies.

In addition, the company is looking into providing ‘new imaging experiences’ with technologies such as artificial intelligence. In the statement, Nikon also says it is open to ‘making further use of its optical technologies and collaborating with companies that offer their own innovative solutions and technologies.’


Demon of the Deep: Shooting Kawah Ijen Volcano

18 May

A few weeks ago I returned from a fantastic trip to South East Asia. After 13 years without setting foot there (a bit of a frightening number – the last time was before I ever held a DSLR!), I was getting a serious itch that had to be scratched. I was craving the feel of Asia, its food, wildlife and landscapes. When I found the right partner to join me, I jumped on the opportunity and booked my flights.

While this wasn’t purely a shooting trip (I also spent time in Malaysia, Singapore and Hong Kong without shooting), two weeks of it were shooting-oriented. I spent a week photographing orangutans in Northern Sumatra, but the experience I want to share here is shooting the well-known East Java volcano of Kawah Ijen.

Kawah Ijen (Ijen crater) was one of the icons I had long wanted to visit and shoot. As a volcano enthusiast and keen shooter, there was no chance I was going to pass on this special place.

Part of a larger group of composite volcanoes located in the far east of the island of Java, it is one of the island’s main attractions and draws much tourism. Ijen is extremely photogenic and tells several stories worth exploring. It’s also the site of intensive sulfur mining, wherein miners extract elemental sulfur solidified from gases bursting out of an active vent in the crater. The miners then carry the sulfur by hand, in baskets that can weigh up to 90kg (about 200lb), up to the crater rim and then 3km / 1.9mi down the mountain to a weighing station, where they get paid for the load.

An 80 kg load of sulfur inside Ijen’s crater

While the miners’ story is indeed fascinating (and controversial due to the health hazards, even though the work is comparatively well paid), I’m a nature photographer, and I came to shoot the natural features of the volcano. Ijen is very well known for the colors of the ignited sulfuric gasses in the mined vent, and also for the turquoise color of its crater lake. I spent two days hiking up the volcano and shooting it, and I’d like to share the experience.

An aerial view of Kawah Ijen. Vertical panorama from 2 shots, taken with a DJI Mavic II Pro. The panorama allowed me to capture the entirety of the lake as my foreground, creating a better framing. Shot using a circular polarizer kindly supplied by Polar Pro.

While not very difficult, hiking Ijen is quite physical. To get to the crater in time and maximize your photography, you need to start the journey at about midnight. This gives you time to locate a porter should you need one (I suffer from minor shoulder and knee problems and so was happy to support the local economy and hire a porter to carry my heavy photo bag), and be ready for the opening of the gate at 1 a.m. – and you had better be early rather than late. On the first hike I headed up at 2:30 a.m., and the trail was jam-packed with tourists, which made it much harder to hike at my own pace. I didn’t make the same mistake again the second time around.

A look to the vent from the crater rim. The converging gas clouds are an important compositional element here.

The hike up to the crater rim took me 1.5 hours the first night, when I was tired from the travel and had to make my way through other hikers. The second time, without anybody hiking beside me other than my travel partner, guide and porters, I was well rested and Red-Bulled, and made my way up in less than an hour. Once up on the crater rim, a trail goes down to the mining vent, and how long that stretch takes also depends highly on how many people are there. A good estimate would be 45 minutes with people around, half an hour without if you’re early.

Going down into the crater is more technical than going up and one should be very careful when doing it. In general, remember to always do the hike with a certified guide, as an experienced guide will make sure you stay safe and protected from the elements, especially when smoke gets thick inside the crater.

The crater can fill up to the brim with noxious gasses.

Near the end of the hike up, I had to put my respirator on. The sulfur smell was getting overwhelming, and I knew it was time to protect myself against the noxious gasses. Soon after starting the hike down to the vent, I felt the sting in my eyes telling me to put my goggles on. Both respirator and goggles were absolutely essential to be able to function when inside the crater. Closer to the vent, even they were not enough to prevent me from tearing when the wind carried the sulfuric gasses my way.

Yours truly with full Ijen gear. Not even the goggles prevented me from tearing up when the wind swept the sulfuric gasses toward me.

So there I was, in the dead of night, watching the purple fire of ignited sulfuric gasses. This was astonishing to behold, but quite challenging to capture. As you might imagine, shooting at night required high ISO, and in order to get any detail in the fire, an ISO of at least 3200 was needed. I ended up using ISO 6400 most of the time.

Focusing was also very difficult. Naturally I had to focus manually, as usual by enlarging part of the image on live-view and turning the focus ring to get good sharpness, but the fire is so dynamic, and so often covered by smoke, that it took me several minutes to be able to focus. The goggles filling up with water from my breath sure didn’t help.

Thick smoke covers part of the fire, resulting in an interesting shot. Canon 5D Mark IV, Canon 70-300mm f/4-5.6L IS. ISO6400, f/4.5, 88mm, 1/3 sec.

Negative space is an important tool in images like this one – the darkness surrounding the purple fire conveys the atmosphere around the vent: a mysterious and sometimes frightening place where noxious smoke can engulf you before you know it.

Once I got my focusing sorted out, it was time to compose. Composing a rapidly-changing fire that is covered by smoke 90% of the time and ruined by the flash of other people’s cellphone cameras 80% of the remaining time was frustrating. I found myself struggling through my tears just to find some sort of balance. Two nights of shooting the fire only yielded 2-3 good shots. While indeed I didn’t need more than that for my portfolio, I wish it had been an easier ordeal and that I had gotten a bigger selection.

Purple fire in Kawah Ijen.
Canon 5D Mark IV, Canon 70-300mm f/4-5.6L IS
ISO6400, f/5.6, 300mm, 1/4 sec
Note the diagonal lines and the two main centers of compositional mass in the top right and in the bottom left, balancing each other.

After shooting the fiery vent I headed back up to the crater rim for sunrise. Ijen boasts a 1-km wide crater lake, which is recognized for being the largest highly acidic crater lake in the world. The lake’s colors are truly beautiful: an almost-unnatural turquoise lined with yellow streaks of sulfur.

Ijen’s beautiful turquoise acidic waters decorated by sulfur streaks, covered by a thick layer of morning fog and sulfuric gases from the volcano’s vents.

While simple in composition, the image is enriched by the lake’s contrasting colors and the light on the fog. The circular polarizer I used (made by Polar Pro) enhanced the saturation, eliminating reflections.

While it is possible to shoot the lake from the crater rim, I found that using a drone was much more productive, and allowed me to include the entire crater in the image, in addition to the several other volcanoes around Ijen.

I had some fun trying abstract photography with the drone, as I flew it close to the crater lake. Especially nice was flying the drone through the sulfuric gasses, which create an eerie haze. Aerials can also reveal another visually interesting element of the area: the contrast between the toxic environment inside Ijen and the lush forests around it.


Kawah Ijen didn’t disappoint; I highly recommend traveling there and witnessing it for yourself. Whether shooting the miners or nature, from the ground or from the air, it holds a special kind of beauty and tons of photographic potential. Just make sure you have a good guide, and a respirator and goggles at hand.


Erez Marom is a professional nature photographer, photography guide and traveler based in Israel. You can follow Erez’s work on Instagram and Facebook, and subscribe to his mailing list for updates.

If you’d like to experience and shoot some of the most fascinating landscapes on earth with Erez as your guide, take a look at his unique photography workshops in The Lofoten Islands, Greenland, Namibia, the Faroe Islands, Israel and Ethiopia.

Erez offers video tutorials discussing his images and explaining how he achieved them.

Selected Articles by Erez Marom:

  • Parallelism in Landscape Photography
  • Winds of Change: Shooting changing landscapes
  • Behind the Shot: Dark Matter
  • On the Importance of Naming Images
  • On Causality in Landscape Photography
  • Shooting Kīlauea Volcano, Part 1: How to melt a drone
  • The Art of the Unforeground
  • Whatever it Doesn’t Take
  • Almost human: photographing critically endangered mountain gorillas


Google Pixel 3 interview: technical deep dive with the camera team

11 Oct

Recently, Science Editor Rishi Sanyal had the chance to sit down with two of Google’s most prominent imaging engineers and pick their brains about the software advances in the Pixel 3 and Pixel 3 XL. Isaac Reynolds is the Product Manager for Camera on Pixel, and Marc Levoy is a Distinguished Engineer and the Computational Photography Lead at Google. From computational Raw to learning-based auto white balance, they gave us an overview of some key new camera features and an explanation of the tech that makes them tick.

Features covered in this video include the wide-angle selfie camera, Synthetic Fill Flash, Night Sight, Super Resolution Zoom, computational Raw, Top Shot and the method behind improving depth maps in Portrait Mode.

These features are also covered in written form in a previously published article here.


Polarr Deep Crop app crops your images like a pro

20 Sep

Cropping has the potential to turn a good photo into a great one, or even save a shot that would otherwise have ended up in the digital trashcan. Software company Polarr has now launched a new AI-powered app for iOS that should make finding the perfect crop much easier.

Deep Crop’s algorithms have been trained to find the most interesting elements in a photo using 200 million cropping data points from real photographers.

Source image and three suggested crops (Crop 1, Crop 2, Crop 3).

The company also says it has achieved a 20x efficiency boost in RAM and power usage for offline AI systems, allowing the app to run locally on your iPhone. This means there’s no need for internet connectivity and your image material won’t be uploaded to any external servers.

When you launch the app, all images in your camera roll will be displayed, and tapping on a photo will show you suggested crop options. By default, you’ll see smart crops of various ratios but it is also possible to specify an aspect ratio.

If you don’t like the app’s suggestions, you can repeat the process as many times as you like to see more crops. Once you like the result, it can be exported or shared in the usual ways.
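Polarr hasn’t said how its suggestions are generated, but the ‘specify an aspect ratio’ step amounts to fitting the largest crop of that ratio around whatever the model finds interesting. Here’s a minimal sketch of that geometry; the function and the point of interest are assumptions, not Polarr’s actual saliency output.

```python
def crop_to_aspect(img_w, img_h, ratio, cx, cy):
    """Largest crop with width/height == ratio, centred as close to (cx, cy)
    as the image borders allow. Returns (x, y, w, h)."""
    crop_w = min(img_w, img_h * ratio)
    crop_h = crop_w / ratio
    x = min(max(cx - crop_w / 2, 0), img_w - crop_w)
    y = min(max(cy - crop_h / 2, 0), img_h - crop_h)
    return int(x), int(y), int(crop_w), int(crop_h)

# 16:9 crop of a 4000x3000 photo, keeping a subject near the upper-left third in frame
print(crop_to_aspect(4000, 3000, 16 / 9, cx=1300, cy=1000))
```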

Source image and three suggested crops (Crop 1, Crop 2, Crop 3).

In its current state, Deep Crop is pretty much a one-trick pony, but we’d expect the technology to be integrated into one of Polarr’s more comprehensive applications, such as Photo Editor, at some point in the near future. In the meantime, the app can help you achieve better crops or simply discover new perspectives when viewing your own images.

Polarr Deep Crop is available on the Apple App Store now.


NVIDIA’s content-aware fill uses deep learning to produce incredible results

24 Apr

Adobe Photoshop’s Content-Aware Fill is the current industry standard when it comes to removing unwanted artifacts and distracting objects, but that might not always be the case. While Adobe is working on its own advanced deep learning-based “Deep Fill” feature, NVIDIA just demonstrated an AI-powered spot healing tool of its own, and the results are pretty incredible.

As you can see from the two-minute demonstration above, the prototype tool can handle both basic tasks, like removing a wire from a scene, as well as more complicated tasks, such as reconstructing books and shelves inside an intricate library scene.

The secret behind this tool is the “state-of-the-art deep learning method” it’s built on. The tool doesn’t just use pixels from within the image to reconstruct an area—it actually analyzes the scene and figures out what it should look like when it’s finished. This helps to create a much more accurate and realistic result, even when the original image is an absolute disaster.

The best examples of this can be seen in a paper NVIDIA team members published titled ‘Image Inpainting for Irregular Holes Using Partial Convolutions.’ As seen in the comparison images below, NVIDIA’s tool blows Photoshop out of the water when reconstructing portraits where much or most of the face is removed.

From left to right: the corrupted image, Adobe’s Content-Aware results, NVIDIA’s results and the actual image.
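The paper’s central building block is the partial convolution: the filter only ‘sees’ valid (non-hole) pixels, its response is renormalised by how much of the window was valid, and the hole mask shrinks after every layer. Below is a minimal single-channel sketch of that idea in NumPy/SciPy; it is not NVIDIA’s code, and the function name is our own.

```python
import numpy as np
from scipy.signal import convolve2d

def partial_conv2d(image, mask, kernel):
    """One partial convolution step on a single-channel image.

    `mask` is 1 for valid pixels, 0 inside holes. Hole pixels are excluded
    from the filter response, the result is rescaled by the fraction of the
    window that was valid, and an updated (shrunken) hole mask is returned.
    """
    eps = 1e-8
    response = convolve2d(image * mask, kernel, mode="same")
    valid = convolve2d(mask, np.ones_like(kernel), mode="same")  # valid pixels per window
    out = response * (kernel.size / (valid + eps))               # renormalise for missing data
    out[valid == 0] = 0.0                                        # windows with no valid pixels
    new_mask = (valid > 0).astype(image.dtype)
    return out, new_mask
```

Stacked over many layers, the updated mask eventually fills in completely, which is what lets the network handle holes of arbitrary shape and size.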

In the discussion section (section 5.1) of the aforementioned paper, NVIDIA says its “model can robustly handle holes of any shape, size, location, or distance from the image borders. Further, our performance does not deteriorate catastrophically as holes increase in size.”

NVIDIA does note, however, that “one limitation of our method is that it fails for some sparsely structured images such as the bars on the door,” as seen in the image comparison below.

From left to right: the corrupted image, NVIDIA’s results and the original image.

Current shortcomings aside, this particular tool—prototype or otherwise—appears to be leaps and bounds ahead of anything else currently on the market. Unsurprisingly, there’s no word on when, or if, we’ll ever see this technology ship, let alone reach the consumer market, but we’ll keep our fingers and toes crossed.


Adobe’s Project ‘Deep Fill’ is an incredible, AI-powered Content Aware Fill

21 Oct

The coolest technology to come out of Adobe MAX is, sadly, not the technology we already have access to. Like Adobe’s Project Cloak we showed you earlier today, it’s the incredible ‘Sneaks’ sneak peeks that really wow the audience. Case in point: check out Project Deep Fill, a much more powerful, AI-driven version of Content Aware Fill that makes the current tool look like crap… to put it lightly.

Deep Fill is powered by the Adobe Sensei technology—which “uses artificial intelligence (AI), machine learning and deep learning”—and trained using millions of real-world images. So while Content Aware Fill has to work with the pixels at hand to ‘guess’ what’s behind the object or person you’re trying to remove, Deep Fill can use its training images to much more accurately create filler de novo.

The examples used in the demo video above are impressive to say the least:

And just when you thought the demo was over, you find out that Deep Fill can also take into account user inputs—like sketching—to completely alter an image:

In this way it’s a lot more than a ‘fill’ feature. In fact, Adobe calls it “a new deep neural network-based image in-painting system.” Check out the full demo for yourself above, and then read all about the other ‘Sneaks’ presented at Adobe MAX here.
