Posts Tagged ‘Google’s’

Google’s ex-lead of computational photography Marc Levoy to build new imaging experiences at Adobe

23 Jul

Marc Levoy[1], Google’s former computational photography lead and arguably one of the founding figures of computational approaches to imaging, has joined Adobe as Vice President and Fellow, reporting directly to Chief Technology Officer Abhay Parasnis. At Adobe, Marc will ‘spearhead company-wide technology initiatives focused on computational photography and emerging products, centered on the concept of a universal camera app.’ He will also work closely with Photoshop Camera, Adobe Research, and the machine-learning-focused Sensei and Digital Imaging teams.

The imaging sphere was taken by surprise a few months back when Marc left Google, where he had helped spearhead a revolution in mobile imaging through the success of the Pixel phones and their stills and video capabilities. Marc and his colleagues at Google developed HDR+, which uses burst photography alongside clever exposure and merging techniques to increase the dynamic range of capture and reduce noise. His work, in conjunction with Peyman Milanfar, also helped Pixel cameras produce usable photos in near darkness with Night Sight, and capture super-resolution data that yielded far more detail in ‘zoomed-in’ shots than competitors, despite limited hardware. Google’s burst techniques even allowed its cameras to forgo traditional demosaicing, yielding more detailed images than even competing cameras with similar sensor sizes.[2]

Marc Levoy… [is] arguably one of the founding figures of computational approaches to imaging

Marc also championed the use of machine learning to tackle challenges in image capture and processing, leading to better portrait modes, more accurate colors via learning-based white balance, and synthetic re-lighting of faces. Marc helped push the boundaries of what is possible with limited hardware by focusing heavily on the software.

At its core, Adobe is a software company, and so Marc’s expertise is immediately relevant. At Adobe, Marc will continue to explore the application of computational photography to Adobe’s imaging and photography products, with one of his focuses being the development of a ‘universal camera app’ that could function across multiple platforms and devices. This should allow Marc to keep pursuing his passion for delivering unique and innovative imaging experiences to the masses.

Marc has a knack for distilling complex concepts into simple terms. You can learn about the algorithms and approaches his teams spearheaded in the Pixel phones in our interview above.

More on Marc Levoy

Marc Levoy has a long history of pioneering computational approaches to images, video and computer vision, spanning both industry and academia. He taught at Stanford University, where he remains Professor Emeritus, and is often credited with popularizing the term ‘computational photography’ through his courses. Before formally joining Google, he worked as visiting faculty at Google X on the camera for the Explorer Edition of Google Glass. His early work at Stanford with Google formed the basis for Street View in Google Maps. Marc also helped popularize light field photography through his work at Stanford with Mark Horowitz and Pat Hanrahan, advising students like Ren Ng, who went on to found Lytro.

Marc also developed his own smartphone apps early on, such as SynthCam, to exploit the potential of burst photography for enhanced image quality. The essential idea – which underpins all the multi-image techniques employed by smartphones today – is to capture many frames and synthesize them into a final image. This technique overcomes the major shortcoming of smartphone cameras: their sensors have such small surface areas and their lenses such small apertures that the amount of light captured is relatively low. Given that most of the noise in digital images is due to a lack of captured photons (read our primer on the dominant source of noise: shot noise), modern smartphones employ many clever techniques to capture more total light, and to do so intelligently so that both highlight and shadow information are retained while subject movement from shot to shot is handled. Much of Marc’s early work, as seen in SynthCam, became the basis for the multi-shot noise averaging and bokeh techniques used in Pixel smartphones.
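
To get a feel for why this works, consider that averaging N aligned frames reduces random shot noise by roughly a factor of √N. The sketch below (our own toy example, not SynthCam or Google’s pipeline) demonstrates the idea on synthetic data and assumes the frames are already perfectly aligned, which in practice is the hard part.

```python
import numpy as np

def average_burst(frames):
    """Average a burst of aligned frames to reduce shot noise.

    frames: list of HxW (or HxWx3) float arrays of the same scene.
    Averaging N frames cuts random noise by roughly sqrt(N); real
    pipelines also align frames and reject moving subjects first.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Toy demonstration with synthetic shot (Poisson) noise:
rng = np.random.default_rng(0)
clean = np.full((100, 100), 50.0)                       # "true" photon count
burst = [rng.poisson(clean).astype(float) for _ in range(9)]
print(np.std(burst[0] - clean), np.std(average_burst(burst) - clean))
# The second number should be roughly 3x smaller (sqrt of 9 frames).
```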

Marc is also passionate about the potential for collaborative efforts and helped develop the ‘Frankencamera’ as an open source platform for experimenting with computational photography. We look forward to the innovation he’ll bring to Adobe, and hope that much of it will be available across platforms and devices to the benefit of photographers at large.


Footnotes:

[1] Apart from being renowned in the fields of imaging and computer graphics, Marc Levoy is himself a photography enthusiast and expert, and while at Stanford taught a Digital Photography class. The course was an in-depth look at everything from sensors to optics to light, color, and image processing, and is available online. We highly recommend our curious readers watch his lectures in video form and also visit Marc’s course website for lecture slides and tools that help you understand the complex concepts both visually and interactively.

[2] Our own signal-to-noise ratio analyses of Raw files from the Pixel 4 and representative APS-C and Four Thirds cameras show the Pixel 4, in Night Sight mode, to be competitive against both classes of cameras, even slightly outperforming the Four Thirds cameras (for static scene elements). See our full signal-to-noise analysis here.


Google’s Pixel 4 Astrophotography mode is now available on Pixel 2, 3 and 3a devices

07 Nov

The Google Pixel 4 offers a range of new and innovative camera features. Some of them are, mainly due to hardware requirements, exclusive to the latest Pixel device, but Google has promised to make others available for older Pixel models.

This has now happened in the case of the Pixel 4 Astrophotography mode. The function had previously been made available for older Pixels via a community-driven development effort, but it is now officially supported on older devices in version 7.2 of the Google Camera app. Below are a few sample photos captured with the Astrophotography mode on the Pixel 4:

[Embedded sample gallery: Pixel 4 Astrophotography mode photos]

Users of the Pixel 2 and Pixel 3 series, including the Pixel 3a, are now able to use the feature after updating to the latest version of the app. The Astrophotography option builds on Google’s Night Sight technology, capturing and combining several frames to achieve a clean exposure with good detail and low noise when photographing the night sky.


Google’s Dual Exposure Controls and Live HDR+ features remain exclusive to Pixel 4

22 Oct

Google’s brand-new flagship smartphone, the Pixel 4, comes with a variety of new and innovative camera features. Google has already said that some of those features, for example the astrophotography mode, will be made available on older Pixel devices via software updates.

However, two of the new functions will be reserved for the latest generation Pixel devices: Dual exposure controls, which let you adjust highlights and shadows via sliders in the camera app, and Live HDR+, which gives you a WYSIWYG live preview of Google’s HDR+ processing. Google confirmed this in a tweet:

According to the tweet, the reason these two features won’t be available on the Pixel 3 and older devices is down to hardware rather than a marketing decision. It appears the older phones simply don’t have enough processing oomph to display the HDR+ treatment and manual adjustments in real time.
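
To make the idea of dual exposure controls a little more concrete, here is a toy sketch of independent shadow and highlight sliders applied to a linear image. The weighting curves and parameter names are our own invention and bear no relation to Google’s actual implementation, which has to run this kind of adjustment on a live viewfinder feed, presumably on the GPU.

```python
import numpy as np

def apply_dual_exposure(img, shadow_gain=1.0, highlight_gain=1.0):
    """Toy illustration of independent shadow / highlight sliders.

    img: HxWx3 float array in [0, 1], assumed to be linear RGB.
    shadow_gain > 1 lifts dark regions; highlight_gain < 1 pulls back
    bright regions. The weights below are invented for illustration.
    """
    luma = img.mean(axis=-1, keepdims=True)            # crude brightness proxy
    shadow_w = np.clip(1.0 - luma / 0.25, 0.0, 1.0)    # weight for dark pixels
    highlight_w = np.clip((luma - 0.75) / 0.25, 0.0, 1.0)  # weight for bright pixels
    gain = (1.0
            + shadow_w * (shadow_gain - 1.0)
            + highlight_w * (highlight_gain - 1.0))
    return np.clip(img * gain, 0.0, 1.0)
```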


Video: Google’s Super Resolution algorithm explained in three minutes

30 May

Space constraints in the thin bodies of modern smartphones mean camera engineers are limited in terms of the size of image sensors they can use in their designs. Manufacturers have therefore been pushing computational imaging methods in order to improve the quality of their devices’ image output.

Google’s Super Resolution algorithm is one such method. It involves shooting a burst of Raw photos every time the shutter is pressed and takes advantage of the user’s natural hand shake, however slight it may be. The sub-pixel differences between the frames of the burst can then be used to merge them into an output file with improved detail at each pixel location.

An illustration that shows how multiple frames are aligned to create the final image.
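
For intuition, here is a bare-bones ‘shift and add’ merge onto a finer pixel grid, assuming the per-frame offsets are already known. Google’s actual algorithm is far more sophisticated, with robust kernel regression and motion rejection, so treat this purely as an illustration of the principle.

```python
import numpy as np

def merge_burst(frames, shifts, scale=2):
    """Toy shift-and-add super-resolution merge (not Google's algorithm).

    frames: list of HxW grayscale arrays from a handheld burst.
    shifts: per-frame (dy, dx) offsets in input pixels, assumed to be
            known here (in practice they come from burst alignment).
    scale:  upsampling factor of the output grid.
    """
    h, w = frames[0].shape
    accum = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(accum)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        # Place each sample at its sub-pixel position on the finer grid.
        oy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        ox = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(accum, (oy, ox), frame)
        np.add.at(weight, (oy, ox), 1.0)
    return accum / np.maximum(weight, 1e-6)  # unfilled cells stay near zero
```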

Google uses Super Resolution in the Night Sight feature and Super Res Zoom of the Pixel 3 devices and has previously published an in-depth article about it on its blog. Our own Rishi Sanyal has also had a close look at the technology and the features it has been implemented in.

A visual representation of the steps used to create the final image from a burst of Raw input images.

Now Google has published the above video, which provides a great overview of the technology in just over three minutes.

‘This approach, which includes no explicit demosaicing step, serves to both increase image resolution and boost signal to noise ratio,’ write the Google researchers in the paper the video is based on. ‘Our algorithm is robust to challenging scene conditions: local motion, occlusion, or scene changes. It runs at 100 milliseconds per 12-megapixel RAW input burst frame on mass-produced mobile phones.’


Google’s Photobooth brings automated selfie-shooting to the Pixel 3

19 Apr

Capturing a group selfie can be a daunting task. Someone is always looking the wrong way or unhappy with their facial expression in the shot, usually resulting in a large number of unusable shots in your camera roll. Google has now developed a clever piece of AI software for its Pixel phones that should make things much easier and reduce the image waste on your device.

Photobooth is a new shutter-free mode in the Pixel 3 Camera app. With the mode activated, you hit the shutter once and the camera automatically captures a shot when it is stable and all subjects have good facial expressions with their eyes open.


Unlike the face, smile and blink detection features of the past, Photobooth does not simply rely on the shape and specific features of the human face; smartphone processing power now allows the device to control the capture process more autonomously. Photobooth is capable of identifying five expressions: smiles, sticking your tongue out, kissy/duck face, puffed-out cheeks, and a look of surprise.

The Google engineers trained a neural network to identify these expressions in real time. After the shutter button is pressed, every preview frame is analyzed, looking for one of the expressions mentioned above and checking for camera shake.

In the camera app, a white bar that expands and shrinks indicates how photogenic the algorithm deems the preview scene to be, so users have some idea of when the camera is likely to trigger a capture.
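
In pseudocode terms, the capture loop might look something like the sketch below. The model, thresholds and function names are hypothetical; they are only meant to illustrate the ‘analyze every preview frame, fire when the expression and stability scores are high enough’ behavior described above.

```python
# Hypothetical capture-trigger loop in the spirit of Photobooth; the
# thresholds, model and function names are illustrative, not Google's code.
EXPRESSION_THRESHOLD = 0.8   # confidence that a "good" expression is present
STABILITY_THRESHOLD = 0.9    # 1.0 = perfectly steady (e.g. from gyro data)

def photobooth_loop(preview_frames, expression_model, stability_signal):
    """Yield the frames that would be captured while the mode is active.

    preview_frames:   iterable of camera preview frames.
    expression_model: callable returning a per-frame score in [0, 1]
                      (e.g. a small CNN trained on the five expressions).
    stability_signal: callable returning a per-frame steadiness score.
    """
    for frame in preview_frames:
        score = expression_model(frame)
        steady = stability_signal(frame)
        if score >= EXPRESSION_THRESHOLD and steady >= STABILITY_THRESHOLD:
            yield frame  # trigger an automatic capture
```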

Some of the technology has been ported from one of Google’s now terminated hardware projects, the Clips lifelogging camera.


Almalence compares Google’s Super Resolution Zoom to its own Super Sensor solution

05 Feb

Optical zoom lenses with a 2x or even 3x magnification factor are one of the latest trends in high-end smartphones. However, you don’t necessarily need a dedicated lens to achieve better zoom results than you get from a standard digital zoom.

In the world of computational photography, one way of getting around the lack of optical zoom is to combine a multitude of frames to capture as much detail as possible and apply some clever processing algorithms. While these methods are not yet as clear as true optical zoom, the final images they produce aren’t far off an optical system. One such example is Google’s Super Resolution Zoom on the Pixel 3 smartphone. Through this method, the Pixel 3 can produce image detail far superior to a simple digital zoom.

But Google isn’t the only company working on this. Computational imaging company Almalence, which provides imaging software to mobile device and camera makers, has a similar solution called SuperSensor, and it has shown off just how capable its system is.

On its blog Almalence has compared Google’s Super Resolution to its own Super Sensor technology by installing the latter on a Google Pixel 3 and capturing a couple of test scenes.

The company’s conclusion is that Google’s Super Resolution Zoom ‘reveals some details that are indistinguishable in the normal image,’ but ‘it’s still not the best of what super resolution can achieve.’

In the 100% crops above you can see Google’s system doing a much better job of magnifying the text in the book that served as a test subject. However, on closer inspection you’ll also see that the characters in the text are better preserved in the image captured by the Almalence system, despite an overall softer appearance.

In the original article you can find another comparison scene and all samples available for download at their original size, so you can form your own opinion about the performance of the two systems. In any case, it’s good to see how far purely software-based systems have come compared to a simple digital zoom. Combining such systems with optical zoom lenses should open up completely new possibilities on mobile devices.


Google’s Night Sight allows for photography in near darkness

15 Nov

Google’s latest Pixel 3 smartphone generation comes with the company’s new Night Sight feature, which allows for the capture of well-exposed and clean images in near darkness, without using a tripod or flash. Today Google published a post on its Research Blog explaining in detail the computational photography and machine learning techniques used by the feature and describing the challenges the development team had to overcome in order to capture the desired image results.

Night Sight builds on Google’s multi-frame-merging HDR+ mode that was first introduced in 2014, but takes things a few steps further, merging a larger number of frames and aiming to improve image quality in extremely low light levels between 3 lux and 0.3 lux.

One key difference between HDR+ and Night Sight is the longer exposure time of individual frames, which allows for lower noise levels. HDR+ uses short exposures to maintain a minimum frame rate in the viewfinder and enable instant capture with zero-shutter-lag technology. Night Sight, by contrast, waits until after you press the shutter button before capturing images, which means users need to hold still for a short time after pressing the shutter, but they get much cleaner images in return.

The longer per-frame exposure times could also result in motion blur caused by hand shake or by moving objects in the scene. This problem is solved by measuring motion in the scene and setting an exposure time that minimizes blur. Exposure times also vary based on a number of other factors, including whether the camera features OIS and the amount of device motion detected by the gyroscope.

In addition to per-frame exposure, Night Sight also varies the number of frames that are captured and merged: 6 if the phone is on a tripod and up to 15 if it is handheld.
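
As a back-of-the-envelope illustration of how such a capture plan could be chosen, here is a toy heuristic. The exposure limits and frame counts loosely echo the figures quoted above, but the logic, numbers and function names are ours, not Google’s actual motion-metering code.

```python
# A toy capture-planning heuristic inspired by the description above;
# the specific values are illustrative assumptions, not Google's.
def plan_night_sight_capture(scene_motion, handheld, has_ois):
    """Return (per_frame_exposure_s, frame_count) for a low-light burst.

    scene_motion: estimated subject motion, 0.0 (static) to 1.0 (fast).
    handheld:     True if the gyroscope reports the phone is handheld.
    has_ois:      True if the camera module has optical stabilization.
    """
    if handheld:
        # Hand shake limits per-frame exposure; OIS buys a little headroom.
        max_exposure = 1.0 / 8 if has_ois else 1.0 / 15
        frame_count = 15
    else:
        # A tripod allows much longer frames, so fewer are needed.
        max_exposure = 1.0
        frame_count = 6
    # Faster-moving subjects force shorter frames to limit motion blur.
    exposure = max_exposure * (1.0 - 0.8 * scene_motion)
    return exposure, frame_count
```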

Frame alignment and merging are additional challenges that you can read all about in detail on the Google Research Blog. Our science editor Rishi Sanyal also had a closer look at Night Sight and the Pixel 3’s other computational imaging features in this article.


The New York Times’ massive photo archive is being digitized with Google’s help

13 Nov

The New York Times has millions of printed photographs stored in an underground archive nicknamed “the morgue,” and it has begun the arduous task of digitizing this collection. Google is part of the project, according to a post on one of the company’s blogs, where it explains that its machine learning and cloud technologies will help The New York Times store, process, and search its archive.

The morgue houses between 5 and 7 million photographs dating back to the late 19th century, all of them stored in folders within file cabinets. Many of the photos haven’t been viewed in decades and all of them are at risk of damage. In 2015, for example, the morgue experienced minor damage after water leaked in from a broken pipe.

The New York Times‘ CTO Nick Rockwell said in a statement to Google:

The morgue is a treasure trove of perishable documents that are a priceless chronicle of not just The Times’s history, but of nearly more than a century of global events that have shaped our modern world … Staff members across the photo department and on the business side have been exploring possible avenues for digitizing the morgue’s photos for years. But as recently as last year, the idea of a digitized archive still seemed out of reach.

To help preserve this visual history, Google has stepped in to provide The New York Times with its cloud storage product for storing high-resolution digital copies of the photographs. The New York Times has developed a processing pipeline for the digitization project that resizes images using Google Kubernetes Engine and stores metadata in PostgreSQL, in addition to using the open-source command-line tools ExifTool and ImageMagick.

Google’s machine learning technology augments the system to offer insights into the digitized content. The company’s Cloud Vision API is used to detect text, logos, objects, and more within photographs, while the Cloud Natural Language API uses the detected text to categorize the images. This data makes it possible to search the digitized collection for specific images that would otherwise be lost in the vast archive.
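
As a rough sketch of what a single stage of such a pipeline could look like, the snippet below resizes a scan with ImageMagick, extracts its metadata with ExifTool and runs text detection through the Cloud Vision client. The file paths, sizes and surrounding plumbing are invented for illustration; this is not The New York Times’ actual code.

```python
import json
import subprocess

from google.cloud import vision  # assumes google-cloud-vision >= 2.0 is installed

# Hypothetical paths; this is an illustration, not the Times' real pipeline.
SCAN = "morgue/scan_0001.tif"
RESIZED = "processed/scan_0001.jpg"

# 1. Shrink the high-resolution scan for downstream processing (ImageMagick);
#    the '>' suffix only resizes images larger than the target box.
subprocess.run(["convert", SCAN, "-resize", "2048x2048>", RESIZED], check=True)

# 2. Extract embedded metadata as JSON (ExifTool).
exif_json = subprocess.run(["exiftool", "-json", SCAN],
                           capture_output=True, text=True, check=True).stdout
metadata = json.loads(exif_json)[0]

# 3. Detect any text on the scan (Cloud Vision OCR), e.g. a caption
#    typed or stamped on the back of a print.
client = vision.ImageAnnotatorClient()
with open(RESIZED, "rb") as f:
    image = vision.Image(content=f.read())
response = client.text_detection(image=image)
caption = response.text_annotations[0].description if response.text_annotations else ""

print(metadata.get("FileName"), caption[:80])
```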


This is why Google’s impressive object removal tool was never released

19 May

At its 2018 I/O developer conference Google presented a number of improvements to its Photos app, but there was no talk of an exciting feature that was demoed the previous year: an object removal tool that automatically removes obstructions like fences and window panes from your photos.

The feature caused quite a buzz when it was demoed in early 2017, and people have been waiting on it ever since… so why has it disappeared? The answer is simpler than you might think. According to an interview with the Google Photos team on XDA, object removal was simply de-prioritized in the development queue, giving way to other AI-powered features in Google Lens.

In the interview, Google team members said that the technology exists and could be deployed, but that Google prioritizes products based on what is most important for people, and other machine learning applications were prioritized over object removal. This means the technology might eventually be implemented into Google Photos or another Google app if the company changes its mind (and development queue), but we probably should not hold our breath.

When it was first demoed, object removal looked impressive and exceedingly useful. As you can see in the video above, the feature was shown as 100 percent automatic, without the need for any manual editing. Sure, professional photographers might want a bit more control over their cloning, but the vast majority of Google Photos users probably don’t know what the Clone Stamp tool or Content-Aware Fill even are.

If you’ve been waiting for object removal to finally make an appearance on your smartphone, knowing the source code is still stored on some hard drive at Google HQ might not be much of a consolation… but at least we now know why it has never been released.


Vivo’s AI-powered ‘Super-HDR’ tech takes on Google’s HDR+

15 Mar

Google’s HDR+ mode is widely regarded as the current benchmark for computational imaging on smartphones, but Chinese manufacturer Vivo wants to unseat the champion. Earlier today, Vivo announced its AI-powered Super HDR feature—a direct competitor to the Google system found in Pixel devices.

Super HDR is designed to improve HDR performance while keeping a natural and “unprocessed” look. To achieve this, the system captures 12 exposures (Google uses 9) and merges them into a composite image, allowing for fine control over image processing.

Additionally, AI-powered scene detection algorithms identify different elements of a scene—for example: people, the sky, the clouds, rocks, trees, etc.—and adjust exposure for each of them individually. According to Vivo, the end result looks more natural than most images that use the simpler tone-mapping technique.
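
To picture what per-element exposure adjustment means in practice, here is a minimal sketch that applies a different gain to each region of a semantic segmentation mask. The class labels, gain values and function names are invented for illustration and have nothing to do with Vivo’s actual pipeline.

```python
import numpy as np

# Hypothetical per-class exposure gains; Vivo's real values are unknown.
CLASS_GAINS = {0: 1.0,   # background
               1: 1.3,   # people: lift faces slightly
               2: 0.7,   # sky / clouds: pull back to protect highlights
               3: 1.1}   # foliage / rocks

def apply_scene_aware_gain(hdr_image, seg_mask):
    """Apply a per-region exposure gain to an already merged HDR image.

    hdr_image: HxWx3 float array in linear light.
    seg_mask:  HxW integer array of class labels (e.g. from a
               semantic segmentation network).
    """
    gain = np.ones(seg_mask.shape, dtype=np.float32)
    for label, g in CLASS_GAINS.items():
        gain[seg_mask == label] = g
    return hdr_image * gain[..., None]
```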

Looking at the provided sample images, the system appears to be doing an impressive job. That said, this kind of marketing image has to be taken with a pinch of salt; we’ll see what the system is really capable of when it’s available in a production device we can test.

Speaking of which, as of now we don’t know which device Super HDR will ship on first, but there is a chance it might be implemented on the upcoming Vivo V9, which is expected to be announced on March 22nd. The V9 is currently rumored to feature a Snapdragon 660 chipset and a 12+8MP dual camera.
