
Posts Tagged ‘Google’

Google Photos excludes unsupported video formats from its free unlimited storage

13 Dec

If you can live with some compression being applied to your files, Google Photos offers a reliable and free method for storing your photos and videos. The platform can also be used for storing original-quality JPGs and Raw files, but those files will count against your quota, and once you’re out of storage you’ll have to pay for extra space.

Depending on the file types you are storing, some of your video files might now count against the quota as well. Google has introduced new rules to make unsupported videos count against your Google account storage quota. One of the reasons for this move — but likely not the only one — might be that some users apparently used fake file extensions to disguise unsupported files.

Whatever the reasons, from now on a video has to be at least one second long, be of a supported file type, be playable by Google Photos and remain playable when downloaded to your device in order not to count against your quota. Below are the file types accepted by Google:

  • Photos: .jpg, .png, .webp and some RAW files.
  • Live photos can be backed up if you use the Google Photos app on your iPhone or iPad.
  • Videos: .mpg, .mod, .mmv, .tod, .wmv, .asf, .avi, .divx, .mov, .m4v, .3gp, .3g2, .mp4, .m2t, .m2ts, .mts, and .mkv files.

Any videos uploaded after December 6 which don’t comply with these requirements will take up storage space.
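If you want a rough sense of which files in your own library would be affected, a quick script against the lists above can flag candidates. This is a minimal sketch, not anything Google provides: the extension set is copied from the bullets above, and the one-second and playability checks are left to whatever video tooling you already use.

    import os

    # Video extensions Google lists as supported (copied from the article above).
    VIDEO_EXTS = {".mpg", ".mod", ".mmv", ".tod", ".wmv", ".asf", ".avi", ".divx",
                  ".mov", ".m4v", ".3gp", ".3g2", ".mp4", ".m2t", ".m2ts", ".mts", ".mkv"}

    def video_counts_against_quota(path, duration_seconds=None):
        """Rough guess at whether a video upload will use storage quota."""
        ext = os.path.splitext(path)[1].lower()
        if ext not in VIDEO_EXTS:
            return True                      # unsupported file type
        if duration_seconds is not None and duration_seconds < 1.0:
            return True                      # shorter than one second
        return False                         # playability checks not modeled here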

Articles: Digital Photography Review (dpreview.com)

 

 

Insta360 One X update brings HDR video and Google Street View integration

06 Dec

Insta360 has released a software update for its One X 5.7K 360-degree camera. With software version 1.1.0 the camera is now capable of capturing HDR video; previously, HDR recording was only available for still images. The One X HDR mode keeps highlight and shadow clipping in your 360-degree videos to a minimum and should make for more natural-looking footage with minimal need for post-processing.

The second new feature in the update is Google Maps Street View Integration. One X owners can now use their camera to capture 360-degree content for Google Maps Street View and directly upload to Street View via the One X mobile app. The latter automatically converts video files into a series of evenly spaced 360 photo spheres for viewing on the Google platform.
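Insta360 hasn’t published how its conversion works, but the basic idea of turning a stitched 360 video into evenly spaced stills is straightforward. The sketch below is a generic illustration using OpenCV, with a made-up spacing parameter, and is not the One X app’s actual code.

    import cv2  # assumes the 360 video has already been stitched to an equirectangular file

    def extract_photo_spheres(video_path, spacing_s=2.0, out_pattern="sphere_{:04d}.jpg"):
        """Save one frame every `spacing_s` seconds as a candidate photo sphere."""
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
        step = max(1, int(round(fps * spacing_s)))
        index = saved = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % step == 0:
                cv2.imwrite(out_pattern.format(saved), frame)
                saved += 1
            index += 1
        cap.release()
        return saved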

In addition the company has announced that the One X is now available in a bundle that is exclusive to Apple. The bundle includes a number of accessories, including Insta360’s Bullet Time handle that also serves as a tripod, a selfie stick that is rendered invisible by the camera software, two batteries, and a protective pouch.

The Insta360 ONE X Camera Bundle is now available at Apple.com for $449.95. You can read our review of the Insta360 One X here.

Articles: Digital Photography Review (dpreview.com)

 

 

Some Google Pixel 3 devices hampered by camera bug

21 Nov

Google’s latest Pixel 3 devices are widely regarded as excellent camera smartphones, but some users are now reporting a serious camera bug, according to a report from Owen Williams of Charged.

For the affected Pixel owners the camera works fine when operated through the default camera app. However, if a third-party app attempts to access the imaging hardware, the camera becomes unusable and generates one of several error messages, such as “could not connect to camera,” “camera encountered fatal error,” or “the camera device encountered a fatal error.”

This means users of affected devices are unable to use third-party apps that access the camera, such as Instagram, Snapchat, or Camera+. Unfortunately a reboot, or even a factory reset, doesn’t fix the issue. After a reboot the device works fine, but only until a third-party camera app is launched again.

According to reports, Google will sometimes acknowledge the issue when pressed by the customer, but is refusing to replace affected devices. Instead users are being told they have to wait for a software update, for which there is currently no ETA.

DPReview has contacted Google and will update this article accordingly when/if a response arrives.

Is your Pixel 3 camera working as it should? Let us know in the comments.

Articles: Digital Photography Review (dpreview.com)

 

 

Google Photos for iOS update brings depth and focus editing

21 Nov

Google Photos is the default photos app on Android devices but is also a viable option for users of iOS devices. Today Google has announced an update to its Google Photos app for iOS that brings adjustable depth and focus for portrait mode shots to the Apple ecosystem.

Previously this feature had only been available in the Android version of the app. With this latest update you can now open Portrait Mode images in Google Photos, tap the editing icon and then modify depth-of-field and focus using virtual sliders.

iPhone XS, XS Max and iPad Pro users can do the same thing in Apple’s own Photos app but the owners of older portrait-mode-capable iPhones until now have not been able to adjust focus and depth after capture.

In addition, the latest update brings a Color Pop feature that converts the background of a Portrait Mode image into black and white but keeps the subject in color. You can download the updated Google Photos app from the App Store now.

Articles: Digital Photography Review (dpreview.com)

 

 

Google promises software fix for Pixel 3 image saving issues

25 Oct

It’s not unusual to see one or two software bugs on a newly released smartphone, but it looks like some Google Pixel 3 and 3 XL units are affected by a particularly nasty camera-related problem. Many early adopters have reported a bug that occasionally prevents photos from saving after capture in the camera app.

The technical details behind the problem are not entirely clear, but judging by discussions on Reddit, it appears older Pixel phones, and even Nexus devices, have had similar issues in the past.

The good news is that Google is now reacting and addressing the issue. A spokesperson talked to Android Police and provided the following statement:

“We will be rolling out a software update in the coming weeks to address the rare case of a photo not properly saving.”

The company has also confirmed that the bug will not only be fixed on the latest Pixel 3 devices but also on older Google Pixel 1 and 2 generation phones that are affected.

Even if it only happens rarely, a lost photo in an important moment is every photographer’s nightmare. So it’s good to see Google is taking steps to fix the issue across all affected models.

Articles: Digital Photography Review (dpreview.com)

 

 

Google Pixel 3 XL sample gallery

19 Oct


The Pixel 3 represents another step forward in computational photography for Google’s smartphone line. Features like super-resolution digital zoom, a synthetic fill-flash effect and learning-based Portrait Mode improvements are just a few of the ways the company is making the most of a single main camera. We’ve just started testing the Pixel 3 XL, but in the meantime take a look at some sample images.

We’ve included some Raw conversions and made Raws available for download where possible; however, please note that Raw support appears to be preliminary. Default conversions are very flat and require significant post-processing. We expect this to be remedied soon with proper profiles.

Articles: Digital Photography Review (dpreview.com)

 

 

How Google developed the Pixel 3’s Super Res Zoom technology

18 Oct

In a post on the Google AI Blog, Google engineers have laid out how they created the new Super Res Zoom technology inside the Pixel 3 and Pixel 3 XL.

Over the past year or so, several smartphone manufacturers have added multiple cameras to their phones with 2x or even 3x optical zoom lenses. Google, however, has taken a different path, deciding instead to stick with a single main camera in its new Pixel 3 models and implementing a new feature it is calling Super Res Zoom.

Unlike conventional digital zoom, Super Res Zoom technology isn’t simply upscaling a crop from a single image. Instead, the technology merges many slightly offset frames to create a higher resolution image. Google claims the end results are roughly on par with 2x optical zoom lenses on other smartphones.

Compared to the standard demosaicing pipeline that needs to interpolate missing colors due to the Bayer color filter array (top), gaps can be filled by shifting multiple images one pixel horizontally or vertically. Some dedicated cameras implement this by physically shifting the sensor in one pixel increments, but the Pixel 3 does it cleverly by essentially finding the correct alignment in software after collecting multiple, randomly shifted samples. Illustration: Google

The Google engineers are using the photographer’s hand motion – and the resulting movement between individual frames of a burst – to their advantage. Google says this natural hand tremor occurs for everyone, even those users with “steady hands”, and has a magnitude of just a few pixels when shooting with a high-resolution sensor.

The pictures in a burst are aligned by choosing a reference frame and then aligning all other frames relative to it with sub-pixel precision in software. When the device is mounted on a tripod or otherwise stabilized, natural hand motion is simulated by slightly moving the camera’s OIS module between shots.

As a bonus there’s no more need to demosaic, which results in even more image detail: with enough frames in a burst, any scene element will have fallen on a red, green, and blue pixel on the image sensor, so after alignment R, G, and B information is available for every scene element.
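A toy version of that idea looks something like the following. This is our own illustration rather than Google’s pipeline: it assumes you already have a burst of Bayer frames plus the per-frame sub-pixel offsets from the alignment step, and it simply bins each sample into the nearest output pixel of the appropriate color channel.

    import numpy as np

    def merge_bayer_burst(frames, offsets, pattern="RGGB"):
        """Accumulate sub-pixel-shifted Bayer samples into an RGB image without demosaicing.
        frames: list of HxW raw frames; offsets: per-frame (dy, dx) shifts from alignment."""
        h, w = frames[0].shape
        acc = np.zeros((h, w, 3))
        cnt = np.zeros((h, w, 3))
        chan = {"R": 0, "G": 1, "B": 2}
        cfa = np.array([[chan[pattern[0]], chan[pattern[1]]],   # color measured at each site
                        [chan[pattern[2]], chan[pattern[3]]]])  # of a 2x2 Bayer tile
        ys, xs = np.mgrid[0:h, 0:w]
        c = cfa[ys % 2, xs % 2]                                 # channel index per sensor site
        for frame, (dy, dx) in zip(frames, offsets):
            oy = np.clip(np.round(ys + dy).astype(int), 0, h - 1)   # nearest output pixel
            ox = np.clip(np.round(xs + dx).astype(int), 0, w - 1)
            np.add.at(acc, (oy, ox, c), frame)
            np.add.at(cnt, (oy, ox, c), 1)
        return acc / np.maximum(cnt, 1)    # channels never sampled at a pixel stay zero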

For full technical detail of Google’s Super Res Zoom technology head over to the Google Blog. More information on the Pixel 3’s computational imaging features can be found here.

Articles: Digital Photography Review (dpreview.com)

 

 

Google Pixel 3 interview: technical deep dive with the camera team

11 Oct

Recently, Science Editor Rishi Sanyal had the chance to sit down with two of Google’s most prominent imaging engineers and pick their brains about the software advances in the Pixel 3 and Pixel 3 XL. Isaac Reynolds is the Product Manager for Camera on Pixel, and Marc Levoy is a Distinguished Engineer and the Computational Photography Lead at Google. From computational Raw to learning-based auto white balance, they gave us an overview of some key new camera features and an explanation of the tech that makes them tick.

Features covered in this video include the wide-angle selfie camera, Synthetic Fill Flash, Night Sight, Super Resolution Zoom, computational Raw, Top Shot and the method behind improving depth maps in Portrait Mode.

These features are also covered in written form in a previously published article here.

Articles: Digital Photography Review (dpreview.com)

 

 

Five ways Google Pixel 3 pushes the boundaries of computational photography

11 Oct

With the launch of the Google Pixel 3, smartphone cameras have taken yet another leap in capability. I had the opportunity to sit down with Isaac Reynolds, Product Manager for Camera on Pixel, and Marc Levoy, Distinguished Engineer and Computational Photography Lead at Google, to learn more about the technology behind the new camera in the Pixel 3.

One of the first things you might notice about the Pixel 3 is the single rear camera. At a time when we’re seeing companies add dual, triple, even quad-camera setups, one main camera seems at first an odd choice.

But after speaking to Marc and Isaac I think that the Pixel camera team is taking the correct approach – at least for now. Any technology that makes a single camera better will make multiple cameras in future models that much better, and we’ve seen in the past that a single camera approach can outperform a dual camera approach in Portrait Mode, particularly when the telephoto camera module has a smaller sensor and slower lens, or lacks reliable autofocus.

Let’s take a closer look at some of the Pixel 3’s core technologies.

1. Super Res Zoom

Last year the Pixel 2 showed us what was possible with burst photography. HDR+ was its secret sauce, and it worked by constantly buffering nine frames in memory. When you press the shutter, the camera essentially goes back in time to those last nine frames1, breaks each of them up into thousands of ’tiles’, aligns them all, and then averages them.

Breaking each image into small tiles allows for advanced alignment even when the photographer or subject introduces movement. Blurred elements in some shots can be discarded, or subjects that have moved from frame to frame can be realigned. Averaging simulates the effects of shooting with a larger sensor by ‘evening out’ noise. And going back in time to the last 9 frames captured right before you hit the shutter button means there’s zero shutter lag.
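To make the tile-based idea concrete, here is a deliberately crude sketch of ‘break into tiles, align, average’. It uses a brute-force integer-shift search per tile and skips all of HDR+’s robustness tricks (sub-pixel alignment, blur rejection, weighting), so treat it as an illustration of the structure rather than Google’s algorithm.

    import numpy as np

    def best_shift(ref_tile, frame, y, x, search=4):
        """Integer (dy, dx) within +/-search that best matches ref_tile in `frame`."""
        th, tw = ref_tile.shape
        best, best_err = (0, 0), np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = frame[y + dy:y + dy + th, x + dx:x + dx + tw]
                if cand.shape != ref_tile.shape:
                    continue                     # shifted window fell off the image
                err = np.mean((cand - ref_tile) ** 2)
                if err < best_err:
                    best, best_err = (dy, dx), err
        return best

    def tile_align_and_average(frames, tile=64):
        """Average a burst tile by tile, aligning each tile to the first (reference) frame."""
        ref = frames[0].astype(np.float64)
        out, count = ref.copy(), np.ones_like(ref)
        h, w = ref.shape
        for y in range(0, h - tile + 1, tile):
            for x in range(0, w - tile + 1, tile):
                ref_tile = ref[y:y + tile, x:x + tile]
                for frame in frames[1:]:
                    dy, dx = best_shift(ref_tile, frame, y, x)
                    out[y:y + tile, x:x + tile] += frame[y + dy:y + dy + tile,
                                                         x + dx:x + dx + tile]
                    count[y:y + tile, x:x + tile] += 1
        return out / count                       # averaging 'evens out' noise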

Like the Pixel 2, HDR+ allows the Pixel 3 to render sharp, low noise images even in high contrast situations. Click image to view the level of detail at 100%. Photo: Google

This year, the Pixel 3 pushes all this further. It uses HDR+ burst photography to buffer up to 15 images2, and then employs super-resolution techniques to increase the resolution of the image beyond what the sensor and lens combination would traditionally achieve3. Subtle shifts from handheld shake and optical image stabilization (OIS) allow scene detail to be localized with sub-pixel precision, since shifts are unlikely to be exact multiples of a pixel.

In fact, I was told the shifts are carefully controlled by the optical image stabilization system. “We can demonstrate the way the optical image stabilization moves very slightly” remarked Marc Levoy. Precise sub-pixel shifts are not necessary at the sensor level though; instead, OIS is used to uniformly distribute a bunch of scene samples across a pixel, and then the images are aligned to sub-pixel precision in software.

We get a red, green, and blue filter behind every pixel just because of the way we shake the lens, so there’s no more need to demosaic

But Google – and Peyman Milanfar’s research team working on this particular feature – didn’t stop there. “We get a red, green, and blue filter behind every pixel just because of the way we shake the lens, so there’s no more need to demosaic” explains Marc. If you have enough samples, you can expect any scene element to have fallen on a red, green, and blue pixel. After alignment, then, you have R, G, and B information for any given scene element, which removes the need to demosaic. That itself leads to an increase in resolution (since you don’t have to interpolate spatial data from neighboring pixels), and a decrease in noise since the math required for demosaicing is itself a source of noise. The benefits are essentially similar to what you get when shooting pixel shift modes on dedicated cameras.
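As for how those alignments are recovered in software, phase correlation is one textbook way to estimate a translation between two frames to sub-pixel precision. The snippet below is a generic illustration of that technique, not Google’s align-and-merge code.

    import numpy as np

    def estimate_shift(ref, frame):
        """Estimate the (dy, dx) translation of `frame` relative to `ref` with phase
        correlation, refined to sub-pixel precision by fitting a parabola through the
        correlation peak and its two neighbours along each axis."""
        F1, F2 = np.fft.fft2(ref), np.fft.fft2(frame)
        cross = F1 * np.conj(F2)
        corr = np.fft.ifft2(cross / np.maximum(np.abs(cross), 1e-12)).real
        py, px = np.unravel_index(np.argmax(corr), corr.shape)
        shifts = []
        for p, n, line in ((py, corr.shape[0], corr[:, px]),
                           (px, corr.shape[1], corr[py, :])):
            c_m, c_0, c_p = line[(p - 1) % n], line[p], line[(p + 1) % n]
            denom = c_m - 2 * c_0 + c_p
            frac = 0.0 if denom == 0 else 0.5 * (c_m - c_p) / denom
            d = p + frac
            if d > n / 2:        # peaks past the midpoint correspond to negative shifts
                d -= n
            shifts.append(d)
        return tuple(shifts)     # (dy, dx) in pixels, with sub-pixel fractions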

  • Normal wide-angle (28mm equiv.)
  • Super Res Zoom

There’s a small catch to all this – at least for now. Super Res only activates at 1.2x zoom or more. Not in the default ‘zoomed out’ 28mm equivalent mode. As expected, the lower your level of zoom, the more impressed you’ll be with the resulting Super Res images, and naturally the resolving power of the lens will be a limitation. But the claim is that you can get “digital zoom roughly competitive with a 2x optical zoom” according to Isaac Reynolds, and it all happens right on the phone.

The results I was shown at Google appeared to be more impressive than the example we were provided above, no doubt at least in part due to the extreme zoom of our example here. We’ll reserve judgement until we’ve had a chance to test the feature for ourselves.

Would the Pixel 3 benefit from a second rear camera? For certain scenarios – still landscapes for example – probably. But having more cameras doesn’t always mean better capabilities. Quite often ‘second’ cameras have worse low light performance due to a smaller sensor and slower lens, as well as poor autofocus due to the lack of, or fewer, phase-detect pixels. One huge advantage of Pixel’s Portrait Mode is that its autofocus doesn’t differ from normal wide-angle shooting: dual pixel AF combined with HDR+ and pixel-binning yields incredible low light performance, even with fast moving erratic subjects.

2. Computational Raw

The Pixel 3 introduces ‘computational Raw’ capture in the default camera app. Isaac stressed that when Google decided to enable Raw in its Pixel cameras, they wanted to do it right, taking advantage of the phone’s computational power.

Our Raw file is the result of aligning and merging multiple frames, which makes it look more like the result of a DSLR

“There’s one key difference relative to the rest of the industry. Our DNG is the result of aligning and merging [up to 15] multiple frames… which makes it look more like the result of a DSLR” explains Marc. There’s no exaggeration here: we know very well that image quality tends to scale with sensor size thanks to a greater amount of total light collected per exposure, which reduces the impact of the most dominant source of noise in images: photon shot, or statistical, noise.

The Pixel cameras can effectively make up for their small sensor sizes by capturing more total light through multiple exposures, while aligning moving objects from frame to frame so they can still be averaged to decrease noise. That means better low light performance and higher dynamic range than what you’d expect from such a small sensor.

Shooting Raw allows you to take advantage of that extra range: by pulling back blown highlights and raising shadows otherwise clipped to black in the JPEG, and with full freedom over white balance in post thanks to the fact that there’s no scaling of the color channels before the Raw file is written.

Pixel 3 introduces in-camera computational Raw capture.

Such ‘merged’ Raw files represent a major threat to traditional cameras. The math alone suggests that, solely based on sensor size, 15 averaged frames from the Pixel 3 sensor should compete with APS-C sized sensors in terms of noise levels. There are more factors at play, including fill factor, quantum efficiency and microlens design, but needless to say we’re very excited to get the Pixel 3 into our studio scene and compare it with dedicated cameras in Raw mode, where the effects of the JPEG engine can be decoupled from raw performance.
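The back-of-the-envelope version of that math is easy to reproduce. The sensor dimensions below are approximations we have assumed (roughly a 1/2.55-inch type chip for the Pixel 3 and a typical APS-C chip), not official figures.

    # Rough total-light comparison: N averaged small-sensor frames vs. one APS-C frame.
    pixel3_area = 5.6 * 4.2      # mm^2, assumed ~1/2.55-inch type sensor (about 23.5 mm^2)
    apsc_area = 23.5 * 15.6      # mm^2, typical APS-C sensor (about 367 mm^2)
    frames = 15

    area_ratio = apsc_area / pixel3_area             # ~15.6x more area per single exposure
    light_ratio = frames * pixel3_area / apsc_area   # ~0.96x: 15 merged frames vs. one APS-C frame
    snr_gain = frames ** 0.5                         # shot-noise SNR improves with sqrt(frames), ~3.9x

    print(f"APS-C collects {area_ratio:.1f}x the light of the Pixel 3 per exposure")
    print(f"15 merged Pixel 3 frames collect {light_ratio:.2f}x the light of one APS-C exposure")
    print(f"SNR gain over a single Pixel 3 frame: about {snr_gain:.1f}x")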

While solutions do exist for combining multiple Raws from traditional cameras with alignment into a single output DNG, having an integrated solution in a smartphone that takes advantage of Google’s frankly class-leading tile-based align and merge – with no ghosting artifacts even with moving objects in the frame – is incredibly exciting. This feature should prove highly beneficial to enthusiast photographers. And what’s more – Raws are automatically uploaded to Google Photos, so you don’t have to worry about transferring them as you do with traditional cameras.

3. Synthetic Fill Flash

‘Synthetic Fill Flash’ adds a glow to human subjects, as if a reflector were held out in front of them. Photo: Google

Often a photographer will use a reflector to light the faces of backlit subjects. Pixel 3 does this computationally. The same machine-learning based segmentation algorithm that the Pixel camera uses in Portrait Mode is used to identify human subjects and add a warm glow to them.

If you’ve used the front facing camera on the Pixel 2 for Portrait Mode selfies, you’ve probably noticed how well it detects and masks human subjects using only segmentation. By using that same segmentation method for synthetic fill flash, the Pixel 3 is able to relight human subjects very effectively, with believable results that don’t confuse and relight other objects in the frame.

Interestingly, the same segmentation methods used to identify human subjects are also used for front-facing video image stabilization, which is great news for vloggers. If you’re vlogging, you typically want yourself, not the background, to be stabilized. That’s impossible with typical gyro-based optical image stabilization. The Pixel 3 analyzes each frame of the video feed and uses digital stabilization to steady you in the frame. There’s a small crop penalty to enabling this mode, but it allows for very steady video of the person holding the camera.
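Conceptually, the relighting step is a masked adjustment: brighten and slightly warm whatever the segmentation network says is the subject, and leave everything else untouched. Here is a toy sketch of that idea, assuming you already have a subject mask from a segmentation model; the gain and warmth numbers are arbitrary and this is not Google’s implementation.

    import numpy as np

    def synthetic_fill_flash(image, subject_mask, strength=0.35, warmth=(1.10, 1.03, 0.95)):
        """Brighten and slightly warm the masked subject, leaving the background alone.
        image: HxWx3 float array in [0, 1]; subject_mask: HxW float array in [0, 1]."""
        mask = subject_mask[..., None]                     # broadcast over color channels
        warmed = np.clip(image * np.array(warmth), 0.0, 1.0)
        lifted = np.clip(warmed + strength * (1.0 - warmed) * warmed, 0.0, 1.0)
        return image * (1.0 - mask) + lifted * mask        # blend only inside the mask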

4. Learning-based Portrait Mode

The Pixel 2 had one of the best Portrait Modes we’ve tested despite having only one lens. This was due to its clever use of split pixels to sample a stereo pair of images behind the lens, combined with machine-learning based segmentation to understand human vs. non-human objects in the scene (for an in-depth explanation, watch my video here). Furthermore, dual pixel AF meant robust performance even with moving subjects in low light, great for constantly moving toddlers. The Pixel 3 brings some significant improvements despite lacking a second lens.

According to computational lead Marc Levoy, “Where we used to compute stereo from the dual pixels, we now use a learning-based pipeline. It still utilizes the dual pixels, but it’s not a conventional algorithm, it’s learning based”. What this means is improved results: more uniformly defocused backgrounds and fewer depth map errors. Have a look at the improved results with complex objects, where many approaches are unable to reliably blur backgrounds ‘seen through’ holes in foreground objects:

  • Learned result: background objects, especially those seen through the toy, are consistently blurred. Objects around the peripheries of the image are also more consistently blurred.
  • Learned depth map: note how objects in the background (blue) aren’t confused as being closer to the foreground (yellow) as they are in the stereo-only depth map.
  • Stereo-only result: background objects, especially those seen through the toy, aren’t consistently blurred.
  • Stereo-only depth map from dual pixels: note how some elements in the background appear to be closer to the foreground than they really are.

Interestingly, this learning-based approach also yields better results with mid-distance shots where a person is further away. Typically, the further away your subject is, the less difference in stereo disparity between your subject and background, making accurate depth maps difficult to compute given the small 1mm baseline of the split pixels. Take a look at the Portrait Mode comparison below, with the new algorithm on the left vs. the old on the right.

  • Learned result: the background is uniformly defocused, and the ground shows a smooth, gradual blur.
  • Stereo-only result: note the sharp railing in the background, and the harsh transition from in-focus to out-of-focus in the ground.
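To connect the depth map to the rendered result, the sketch below applies a crude depth-dependent blur: pixels are grouped into discrete depth layers and each layer gets a Gaussian blur proportional to its distance from the in-focus plane. Real portrait pipelines handle occlusions and bokeh shape far more carefully; this is only an illustration.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def portrait_blur(image, depth, focus_depth, max_sigma=8.0, layers=6):
        """Blur each pixel in proportion to its distance from the in-focus depth.
        image: HxWx3 float; depth: HxW float (larger = farther); focus_depth: scalar."""
        norm = np.abs(depth - focus_depth)
        norm = norm / max(norm.max(), 1e-6)            # 0 at the subject, 1 farthest away
        out = np.zeros_like(image)
        weight = np.zeros(depth.shape)
        for i in range(layers):
            lo, hi = i / layers, (i + 1) / layers
            band = ((norm >= lo) & (norm < hi)).astype(float)
            sigma = max_sigma * lo                      # the band containing the subject stays sharp
            blurred = np.stack([gaussian_filter(image[..., c], sigma) for c in range(3)], axis=-1)
            out += blurred * band[..., None]
            weight += band
        out[weight == 0] = image[weight == 0]           # pixels exactly at norm == 1.0
        return out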

5. Night Sight

Rather than simply rely on long exposures for low light photography, ‘Night Sight’ utilizes HDR+ burst mode photography to take usable photos in very dark situations. Previously, the Pixel 2 would never drop below 1/15s shutter speed, simply because it needed faster shutter speeds to maintain that 9-frame buffer with zero shutter lag. That does mean that even the Pixel 2 could, in very low light, effectively sample 0.6 seconds (9 x 1/15s), but sometimes that’s not even enough to get a usable photo in extremely dark situations.

The camera will merge up to 15 frames… to get you an image equivalent to a 5 second exposure

The Pixel 3 now has a ‘Night Sight’ mode which sacrifices the zero shutter lag and expects you to hold the camera steady after you’ve pressed the shutter button. When you do so, the camera will merge up to 15 frames, each with shutter speeds as low as, say, 1/3s, to get you an image equivalent to a 5 second exposure. But without the motion blur that would inevitably result from such a long exposure.

Put simply: even though there might be subject or handheld movement over the entire 5s span of the 15-frame burst, many of the 1/3s ‘snapshots’ in that burst are likely to still be sharp, albeit possibly displaced relative to one another. The tile-based alignment of Google’s ‘robust merge’ technology, however, can handle inter-frame movement by aligning objects that have moved and discarding tiles of any frame that show too much motion blur.
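In spirit, the merge is a select-and-average over the burst. The sketch below works on whole frames with a simple gradient-based sharpness score, whereas the real pipeline operates per tile and is considerably smarter, so treat it purely as an illustration of the exposure math and frame rejection.

    import numpy as np

    def night_sight_merge(frames, exposure_s=1/3, keep_fraction=0.8):
        """Average the sharpest frames of a burst; report the effective exposure.
        frames: list of HxW (or HxWx3) float arrays, already aligned."""
        def sharpness(f):
            g = f.mean(axis=-1) if f.ndim == 3 else f
            gy, gx = np.gradient(g)
            return float(np.mean(gy ** 2 + gx ** 2))    # motion blur lowers this score
        scores = [sharpness(f) for f in frames]
        keep = max(1, int(len(frames) * keep_fraction))
        best = np.argsort(scores)[-keep:]               # drop the blurriest frames
        merged = np.mean([frames[i] for i in best], axis=0)
        effective_exposure = keep * exposure_s          # e.g. 15 x 1/3 s = 5 s of gathered light
        return merged, effective_exposure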

Have a look at the results below, which also shows you the benefit of the wider-angle, second front-facing ‘groupie’ camera:

  • Normal front-camera ‘selfie’
  • Night Sight ‘groupie’ with wide-angle front-facing lens

Furthermore, Night Sight mode takes a machine-learning based approach to auto white balance. It’s often very difficult to determine the dominant light source in such dark environments, so Google has opted to use learning-based AWB to yield natural looking images.

Final thoughts: simpler photography

The philosophy behind the Pixel camera – and for that matter the philosophy behind many smartphone cameras today – is one-button photography. A seamless experience without the need to activate various modes or features.

This is possible thanks to the computational approaches these devices embrace. The Pixel camera and software are designed to give you pleasing results without requiring you to think much about camera settings. Synthetic fill flash activates automatically with backlit human subjects, and Super Resolution automatically kicks in as you zoom.

At their best, these technologies allow you to focus on the moment

Motion photos turns on automatically when the camera detects interesting activity, and Top Shot now uses AI to automatically suggest the best photo of the bunch, even if it’s a moment that occurred before you pressed the shutter button. Autofocus typically focuses on human subjects very reliably, but when you need to specify your subject, just tap on it and ‘Motion Autofocus’ will continue to track and focus on it very reliably. Perfect for your toddler or pet.

At their best, these technologies allow you to focus on the moment, perhaps even enjoy it, and sometimes even help you to capture memories you might have otherwise missed.

We’ll be putting the Pixel 3 through its paces soon, so stay tuned. In the meantime, let us know in the comments below what your favorite features are, and what you’d like to see tested.


1In good light, these last 9 frames typically span the last 150ms before you pressed the shutter button. In very low light, it can span up to the last 0.6s.

2We were only told ‘say, maybe 15 images’ in conversation about the number of images in the buffer for Super Res Zoom and Night Sight. It may be more, it could be less, but we were at least told that it is more than 9 frames. One thing to keep in mind is that even if you have a 15-frame buffer, not all frames are guaranteed to be usable. For example, if in Night Sight one or more of these frames have too much subject motion blur, they’re discarded.

3You can achieve a similar super-resolution effect manually with traditional cameras, and we describe the process here.

Articles: Digital Photography Review (dpreview.com)

 