
Posts Tagged ‘Google’

Google software engineer shows what’s possible with smartphone cameras in low light

27 Apr
Image: Florian Kainz/Google

On a full-moon night last year, Google software engineer Florian Kainz took a photo of the Golden Gate Bridge, with the city of San Francisco in the background, using professional camera equipment: a Canon EOS-1D X and a Zeiss Otus 28mm F1.4 ZE lens.

When he showed the results to his colleagues at Google Gcam, a team that focuses on computational photography, they challenged him to re-take the same shot with a smartphone camera. Google’s HDR+ camera mode on the Google Nexus and Pixel phones is one of Gcam’s most interesting products. It allows for decent image quality at low light levels by shooting a burst of up to ten short exposures and averaging them into a single image, reducing blur while capturing enough total light for a good exposure.

Being an engineer, however, Florian wanted to find out what a smartphone camera can do when pushed to the limits of current technology, so he wrote an Android camera app with manual control over exposure time, ISO and focus distance. When the shutter button is pressed, the app waits a few seconds and then records up to 64 frames with the selected settings. The app saves DNG raw files, which can then be downloaded for processing on a PC.

He used the app to capture several night scenes, including an image of the night sky, with a Nexus 6P smartphone, which is capable of shutter speeds up to 2 seconds at high ISOs. On each occasion he shot an additional burst of black frames after covering the camera lens with opaque adhesive tape. Back at the office the frames were combined in Photoshop. Individual images were, as you would expect, very noisy, but computing the mean of all 32 frames cleaned up most of the grain, and subtracting the mean of the 32 black frames removed faint grid-like patterns caused by local variations in the sensor’s black level.
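The stacking procedure Kainz describes (averaging the light frames, then subtracting the mean of the black frames) can be sketched in a few lines of NumPy. This is a minimal illustration of the idea, not his actual Photoshop workflow; the function and variable names are our own.

```python
import numpy as np

def stack_frames(frames, black_frames):
    """Average a burst of noisy raw frames and subtract the sensor's
    fixed-pattern noise, estimated from lens-capped 'black' frames.

    frames, black_frames: iterables of 2-D arrays of raw sensor values.
    """
    light = np.mean(np.stack([np.asarray(f, dtype=np.float64) for f in frames]), axis=0)
    black = np.mean(np.stack([np.asarray(f, dtype=np.float64) for f in black_frames]), axis=0)
    # Averaging N frames reduces random noise by roughly sqrt(N);
    # subtracting the mean black frame removes the static grid-like
    # pattern caused by per-pixel variations in the sensor's black level.
    return np.clip(light - black, 0.0, None)
```

With 32 frames, random noise drops by a factor of about 5.7, which is why the individual exposures can look hopelessly grainy while the stacked result is clean.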

The results are very impressive indeed. At 9 to 10MP the images are smaller than the output of most current DSLRs, but the photos are sharp across the frame, there is little noise and dynamic range is surprisingly good. Getting to those results took a lot of post-processing work, but with smartphone processors becoming ever more powerful it should only be a matter of time before the sort of complex processing that Florian did manually in Photoshop can be done on the device. You can see all the image results in full resolution and read Florian’s detailed description of his capture and editing workflow on the Google Research Blog.


Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Google software engineer shows what’s possible with smartphone cameras in low light

Posted in Uncategorized

 

Android 7.1.2 update fixes Google Pixel’s pink camera streaks issue

05 Apr

Google has released Android 7.1.2 for the Pixel and Nexus smartphones, and with it comes a fix for the pink streaking issue affecting some Pixel cameras. Affected owners report pink banding and vertical lines appearing in photos taken with the Pixel’s camera app. Google had encouraged affected handset owners to factory reset their phones as a temporary workaround, and now it has released a permanent fix with the latest version of Android.

In addition to fixing the Pixel’s pink banding problem, Android 7.1.2 for Pixel and Nexus phones brings improvements to Bluetooth connectivity and fingerprint swipe performance, and also adds battery usage alerts. Google says audio popping and early shutdown issues have also been fixed.

The update is available now via both OTA images and factory images. Handset owners who don’t want to manually flash their device can wait for the update to arrive over-the-air; that rollout is underway now.

Via: Android Police

Articles: Digital Photography Review (dpreview.com)

 

 

Gcam: the story behind the Google Pixel camera software

28 Mar

Google’s now independent X research division, which calls itself ‘the moonshot factory,’ has been publishing a collection of stories about the group’s graduated projects and where they stand today. The latest article in the so-called Graduate Series offers a closer look at Gcam, the software behind the class-leading cameras in Google’s Pixel devices. 

The blog post outlines how the Gcam team was set up back in 2011 to solve the most pressing challenge of the Google Glass headset: providing a high-quality camera in a very small device. As using bigger hardware wasn’t an option, the Gcam team developed a method called image fusion, which uses multi-frame-stacking techniques to create a single, higher-quality image with lower noise levels, better detail and increased dynamic range.

The technology, now called HDR+, quickly grew beyond Google Glass, made it into the Nexus 5 and Nexus 6 cameras, and eventually became the default camera mode in the Google Pixel series. The Gcam team now works across a range of imaging-related technologies, including Android, YouTube, Google Photos and 360° Virtual Reality projects. If you are interested in more detail, you can read the full blog entry on the X blog or find our full Google Pixel XL camera review here.

Articles: Digital Photography Review (dpreview.com)

 

 

Google Guetzli is an open source JPEG encoder that creates 35% smaller files

18 Mar
20×24 pixel zoomed areas from a picture of a cat’s eye. Uncompressed original on the left. Guetzli (on the right) shows less ringing artefacts than libjpeg (middle) without requiring a larger file size. Image and caption via Google 

Google has announced the open source release of Guetzli, a new JPEG encoder able to reduce a JPEG’s file size by up to 35%, without any significant loss of quality. Per a study detailing the algorithm, Guetzli ‘aims to produce visually indistinguishable images at a lower bit-rate than other common JPEG encoders,’ including libjpeg. However, the study goes on to caution that the compression tool is ‘currently extremely slow.’

Google announced the new encoder on Thursday, describing it as a proof-of-concept that webmasters and others can freely use to reduce the size of JPEG image files. The algorithm merges ‘advanced psychovisual models with lossy compression techniques,’ according to the study, to produce high-quality compressed images. It’s a different approach from the one taken by other Google projects we’ve looked at recently, such as RAISR. Google expresses a desire to see future compression research inspired by Guetzli’s psychovisual underpinnings.

Though Google largely details Guetzli’s benefits as they pertain to webmasters (namely faster Web page loading), the algorithm is available for anyone to download and use via Github. Instructions for setting up and using the tool are provided on the Github page for multiple platforms, including Windows, macOS and Linux.
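For readers who want to try it, driving the encoder from a script is straightforward. The sketch below assumes a `guetzli` binary built from the Github repository is on your PATH; the helper function names are our own.

```python
import subprocess

def guetzli_command(src, dst, quality=95):
    """Build the command line for the guetzli encoder.

    guetzli accepts PNG or JPEG input and writes a JPEG; the --quality
    flag (default 95, minimum 84) trades file size against fidelity.
    """
    return ["guetzli", "--quality", str(quality), src, dst]

def encode(src, dst, quality=95):
    # Note: as Google's study cautions, the encoder is currently very
    # slow compared to libjpeg, so expect long runtimes on large images.
    subprocess.run(guetzli_command(src, dst, quality), check=True)
```

Something like `encode("photo.png", "photo.jpg", quality=90)` would then produce the compressed output, assuming the binary is installed.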

Via: Google Blog

Articles: Digital Photography Review (dpreview.com)

 

 

Google patent details a hat with a wearable camera and bone conduction speaker

03 Mar

Google was recently granted a patent, filed in 2013, that details a hat with a built-in camera system able to pair with a mobile device for the purpose of ‘interactive sessions.’ While a baseball cap in particular seems like a somewhat odd choice for a wearable, the system itself sounds fairly straightforward: a portable studio of sorts for live broadcasting video and snapping photos.

The system revolves around the camera but includes related technologies to form a complete package: a speaker that transmits audio to the user via bone conduction, a module that uses vibrations to direct the user’s attention from one side to the other, a microphone, and a built-in battery to power it all.

The intended purpose for the wearable camera system appears multifaceted. One obvious use is capturing content and sharing it via the mobile app, whether directly or as a live broadcast. The patent indicates the system could also serve more utilitarian purposes, though, such as getting help from a remote party (a line worker sharing a problem with someone at a facility, for example).

Whether this patent will ever be turned into a consumer product — and whether that product would actually be based around a baseball cap — is unclear at this time.

Via: Mashable

Articles: Digital Photography Review (dpreview.com)

 

 

Google AI adds detail to low-resolution images

09 Feb

It seems intelligent enhancement of image detail is currently high on the agenda at Google. Recently the company brought its RAISR smart image upsampling to Android devices. Now, the Google Brain team has developed a system that uses neural networking to enhance detail in low-resolution images.

The system uses a two-step approach, with a conditioning network first attempting to map 8×8 source images against similar images with a higher resolution and creating an approximation of what the enhanced image might look like. In a second step, the prior network adds realistic detail to the final output image. It does so by learning what each pixel in a low-resolution image generally corresponds to in higher-res files.
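The two-step structure can be pictured with a toy sketch. Neither function below is the real model: the conditioning network is a trained CNN and the prior network a learned generative model, so nearest-neighbour repetition and a random texture stand in here purely to show the shape of the pipeline.

```python
import numpy as np

def condition(lowres, scale=4):
    # Stand-in for the conditioning network, which in the real system
    # learns a mapping from the 8x8 input to a plausible approximation
    # at the target resolution. Nearest-neighbour repetition is only a
    # placeholder with the right input/output shapes.
    return np.repeat(np.repeat(lowres, scale, axis=0), scale, axis=1)

def add_prior_detail(coarse, texture, weight=0.2):
    # Stand-in for the prior network, which synthesizes realistic
    # high-frequency detail learned from training images. Here an
    # arbitrary texture array plays that role.
    return coarse + weight * texture

lowres = np.arange(64, dtype=np.float64).reshape(8, 8)  # 8x8 source
coarse = condition(lowres)                              # 32x32 approximation
rng = np.random.default_rng(1)
result = add_prior_detail(coarse, rng.normal(0.0, 1.0, coarse.shape))
```

The point of the second stage is that an 8×8 image simply does not contain the information needed for a sharp enlargement; the prior has to invent detail that is merely consistent with the input, which is why the outputs are plausible rather than faithful reconstructions.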

As you can see, the system already works pretty well. In the series of samples above, the images on the left show the 64-pixel (8×8) source images, while those in the middle show the output the Google Brain algorithm has produced from them. The images on the right show higher-resolution versions of the low-res source images for comparison. While the results are not perfect yet, they are certainly close enough to provide value in a variety of scenarios. Eventually we might even be able to extract high-resolution images from low-quality security-cam footage, à la CSI.

Articles: Digital Photography Review (dpreview.com)

 

 

Google brings RAISR smart image upsampling to Android devices

14 Jan

Google first showed off its RAISR technology, which uses machine learning to produce high-quality versions of low-resolution images, in November last year. Now the company has published a blog post to announce that RAISR has been implemented into Google+ for Android. Google+ is used by many photographers to display high-resolution images and the move is aimed at reducing mobile data requirements, which could be particularly useful in areas with slow connections or when using data is expensive, for example when roaming. 

RAISR allows images to be viewed in (almost) their full glory while cutting bandwidth requirements per image by up to 75%. Google has only just begun applying the technology to high-resolution images in the Google+ streams of a subset of Android devices, but is already processing 1 billion images per week, reducing total bandwidth for the affected users by about a third. Google says it plans to roll out RAISR more broadly in the coming weeks, so if you use Google+ frequently, your data consumption might go down soon.

Articles: Digital Photography Review (dpreview.com)

 

 

PhotoSpots uses Google Maps to pinpoint photography hotspots

13 Jan

When you’re traveling, it’s always a good idea to scope out shooting locations ahead of time. Here to help is a newly launched online service called PhotoSpots. With PhotoSpots, photographers can find so-called ‘photography hotspots’ highlighted around the globe using Google Maps and the image hosting website 500px. The service was created by photographer Mike Wong, who recently detailed his creation on Reddit.

‘I thought that it would be interesting to see where and when other photographers were taking photos,’ Wong explained in his Reddit post, ‘so I decided to create a small website that shows exactly that.’ The photography hotspots are presented as a heat map, with red areas representing heavily photographed regions. A bar beneath the map shows thumbnails for images taken in a particular region and uploaded to 500px.

Clicking PhotoSpots’ menu icon opens a slider that filters photos by month, while hovering over a specific photo thumbnail reveals on the map the precise location where it was taken. ‘I’m also planning to make a filter for categories (e.g. nature, cities etc.) to make it more personalizable,’ said Wong, though he didn’t provide a timeframe for when that feature will be added.

Via: PetaPixel

Articles: Digital Photography Review (dpreview.com)

 

 

Turn your doodles into Google satellite images with Land Lines

05 Jan

Looking for a moment of zen? We suggest spending a couple of minutes playing with Land Lines, a game that draws on Google’s satellite images of Earth.

So maybe it’s not quite a ‘game.’ Its creators, Zach Lieberman and Matt Felsen, call it an experiment: one that analyzes basic scribbles (zig zags and curves, nothing fancy) and finds a satellite image of Earth with matching features. It runs in either a mobile or desktop web browser.

Lines that you draw with your finger or a mouse become roads, shorelines and runways before your eyes, almost instantaneously. It runs seamlessly, and it’s oddly soothing. Give it a try and learn more about how it all came together.

Articles: Digital Photography Review (dpreview.com)

 

 

Google Sunroof: Search to Save Money With Home Solar Power

26 Dec

[ By WebUrbanist in Architecture & Houses & Residential. ]


Google has rolled out a search engine with a specific target in mind: your house. More specifically, it estimates how much of your roof could be covered in solar panels, what that would save you, and where to find companies that can install a system for you.

Interested homeowners can input their addresses and get realistic illustrations of how many solar panels would fit and what their potential cost savings on energy bills are likely to be.


The calculation draws on a complex data set comprising everything from your home’s location and orientation to the presence of shade from adjacent trees or buildings, but it is all broken down into simple information for users.


Google, of course, is well-positioned to make the Sunroof tool effective, combining Google Maps and Google Earth data that capture not just a building’s footprint but its 3D shape, and thus its shadows. It also has the other relevant information at hand, such as average temperatures, sun exposure and cloud cover.


Users of the tool can tinker with the variables, but are ultimately given a recommended installation that maximizes potential output and thus puts more back in homeowners’ pockets long-term. For now the tool is limited to the San Francisco Bay Area, Boston and a few other locations, but Google plans to expand it around the country.


 