
Archive for the ‘Uncategorized’ Category

Photokina 2018: hands-on with the Leica S3

12 Oct

Hands-on: Leica S3 medium format camera

With all the excitement surrounding full-frame system announcements at Photokina recently, it’s easy to forget the new arrivals in the medium format digital arena. While Fujifilm announced the GFX 50R – the second medium format digital camera in the GFX line – Leica unveiled the S3, an update to the Leica S2, which launched in 2008.

A full spec sheet has yet to be released, but we do know the S3 will launch in Spring 2019 and have a 64MP sensor, 3 fps burst rate and 4K video capture using the full width of the sensor. And if its predecessor’s pricing is any indication, it will likely cost somewhere in the vicinity of $20k.

Despite the lack of specification details, we did get our hands on an early working version of the camera. Here’s how it handled.


The first thing I noticed when picking up the S3 is that, for a medium format digital camera, it really isn’t that large. In fact, it feels similar in hand to a Nikon D5 or Canon EOS-1D X Mark II – I had to remind myself it has a larger sensor than both.

It is also quite pleasant to hold. The shutter release is located on the front of the camera and is easy to access. An indentation below the shutter release provides a comfortable place for your other fingers, and also makes the camera feel secure in hand.

As you might expect of a five-figure camera, the S3 handles like it’s built to last – the magnesium alloy body feels like it could be used to drive a spike into the earth. The rubber material covering much of the exterior is thick and grippy, giving the whole camera a rugged quality.


The second thing I noticed about the S3 is its big, beautiful optical finder, among the loveliest I can recall looking through. For reference, the S2’s finder has 0.86x magnification and the S3’s is likely similar.

The back of the camera looks essentially identical to the original Leica S2 as well as the more recent Leica SL. The four buttons surrounding the 3″ LCD are programmable. There’s also a programmable button on the front of the camera near the mount.

Like the S2, the S3 has two different shutters which can be engaged via a three-way controller on the back of the camera: ‘FPS’ stands for focal plane shutter and ‘CS’ stands for center shutter or leaf shutter (available with compatible lenses). I tried both and the leaf shutter is a good bit quieter.


Operationally, the camera felt fast. It was quick to start up and load menus. I also found the dual top-plate info LCDs to be kind of cool and definitely unique.

I didn’t get to shoot much with the S3 but I did get to spend a little bit of time focusing around the room with my eye to the finder. Overall, despite being a non-final product, AF acquisition speeds felt surprisingly quick. On the other hand, using the 5-way AF joystick to actually move points felt a tad sluggish.


I mentioned that the S3 will be able to do 4K video with full-sensor readout. To further expand video capabilities, the S3 offers audio in/out and HDMI, as well as LEMO-style ports for USB and remote trigger/flash connectivity. All these connection points have thick rubber covers to protect them from the elements and grime.


Overall, the Leica S3 feels like a utilitarian tool, built for working professionals. It may have the price tag of an expensive museum piece, but it does not handle like one.

Of course, this is a camera that will most likely be purchased by agencies, studios and perhaps some very high-end pros – not everyday shooters – making the high price tag seem less cringe-worthy. More importantly, my brief time with the Leica S3 has raised the bar for just how much I can lust over a new digital camera. Nice work, Leica.

Articles: Digital Photography Review (dpreview.com)

 

FAA issues warning to drone pilots in hurricane areas

12 Oct

The FAA, the agency that regulates airspace in the United States, has issued a warning to drone operators as a reminder not to interfere with emergency operations in areas affected by Hurricane Michael.

Drones are frequently used during disasters for tasks such as search and rescue or damage assessment, but this work is performed by trained professionals and volunteers, and is tightly coordinated by emergency agencies to avoid possible interference with low flying aircraft involved in the disaster response.

Fines for interfering with emergency operations may exceed $20,000, but more importantly, flying a drone in an affected area could impact emergency operations at a critical time. Pilots who wish to contribute to recovery efforts are encouraged to do so through volunteer organizations that work directly with the local incident commander.

Although most drone pilots will know to avoid interference with emergency operations, this is a friendly reminder not to be that person who inevitably ends up on the evening news for flying their drone directly into a disaster zone.

Here’s the official warning from the FAA for those who want details:

Hurricane Michael: Information for Drone Operators

The Federal Aviation Administration (FAA) is warning drone owners and operators that they will be subject to significant fines that may exceed $20,000 if they interfere with emergency response operations in the areas affected by Hurricane Michael.

Many aircraft that are conducting life-saving missions and other critical response and recovery efforts are likely to be flying at low altitudes over areas affected by the storm. Flying a drone without authorization in or near the disaster area may unintentionally disrupt rescue operations and violate federal, state, or local laws and ordinances, even if a Temporary Flight Restriction (TFR) is not in place. Allow first responders to save lives and property without interference.

Government agencies with an FAA Certificate of Authorization (COA) or flying under Part 107, as well as private sector Part 107 drone operators who want to support response and recovery operations, are strongly encouraged to coordinate their activities with the local incident commander responsible for the area in which they want to operate.

If drone operators need to fly in controlled airspace or a disaster TFR to support the response and recovery, they must contact the FAA’s System Operations Support Center (SOSC) by emailing 9-ATOR-HQ-SOSC@faa.gov with the information needed to authorize access to the airspace. Coordination with the SOSC may also include a requirement that a drone operator obtain support from the appropriate incident commander.

Here’s the information the FAA may require:

  • the unmanned aircraft type
  • a PDF copy of a current FAA COA
  • the pilot’s Part 107 certificate number
  • details about the proposed flight (date, time, location, altitude, direction and distance to the nearest airport, and latitude/longitude)
  • nature of the event (fire, law enforcement, local/national disaster, missing person) and the pilot’s qualification information.

Articles: Digital Photography Review (dpreview.com)

 

The Samsung Galaxy A9 is the first quad-cam smartphone

12 Oct

It looked like Lenovo was going to be the first smartphone manufacturer to launch a quad-camera model but Samsung has outpaced its rival on the final straight. Today, the Korean company launched the Galaxy A9, the world’s first smartphone with a quad-camera setup.

The device’s main camera offers 24MP resolution and an F1.7 aperture. There is a 10MP 2x telephoto camera with an F2.4 aperture and an 8MP super-wide-angle with a 120-degree field of view, also at F2.4. The fourth camera is a 5MP depth sensor used to create a simulated bokeh effect.

There is no mention of optical image stabilization but the phone features AI-powered scene recognition for optimized exposure and other image parameters. The front camera comes with a 24MP resolution and F2 aperture.

Images can be composed and viewed on a 6.3-inch Super AMOLED screen with 1,080 x 2,220 pixel resolution and stored on 128GB of internal memory or microSD card. The Android 8.0 OS is powered by an octa-core processor and 6 or 8GB of RAM.

Power is provided by a 3,800mAh battery, and a fingerprint reader is on board as well. The new phone will be available in a range of colors from November and will set you back $695 (599 EUR) in the Eurozone. Unfortunately, no US pricing information has been released yet.

Articles: Digital Photography Review (dpreview.com)

 

Lawsuit claims Apple’s dual-camera setup in recent iPhones infringes on 2003 patent

12 Oct

A lawsuit filed with the U.S. District Court for the Northern District of California on Tuesday claims the cameras in Apple’s iPhone 7 Plus and newer dual-camera models infringe on a patent that was granted in 2003 and is based on an invention from 1999.

Plaintiffs Yanbin Yu and Zhongxuan Zhang allege Apple’s dual-cameras are in infringement of U.S. Patent No. 6,611,289 for “Digital cameras using multiple sensors with multiple lenses”.

The patent describes methods for capturing multiple images using multiple lens and sensor arrays. It focuses on a four-camera setup that captures images on monochrome sensors and merges them into a single color image. According to the lawsuit, Apple was aware of the existing patent as early as 2011.

The complaint also alleges that Apple’s own multi-sensor camera patent, No. 8,115,825, “Electronic device with two image sensors,” which was filed in 2008 and granted in 2012, claimed “many of the same features” as the patent from Yu and Zhang.

The plaintiffs note that Apple made significant investments in its dual-camera technology, acquiring 3D sensor specialist PrimeSense in 2013 and camera technology company LinX Imaging in 2015, but never sought to license Yu and Zhang’s patent, allegedly launching several iPhone models in the knowledge that they infringed on someone else’s patent.

This is not the first time Apple has faced camera-related legal problems. Earlier this year, Israel-based company CorePhotonics also filed a lawsuit against the US company. We’ll continue to keep an eye on both cases.

Articles: Digital Photography Review (dpreview.com)

 

Luna Display, the dongle that turns your iPad into a second screen, now available online

12 Oct

Luna Display, the little hardware dongle that turns your iPad into a second display, is now available to consumers. Luna Display was developed by the makers of Astropad, an iOS app that turns your iPad into a graphics tablet for the Mac, and started out as a crowdfunding project on Kickstarter.

Luna is available for USB-C or Mini DisplayPort and works through a Wi-Fi connection. The device lets you use your Mac directly from the iPad with full support for external keyboards, Apple Pencil and Apple touch interactions including pinching, panning and tapping.

According to its makers, Luna Display can tap into the processing power of your Mac’s GPU, allowing for a virtually lag-free user experience and images without the glitching, artifacts or blurriness that purely software-based solutions are prone to.

Luna Display requires a Mac running macOS 10.11 El Capitan (or later). For optimal performance a MacBook Air (2012 and later), MacBook Pro (2012 and later), Mac mini (2012 and later), iMac (2012 and later) or Mac Pro (Late 2013) are recommended.

The iPad must run iOS 9.1 or newer and should be an iPad 2 (or later), any iPad Mini, or any iPad Pro.

Luna Display is now available for $79.99 on the Luna website, where you’ll also find more information.

Articles: Digital Photography Review (dpreview.com)

 

Google Pixel 3 interview: technical deep dive with the camera team

11 Oct

Recently, Science Editor Rishi Sanyal had the chance to sit down with two of Google’s most prominent imaging engineers and pick their brains about the software advances in the Pixel 3 and Pixel 3 XL. Isaac Reynolds is the Product Manager for Camera on Pixel, and Marc Levoy is a Distinguished Engineer and Computational Photography Lead at Google. From computational Raw to learning-based auto white balance, they gave us an overview of some key new camera features and an explanation of the tech that makes them tick.

Features covered in this video include the wide-angle selfie camera, Synthetic Fill Flash, Night Sight, Super Resolution Zoom, computational Raw, Top Shot and the method behind improving depth maps in Portrait Mode.

These features are also covered in written form in a previously published article here.

Articles: Digital Photography Review (dpreview.com)

 

Nikon Coolpix P1000 First impressions review

11 Oct

Four years ago, the typical superzoom ‘bridge’ camera had a zoom power of around 50x. Over the years that number has slowly risen, before leveling out at 65x. And then came the Nikon Coolpix P900, whose 83x, 24-2000mm equiv. lens suddenly took zoom ranges from ‘really long’ to ‘absurd’.

Nikon’s new Coolpix P1000 has moved the zoom needle to ‘ludicrous,’ with an equivalent focal length of 24-3000mm. That’s right, 3000mm. This is a lens so long that we were able to fill the frame with a 1 meter (3.3 foot) tall monkey that’s 2.3 kilometers (1.4 miles) away.

This does come at a cost, though. For one thing, the P1000 is huge, its lens is hampered by a slow maximum aperture (and thus diffraction), and image quality can be compromised by the same thermal and atmospheric issues that affect images taken at extreme focal lengths with any super telephoto lens.

Besides the lens, the P1000 features a 16MP 1/2.3″ BSI-CMOS sensor, a fully articulating LCD and high-res EVF, Raw support and the ability to capture 4K video.

Key features

  • 16MP, 1/2.3″ BSI-CMOS sensor
  • 24-3000mm equiv. F2.8-8 lens
  • ‘Dual Detect’ optical image stabilization
  • 3.2″, 921k-dot fully articulating LCD
  • 2.36M-dot OLED electronic viewfinder with eye sensor
  • Raw support
  • UHD 4K/30p video capture
  • Microphone input
  • Hot shoe
  • Wi-Fi + Bluetooth (SnapBridge)
  • 250 shots per charge (CIPA standard)

The P1000 has a spec sheet almost as long as its lens. From Raw support to a high-res EVF, the camera has just about everything you’d want in a bridge camera, save for decent battery life and a touchscreen (a glaring omission). Image stabilization is a must on superzoom cameras, and Nikon’s ‘Dual Detect VR’ reduces shake by up to 5 stops, according to the company. This being 2018, it’s no surprise that Wi-Fi and Bluetooth are also onboard.


What’s new and how it compares

The Coolpix P1000 really is all about that lens.


Shooting experience

Find out what it’s like to use the P1000 at the Woodland Park Zoo in Seattle.


Sample gallery

View a variety of sample images from the Coolpix P1000.


Articles: Digital Photography Review (dpreview.com)

 

We’re hiring! DPReview is looking to add three Software Development Engineers

11 Oct

DPReview is hiring! We’re seeking three Software Development Engineers at a range of experience levels to join our Seattle-based team. In addition to a Senior SDE, we’re looking for two more engineers to join us and help build the future of DPReview.

In these roles, you’ll build on the full power of AWS and use the latest web standards and technologies to create industry-leading experiences for millions of visitors. With quick release cycles, you will test your ideas in the real world and get instant feedback from a passionate audience. With full-stack ownership, you’ll have direct impact on the look, feel and infrastructure of one of the web’s top photography websites.

Find more information and a link to apply below.

Apply now:
Senior Software Development Engineer – Team Lead

Apply now:
Software Development Engineer
(1+ years of experience)

Apply now:
Software Developer
(4+ years of experience)

Articles: Digital Photography Review (dpreview.com)

 

Five ways Google Pixel 3 pushes the boundaries of computational photography

11 Oct

With the launch of the Google Pixel 3, smartphone cameras have taken yet another leap in capability. I had the opportunity to sit down with Isaac Reynolds, Product Manager for Camera on Pixel, and Marc Levoy, Distinguished Engineer and Computational Photography Lead at Google, to learn more about the technology behind the new camera in the Pixel 3.

One of the first things you might notice about the Pixel 3 is the single rear camera. At a time when we’re seeing companies add dual, triple, even quad-camera setups, one main camera seems at first an odd choice.

But after speaking to Marc and Isaac I think that the Pixel camera team is taking the correct approach – at least for now. Any technology that makes a single camera better will make multiple cameras in future models that much better, and we’ve seen in the past that a single camera approach can outperform a dual camera approach in Portrait Mode, particularly when the telephoto camera module has a smaller sensor and slower lens, or lacks reliable autofocus.

Let’s take a closer look at some of the Pixel 3’s core technologies.

1. Super Res Zoom

Last year the Pixel 2 showed us what was possible with burst photography. HDR+ was its secret sauce, and it worked by constantly buffering nine frames in memory. When you press the shutter, the camera essentially goes back in time to those last nine frames [1], breaks each of them up into thousands of ‘tiles’, aligns them all, and then averages them.

Breaking each image into small tiles allows for advanced alignment even when the photographer or subject introduces movement. Blurred elements in some shots can be discarded, or subjects that have moved from frame to frame can be realigned. Averaging simulates the effects of shooting with a larger sensor by ‘evening out’ noise. And going back in time to the last 9 frames captured right before you hit the shutter button means there’s zero shutter lag.
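To make the mechanics concrete, here is a minimal sketch of tile-based align-and-average on a grayscale burst. It illustrates the general idea, not Google’s implementation; the tile size, search radius and rejection threshold are arbitrary assumptions.

```python
import numpy as np

def align_and_merge(frames, tile=16, search=4, reject_thresh=100.0):
    """Toy tile-based align-and-average on a burst of grayscale frames.

    frames[0] is the reference. For each tile of the reference, find the
    best-matching tile in every other frame within +/-`search` pixels,
    then average the matches. Tiles that match poorly (motion blur,
    occlusion) are discarded rather than averaged in.
    """
    ref = frames[0].astype(np.float64)
    h, w = ref.shape
    out = ref.copy()
    count = np.ones_like(ref)
    for frame in frames[1:]:
        f = frame.astype(np.float64)
        for y in range(0, h - tile + 1, tile):
            for x in range(0, w - tile + 1, tile):
                ref_tile = ref[y:y + tile, x:x + tile]
                best, best_err = None, np.inf
                # brute-force local search for the best-matching tile
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy <= h - tile and 0 <= xx <= w - tile:
                            cand = f[yy:yy + tile, xx:xx + tile]
                            err = np.mean((cand - ref_tile) ** 2)
                            if err < best_err:
                                best, best_err = cand, err
                # discard tiles that match too poorly (threshold is
                # arbitrary, chosen here for 8-bit pixel values)
                if best is not None and best_err < reject_thresh:
                    out[y:y + tile, x:x + tile] += best
                    count[y:y + tile, x:x + tile] += 1
    return out / count  # averaging N samples cuts shot noise ~sqrt(N)
```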

Like the Pixel 2, HDR+ allows the Pixel 3 to render sharp, low noise images even in high contrast situations. Photo: Google

This year, the Pixel 3 pushes all this further. It uses HDR+ burst photography to buffer up to 15 images [2], and then employs super-resolution techniques to increase the resolution of the image beyond what the sensor and lens combination would traditionally achieve [3]. Subtle shifts from handheld shake and optical image stabilization (OIS) allow scene detail to be localized with sub-pixel precision, since shifts are unlikely to be exact multiples of a pixel.

In fact, I was told the shifts are carefully controlled by the optical image stabilization system. “We can demonstrate the way the optical image stabilization moves very slightly” remarked Marc Levoy. Precise sub-pixel shifts are not necessary at the sensor level though; instead, OIS is used to uniformly distribute a bunch of scene samples across a pixel, and then the images are aligned to sub-pixel precision in software.

We get a red, green, and blue filter behind every pixel just because of the way we shake the lens, so there’s no more need to demosaic

But Google – and Peyman Milanfar’s research team working on this particular feature – didn’t stop there. “We get a red, green, and blue filter behind every pixel just because of the way we shake the lens, so there’s no more need to demosaic” explains Marc. If you have enough samples, you can expect any scene element to have fallen on a red, green, and blue pixel. After alignment, then, you have R, G, and B information for any given scene element, which removes the need to demosaic. That itself leads to an increase in resolution (since you don’t have to interpolate spatial data from neighboring pixels), and a decrease in noise since the math required for demosaicing is itself a source of noise. The benefits are essentially similar to what you get when shooting pixel shift modes on dedicated cameras.
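As a toy illustration of why shifted mosaics remove the need to demosaic, consider integer-pixel shifts of an RGGB sensor: shifting the frame also shifts the color filter phase, so after alignment each output pixel accumulates direct red, green and blue samples. This sketch is a deliberate simplification (real alignment is sub-pixel and the shifts come from OIS and hand shake); everything in it is assumed for illustration.

```python
import numpy as np

def merge_bayer_frames(frames, offsets):
    """Toy merge of integer-shifted RGGB mosaics so each output pixel
    gathers direct R, G and B samples (no demosaicing interpolation).

    frames: list of HxW raw mosaics; offsets: per-frame (dy, dx) shifts
    recovered by alignment, assumed integer here for simplicity.
    """
    h, w = frames[0].shape
    acc = np.zeros((h, w, 3))
    cnt = np.zeros((h, w, 3))
    cfa = np.array([[0, 1], [1, 2]])  # RGGB pattern: 0=R, 1=G, 2=B
    for frame, (dy, dx) in zip(frames, offsets):
        shifted = np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
        # the CFA pattern moves with the frame, so its phase shifts too
        rows = (np.arange(h)[:, None] - dy) % 2
        cols = (np.arange(w)[None, :] - dx) % 2
        chan = cfa[rows, cols]
        for c in range(3):
            mask = chan == c
            acc[..., c][mask] += shifted[mask]
            cnt[..., c][mask] += 1
    # with offsets covering all four CFA phases, every pixel has R, G, B
    return acc / np.maximum(cnt, 1)
```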

Comparison: normal wide-angle (28mm equiv.) vs. Super Res Zoom.

There’s a small catch to all this – at least for now. Super Res only activates at 1.2x zoom or more. Not in the default ‘zoomed out’ 28mm equivalent mode. As expected, the lower your level of zoom, the more impressed you’ll be with the resulting Super Res images, and naturally the resolving power of the lens will be a limitation. But the claim is that you can get “digital zoom roughly competitive with a 2x optical zoom” according to Isaac Reynolds, and it all happens right on the phone.

The results I was shown at Google appeared to be more impressive than the example we were provided above, no doubt at least in part due to the extreme zoom of our example here. We’ll reserve judgement until we’ve had a chance to test the feature for ourselves.

Would the Pixel 3 benefit from a second rear camera? For certain scenarios – still landscapes for example – probably. But having more cameras doesn’t always mean better capabilities. Quite often ‘second’ cameras have worse low light performance due to a smaller sensor and slower lens, as well as poor autofocus due to the lack of, or fewer, phase-detect pixels. One huge advantage of Pixel’s Portrait Mode is that its autofocus doesn’t differ from normal wide-angle shooting: dual pixel AF combined with HDR+ and pixel-binning yields incredible low light performance, even with fast moving erratic subjects.

2. Computational Raw

The Pixel 3 introduces ‘computational Raw’ capture in the default camera app. Isaac stressed that when Google decided to enable Raw in its Pixel cameras, they wanted to do it right, taking advantage of the phone’s computational power.

Our Raw file is the result of aligning and merging multiple frames, which makes it look more like the result of a DSLR

“There’s one key difference relative to the rest of the industry. Our DNG is the result of aligning and merging [up to 15] multiple frames… which makes it look more like the result of a DSLR” explains Marc. There’s no exaggeration here: we know very well that image quality tends to scale with sensor size thanks to a greater amount of total light collected per exposure, which reduces the impact of the most dominant source of noise in images: photon shot, or statistical, noise.

The Pixel cameras can effectively make up for their small sensor sizes by capturing more total light through multiple exposures, while aligning moving objects from frame to frame so they can still be averaged to decrease noise. That means better low light performance and higher dynamic range than what you’d expect from such a small sensor.

Shooting Raw allows you to take advantage of that extra range: by pulling back blown highlights and raising shadows otherwise clipped to black in the JPEG, and with full freedom over white balance in post thanks to the fact that there’s no scaling of the color channels before the Raw file is written.

Pixel 3 introduces in-camera computational Raw capture.

Such ‘merged’ Raw files represent a major threat to traditional cameras. The math alone suggests that, solely based on sensor size, 15 averaged frames from the Pixel 3 sensor should compete with APS-C sized sensors in terms of noise levels. There are more factors at play, including fill factor, quantum efficiency and microlens design, but needless to say we’re very excited to get the Pixel 3 into our studio scene and compare it with dedicated cameras in Raw mode, where the effects of the JPEG engine can be decoupled from raw performance.
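As a rough sanity check on that claim, here is the back-of-envelope arithmetic. The sensor dimensions below are our own ballpark assumptions for illustration, not published specs:

```python
import math

# Shot-noise back-of-envelope behind the "competes with APS-C" claim.
phone_area = 5.6 * 4.2    # mm^2, a typical ~1/2.55" smartphone sensor (assumed)
apsc_area  = 23.5 * 15.6  # mm^2, a typical APS-C sensor

frames = 15
light_ratio = frames * phone_area / apsc_area  # total light vs one APS-C frame
snr_gain = math.sqrt(frames)                   # shot-noise SNR scales with sqrt(N)

print(f"{frames} merged frames gather {light_ratio:.2f}x the light of one APS-C exposure")
print(f"SNR gain over a single phone frame: {snr_gain:.1f}x")
# -> roughly 0.96x and 3.9x: fifteen averaged small-sensor frames collect
#    about as much total light as a single APS-C exposure.
```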

While solutions do exist for combining multiple Raws from traditional cameras with alignment into a single output DNG, having an integrated solution in a smartphone that takes advantage of Google’s frankly class-leading tile-based align and merge – with no ghosting artifacts even with moving objects in the frame – is incredibly exciting. This feature should prove highly beneficial to enthusiast photographers. And what’s more – Raws are automatically uploaded to Google Photos, so you don’t have to worry about transferring them as you do with traditional cameras.

3. Synthetic Fill Flash

‘Synthetic Fill Flash’ adds a glow to human subjects, as if a reflector were held out in front of them. Photo: Google

Often a photographer will use a reflector to light the faces of backlit subjects. Pixel 3 does this computationally. The same machine-learning based segmentation algorithm that the Pixel camera uses in Portrait Mode is used to identify human subjects and add a warm glow to them.

If you’ve used the front facing camera on the Pixel 2 for Portrait Mode selfies, you’ve probably noticed how well it detects and masks human subjects using only segmentation. By using that same segmentation method for synthetic fill flash, the Pixel 3 is able to relight human subjects very effectively, with believable results that don’t confuse and relight other objects in the frame.
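The relighting step itself is conceptually simple once a good subject mask exists; the learned segmentation that produces the mask is the genuinely hard part. A minimal sketch of the idea, with the gain values and mask source assumed for illustration:

```python
import numpy as np

def synthetic_fill_flash(img, subject_mask, warmth=(1.15, 1.08, 1.0), lift=0.06):
    """Toy 'reflector' relighting: brighten and warm only the masked subject.

    img: HxWx3 uint8 RGB image; subject_mask: boolean HxW array from a
    (here assumed) segmentation model. Gains are arbitrary choices.
    """
    x = img.astype(np.float64) / 255.0
    m = subject_mask[..., None]                # HxWx1, broadcasts over RGB
    gain = np.where(m, np.array(warmth), 1.0)  # warm gain on subject only
    out = x * gain + lift * m                  # plus a small brightness lift
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)
```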

Interestingly, the same segmentation methods used to identify human subjects are also used for front-facing video image stabilization, which is great news for vloggers. If you’re vlogging, you typically want yourself, not the background, to be stabilized. That’s impossible with typical gyro-based optical image stabilization. The Pixel 3 analyzes each frame of the video feed and uses digital stabilization to steady you in the frame. There’s a small crop penalty to enabling this mode, but it allows for very steady video of the person holding the camera.

4. Learning-based Portrait Mode

The Pixel 2 had one of the best Portrait Modes we’ve tested despite having only one lens. This was due to its clever use of split pixels to sample a stereo pair of images behind the lens, combined with machine-learning based segmentation to understand human vs. non-human objects in the scene (for an in-depth explanation, watch my video here). Furthermore, dual pixel AF meant robust performance even with moving subjects in low light – great for constantly moving toddlers. The Pixel 3 brings some significant improvements despite lacking a second lens.

According to computational lead Marc Levoy, “Where we used to compute stereo from the dual pixels, we now use a learning-based pipeline. It still utilizes the dual pixels, but it’s not a conventional algorithm, it’s learning based”. What this means is improved results: more uniformly defocused backgrounds and fewer depth map errors. Have a look at the improved results with complex objects, where many approaches are unable to reliably blur backgrounds ‘seen through’ holes in foreground objects:

Learned result: background objects, especially those seen through the toy, are consistently blurred, as are objects around the peripheries of the image. The learned depth map correctly places background objects (blue) far from the foreground (yellow).
Stereo-only result: background objects, especially those seen through the toy, aren’t consistently blurred. The stereo-only depth map from the dual pixels places some background elements closer to the foreground than they really are.

Interestingly, this learning-based approach also yields better results with mid-distance shots where a person is further away. Typically, the further away your subject is, the less difference in stereo disparity between your subject and background, making accurate depth maps difficult to compute given the small 1mm baseline of the split pixels. Take a look at the Portrait Mode comparison below, with the new algorithm on the left vs. the old on the right.

Learned result: the background is uniformly defocused, and the ground shows a smooth, gradual blur. Stereo-only result: note the sharp railing in the background, and the harsh transition from in-focus to out-of-focus in the ground.
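However the depth map is produced (stereo or learned), the rendering step is the same: blur each pixel in proportion to its distance from the focal plane. Here is a layered sketch of that step, with Gaussian kernels standing in for the disc-shaped bokeh kernels a real pipeline would use; the band count and blur strength are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_bokeh(img, depth, focus, bands=8, strength=6.0):
    """Toy depth-driven blur. img: HxWx3 float array; depth: per-pixel
    map in [0, 1]; focus: depth of the subject plane. Depth is quantized
    into bands, and each band is blurred in proportion to its distance
    from the focal plane, then composited back together.
    """
    img = img.astype(np.float64)
    out = np.zeros_like(img)
    band_idx = np.minimum((depth * bands).astype(int), bands - 1)
    for b in range(bands):
        mid = (b + 0.5) / bands
        sigma = strength * abs(mid - focus)  # more blur further from focus
        layer = gaussian_filter(img, sigma=(sigma, sigma, 0)) if sigma > 0 else img
        mask = band_idx == b
        out[mask] = layer[mask]
    return out
```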

5. Night Sight

Rather than simply rely on long exposures for low light photography, ‘Night Sight’ utilizes HDR+ burst mode photography to take usable photos in very dark situations. Previously, the Pixel 2 would never drop below 1/15s shutter speed, simply because it needed faster shutter speeds to maintain that 9-frame buffer with zero shutter lag. That does mean that even the Pixel 2 could, in very low light, effectively sample 0.6 seconds (9 x 1/15s), but sometimes that’s not even enough to get a usable photo in extremely dark situations.

The camera will merge up to 15 frames… to get you an image equivalent to a 5 second exposure

The Pixel 3 now has a ‘Night Sight’ mode which sacrifices the zero shutter lag and expects you to hold the camera steady after you’ve pressed the shutter button. When you do so, the camera will merge up to 15 frames, each with shutter speeds as low as, say, 1/3s, to get you an image equivalent to a 5 second exposure. But without the motion blur that would inevitably result from such a long exposure.

Put simply: even though there might be subject or handheld movement over the entire 5s span of the 15-frame burst, many of the 1/3s ‘snapshots’ in that burst are likely to still be sharp, albeit possibly displaced relative to one another. The tile-based alignment of Google’s ‘robust merge’ technology, however, can handle inter-frame movement by aligning objects that have moved and discarding tiles of any frame that show too much motion blur.
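The exposure arithmetic is easy to sketch: pick a per-frame shutter speed short enough to stay sharp, then take as many frames as the buffer allows. A toy planner following the article’s numbers (the heuristic and both limits are assumptions, not actual firmware values):

```python
import math

def plan_night_sight(target_exposure_s=5.0, max_frame_s=1/3, max_frames=15):
    """Toy planner: split one long 'equivalent' exposure into short
    frames that can each be handheld-sharp, then merged."""
    frames = min(max_frames, math.ceil(target_exposure_s / max_frame_s))
    per_frame_s = min(max_frame_s, target_exposure_s / frames)
    return frames, per_frame_s

frames, t = plan_night_sight()
print(f"{frames} frames x {t:.2f}s = {frames * t:.1f}s equivalent exposure")
# -> 15 frames x 0.33s = 5.0s equivalent exposure
```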

Have a look at the results below, which also shows you the benefit of the wider-angle, second front-facing ‘groupie’ camera:

Comparison: a normal front-camera ‘selfie’ vs. a Night Sight ‘groupie’ with the wide-angle front-facing lens.

Furthermore, Night Sight mode takes a machine-learning based approach to auto white balance. It’s often very difficult to determine the dominant light source in such dark environments, so Google has opted to use learning-based AWB to yield natural looking images.

Final thoughts: simpler photography

The philosophy behind the Pixel camera – and for that matter the philosophy behind many smartphone cameras today – is one-button photography. A seamless experience without the need to activate various modes or features.

This is possible thanks to the computational approaches these devices embrace. The Pixel camera and software are designed to give you pleasing results without requiring you to think much about camera settings. Synthetic fill flash activates automatically with backlit human subjects, and Super Resolution automatically kicks in as you zoom.

At their best, these technologies allow you to focus on the moment

Motion Photos turns on automatically when the camera detects interesting activity, and Top Shot now uses AI to automatically suggest the best photo of the bunch, even if it’s a moment that occurred before you pressed the shutter button. Autofocus typically locks onto human subjects very reliably, but when you need to specify your subject, just tap on it and ‘Motion Autofocus’ will continue to track and focus on it. Perfect for your toddler or pet.

At their best, these technologies allow you to focus on the moment, perhaps even enjoy it, and sometimes even help you to capture memories you might have otherwise missed.

We’ll be putting the Pixel 3 through its paces soon, so stay tuned. In the meantime, let us know in the comments below what your favorite features are, and what you’d like to see tested.


[1] In good light, these last 9 frames typically span the last 150ms before you pressed the shutter button. In very low light, they can span up to the last 0.6s.

[2] We were only told ‘say, maybe 15 images’ in conversation about the number of images in the buffer for Super Res Zoom and Night Sight. It may be more or less, but we were at least told that it is more than 9 frames. One thing to keep in mind is that even if you have a 15-frame buffer, not all frames are guaranteed to be usable. For example, if in Night Sight one or more of these frames have too much subject motion blur, they’re discarded.

[3] You can achieve a similar super-resolution effect manually with traditional cameras, and we describe the process here.

Articles: Digital Photography Review (dpreview.com)

 

Report: Apple discreetly acquired mixed-reality startup Spektral for $30M

10 Oct

According to a report from Fortune, Apple discreetly acquired Danish visual effects startup Spektral in December 2017.

Spektral was once named CloudCutout and focused on a cloud-based solution for masking a subject from the background of a photograph. Now, Spektral specializes in masking technology that uses machine learning to separate a subject in an image from its background in real time on mobile devices. “Combining deep neural networks and spectral graph theory with the computing power of modern GPUs, our engine can process images and video from the camera in real-time (60 fps) directly on the device,” says Spektral on its website.

Neither Apple nor Spektral have confirmed the acquisition, but Fortune reports the deal was worth “more than $30 million.”

With no comment, we can’t say for sure what Apple intends to do with Spektral’s intellectual property and personnel, but Spektral Co-Founder and Chief Technical Officer Toke Jansen now lists “Manager, Computational Imaging” as his title at Apple on his LinkedIn profile. Combined with Apple’s ongoing push to beef up augmented reality in its apps — both its own and third-party — it’s safe to assume we’ll see the fruits of the acquisition in the near future, if we haven’t already.

Articles: Digital Photography Review (dpreview.com)

 