Posts Tagged ‘Camera’

Sony a6100 review: Should it be your next family camera?

29 Oct

The Sony a6100 is a 24MP APS-C mirrorless camera, aimed squarely at beginners and people who want attractive photos but don’t necessarily think of themselves as photographers. A new, powerful autofocus system makes it one of the easiest cameras to use, if you just trust it to do its thing and concentrate instead on what you’re shooting.

The a6100’s specifications aren’t cutting edge, and it’s priced accordingly. But that simple, effective autofocus system makes it easy to get the photos you want, and hence, arguably, makes it better value than less expensive rivals. On paper it looks a lot like the bargain-basement a6000, but in use this is a vastly better camera.

Our main doubts about the camera come if you want to get more involved in the photographic process.

Key features:

  • 24MP APS-C CMOS sensor
  • Advanced AF system with dependable subject tracking
  • 1.44M dot OLED electronic viewfinder
  • 0.9M dot LCD tilting rear touchscreen
  • Wi-Fi for image transfer to smart devices (with NFC for quick connection)
  • 4K video capture
  • USB charging

The Sony a6100 has a list price of $750, a $50 premium over the launch price of the original a6000, which suggests it may take a similarly low-cost position in the lineup long term. It has a list price of $850 with the small but uninspiring 16-50mm F3.5-5.6 power zoom, and a two-lens kit adds a 55-210mm zoom for an additional $250.


What’s new and how it compares

The a6100 combines many familiar components in a way that makes it an excellent family camera, after some initial setup.

Click here to find out more about the camera, its features and its rivals

Image quality

Improved JPEG color, a good sensor and sophisticated processing mean the a6100 delivers attractive images.

Click here to take a closer look at the a6100’s images

Autofocus and Video

Autofocus is the camera’s great claim to fame, and its video is easy to capture.

Click here to read more

Conclusion

The a6100 has probably the most powerful, easy-to-use autofocus system on the market. But there are a few too many inconveniences, we feel.

Click here to read what we found

Sample gallery

We think the a6100 can turn its hand to a bit of everything; see what you think of its performance.

Click here to see our sample gallery

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Sony a6100 review: Should it be your next family camera?

Posted in Uncategorized

 

Netflix certifies the Panasonic S1H for productions, making it the smallest (and only stills/video) camera on the list

26 Oct

Panasonic’s S1H full-frame mirrorless camera has been certified as a Netflix primary camera and is now part of the Netflix Post Technology Alliance. This recognition means productions are now able to use the relatively affordable S1H as a main camera, so long as the footage is captured within a range of formats and settings.

As laid out in Netflix’s camera production guide, the S1H needs to be shot in at least 4K (3840 x 2160 pixels or 4096 x 2160 pixels) in V-Log with 4:2:2 10-bit All-I (400Mbps) encoding and pixel-for-pixel readout in either full-frame or Super 35 modes.
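The requirement set above amounts to a checklist that clip metadata either passes or fails. As a purely illustrative sketch (the field names and the `meets_requirements` helper are invented for this example; they are not part of any Netflix or Panasonic tooling), it could look like this:

```python
# Hypothetical checker for the Netflix S1H capture requirements summarized
# above. All field names are illustrative assumptions, not a real API.

APPROVED = {
    "resolutions": {(3840, 2160), (4096, 2160)},   # UHD or DCI 4K minimum
    "gamma": "V-Log",
    "chroma": "4:2:2",
    "bit_depth": 10,
    "compression": "All-I",
    "bitrate_mbps": 400,
    "sensor_modes": {"full-frame", "Super 35"},    # pixel-for-pixel readout
}

def meets_requirements(clip: dict) -> bool:
    """Return True if a clip's metadata matches every required setting."""
    return (
        (clip["width"], clip["height"]) in APPROVED["resolutions"]
        and clip["gamma"] == APPROVED["gamma"]
        and clip["chroma"] == APPROVED["chroma"]
        and clip["bit_depth"] == APPROVED["bit_depth"]
        and clip["compression"] == APPROVED["compression"]
        and clip["bitrate_mbps"] >= APPROVED["bitrate_mbps"]
        and clip["sensor_mode"] in APPROVED["sensor_modes"]
    )

clip = {
    "width": 4096, "height": 2160, "gamma": "V-Log", "chroma": "4:2:2",
    "bit_depth": 10, "compression": "All-I", "bitrate_mbps": 400,
    "sensor_mode": "Super 35",
}
print(meets_requirements(clip))  # a compliant DCI 4K clip passes
```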

The capture settings Netflix is requiring of the S1H, according to the camera production guide linked above.

Other specific requirements within the production guide include setting both Noise Reduction and sharpening to zero, while less-strict recommendations include turning off diffraction compensation and vignetting compensation. Additional suggestions include running the S1H’s Pixel Refresh at least yearly, using the sensor cleaning feature and keeping the firmware up to date.

Currently, there are no Netflix Original productions using the S1H, ‘to the best of [Panasonic’s] knowledge,’ but having the full-frame mirrorless camera available as an approved camera should hold massive appeal for smaller operations where budget is more of a factor. Yes, $4,000 for a single camera body isn’t cheap, but it’s easily the most affordable camera on Netflix’s approved list, which includes the likes of ARRI’s Alexa LF, Canon C700, RED Weapon Dragon 8K and Sony F55 camera systems.


DxO PhotoLab 3 brings improved repair tools, local adjustment masks and new camera support

25 Oct

DxO has launched DxO PhotoLab 3, the latest version of the company’s photo editing software. The new version of PhotoLab brings new and improved tools, including an optimized Repair Tool and entirely new Local Adjustments Masks Manager, as well as support for keyword searches in PhotoLibrary and new camera support.

PhotoLab 3 introduces a new color adjustment mode as part of the software’s Hue, Saturation, and Luminance (HSL) Tool, one that is based on the DxO ColorWheel. With the tool, users are able to choose color ranges from eight separate channels, according to the company, as well as replacement colors and more.

Joining the ColorWheel is the new Local Adjustments Masks Manager, which enables users to manage an image’s layered local correction masks. This control includes individually adjusting opacity, reversing selected masks, and more.

Below is a brief rundown of the new features from photographer Robin Whalley:

Beyond that, PhotoLab’s Repair Tool, which allows users to scrub specific elements from an image, has also been ‘optimized’ to include a new Clone Mode and support for manually repositioning the tool’s source area. Both Repair Tool modes include support for opacity adjustment and feathering.

As mentioned, the software’s PhotoLibrary has also been updated to include keyword searches and management. The PhotoLibrary now shows image keywords in the interface, plus there’s the ability to add, rename, and delete keywords, including for multiple images at once. At this time, support for keywords in multi-criteria searches is only available on the macOS version of the software, but DxO says it will bring the same functionality to the Windows version soon.

DxO is offering PhotoLab 3 Essential and Elite Editions for Mac and PC at discounted prices until November 24:

  • DxO PhotoLab 3 Essential Edition: €99.99 / £86.99 / $99.99
  • DxO PhotoLab 3 Elite Edition: €149.99 / £129.99 / $149.99

DxO PhotoLab 3: The most colourful upgrade yet

DxO, one of the most innovative companies in the photography and image editing industry, is announcing its latest version of DxO PhotoLab, the most advanced photo editing software on the market in terms of image quality. With its completely redesigned HSL colour adjustment feature, optimized Repair Tool, and brand-new Local Adjustments Masks Manager, DxO PhotoLab 3 offers an exceptional level of colorimetry control, making the photographer’s job easier than ever before. Because the DxO PhotoLibrary now supports keyword searches, it offers an even more comprehensive workflow and improved compatibility with other photo editing software programs.

A new approach to adjusting colour

With its innovative and visual approach to colour management, DxO PhotoLab 3’s HSL (Hue, Saturation, and Luminance) Tool offers unparalleled control so you can produce even more natural- looking or creative images. It features a new colour adjustment mode based on a chromatic circle called the DxO ColorWheel. With this tool, you can select a colour range from eight different channels, fine-tune the value, select a replacement colour, and adjust your transitions to your heart’s desire. A new Uniformity setting also lets you adjust colour variations within a specific range. The Saturation and Luminance sliders now operate more independently, which offers more flexibility, especially when converting from colour to black and white and creating partially desaturated images.

“With the DxO ColorWheel, we were looking to create a new approach that could make colour management both flexible and fun. This tool is incredibly user friendly,” says Jean-Marc Alexia, VP Marketing & Product Strategy at DxO.

A Repair Tool that offers even more control

DxO PhotoLab 3 continues to improve its local adjustments options to offer users even more precision. One of these features, the Repair Tool, which acts as a brush that can erase unwanted elements from the image, has been updated. You can now manually reposition the area in the source image that you want to use to reconstruct an area in the image being edited. In addition to Repair Mode, DxO PhotoLab 3 also offers Clone Mode, which lets you directly replace the area you are editing. Feathering and opacity level can also be adjusted in both modes.

New Local Adjustments Masks Manager

DxO PhotoLab 3’s new Local Adjustments palette lets you manage local correction masks that have been layered within a single image. Make them visible, mask them, or adjust their opacity individually. The tool also lets you reverse the selected mask with a single click, adding additional flexibility and saving a significant amount of time.

A more complete workflow through keywords

In addition to the search criteria that are already available in the DxO PhotoLibrary (metadata, shooting parameters, folders, etc.), DxO PhotoLab 3 now offers keyword management and optimizes image organization all the way up to export. The keywords associated with an image can now be displayed in the interface, including when they are imported from other software programs. You can now add, delete, or rename keywords for one or multiple images simultaneously and include them in multi-criteria searches (macOS version only; this feature will be available in the Windows version in the near future). DxO PhotoLab 3 also offers more complete information and metadata display options as well as additional Projects management options.

New camera support

DxO PhotoLab 3 continues to add new cameras to the list of equipment it supports. It recently added the Canon G5 X Mark II and G7 X Mark III, the Nikon P1000, the Panasonic Lumix DC-G90/G95/G99/G91, the Lumix DC FZ1000 II and Lumix TZ95/ZS80, the Ricoh GR III, and the Sony A7R IV and RX100 VII. More than 3,000 optical modules have also been added to the database, which now includes over 50,000 different camera/lens combinations. The software’s de-noising capabilities for RAW photos taken with certain Canon and Olympus cameras have been improved as well.

Price & availability

The ESSENTIAL and ELITE editions of DxO PhotoLab 3 (PC and Mac) are now available for download on DxO’s website for the following launch prices until November 24, 2019:

  • DxO PhotoLab 3 ESSENTIAL Edition: €99.99 instead of €129 / £86.99 instead of £112 / $99.99 instead of $129
  • DxO PhotoLab 3 ELITE Edition: €149.99 instead of €199 / £129.99 instead of £169 / $149.99 instead of $199

You do not need a subscription to use DxO PhotoLab 3. You can install the program on two computers with the DxO PhotoLab 3 ESSENTIAL Edition or on three computers with the DxO PhotoLab 3 ELITE Edition. Photographers with a license for DxO OpticsPro or DxO PhotoLab 2 can purchase an upgrade license for DxO PhotoLab 3 by signing into their customer account on www.dxo.com. A fully-functional, one-month trial version of DxO PhotoLab 3 is available on the DxO website.


 

Tether Tools Air Direct is a new and improved way to wirelessly connect your camera to your computer or mobile device

25 Oct

Tether Tools has introduced a new device that enables users to create a wireless connection between their camera and their favorite tethering software, and which avoids the use of hot folders and middleman applications for camera control.

The new Air Direct allows a much wider range of camera models and brands to be used than the company’s current Air Case, and enables tethering software applications to operate as if the camera were connected with a cable.

The idea is that those using software applications such as Capture One Pro can use the software’s tethering functions as normal, with the system sending live previews to the software and the software controlling the camera’s operation. While the Tether Tools Air Case was a Nikon/Canon-only device, the new Air Direct will work with Fujifilm, Olympus, Panasonic and Sony cameras as well — so long as they are compatible with the software in use. Canon and Nikon owners will also be able to tether via smartphones and tablets using the existing Air Remote App.

The new device uses twin antennas to send and receive data to and from the camera via 2.4GHz and 5GHz 802.11AC Wi-Fi networks, which Tether Tools claims, along with a USB-C cable, allows a transfer connection 5x faster than before. Air Direct has a range of 200ft/60m and can send both Raw and JPEG files to PC and Mac computers simultaneously. Battery or DC powered, the Air Direct doesn’t drain camera power and can be run using an external battery pack via the USB-C connection.

The Tether Tools Air Direct will cost £358.80 / $329.99. For more information, see the Tether Tools website.

Press release:

Tether Tools Air Direct

Shoot further, faster, from anywhere, to any tethering software of choice.

Air Direct wirelessly transfers RAW and JPG images to Capture One, SmartShooter, Lightroom and others, as if you were shooting with a cable. Connect DSLR, Medium Format and Mirrorless cameras direct to any supported tethering software.

Key Features:

  • Wireless camera control and transfer from your software. No compromises. Capture One, Lightroom, Smart Shooter, DarkRoom & many other tethering software programs.
  • USB-C technology for lightning fast transfer.
  • 802.11AC Wi-Fi connection. Internet access not required.
  • Two-way communication from computer to camera, camera to computer.
  • Transfer Raw and JPEG to Mac and PC simultaneously.
  • One to One camera connection for secure communication.
  • 5X faster transfer speed and range up to 200 feet (60m).
  • Compatible with Canon, Fuji, Nikon, Sony, Olympus, Panasonic LUMIX, Phase One, Hasselblad, Leica models with USB tether. Not designed for non-supported camera models.
  • Mobile users enjoy all the benefits of the Air Remote App on iOS and Android devices.
  • Powered by easy access LP-E6 battery or DC input. Air Direct utilizes its own power source and will not drain the camera’s battery. For longer shoots, use large external USB batteries or AC wall power via the supplied DC cable. Air Direct offers the flexibility to hot swap power without ever shutting down. Power and shoot simultaneously.

Computer

The Air Direct Utility or ADU allows for Wireless PTP communication between camera and tethering software on MacOS or Windows. Supported cameras: www.TetherTools.com/Air-Direct

Tethering software of your choice such as Capture One, Smart Shooter 4 or Lightroom and many others can be used with Air Direct. (Required for MAC and PC)

Mobile

Air Direct is compatible with iOS and Android devices for Canon and Nikon cameras via Air Remote Mobile App. (Logo/Icon)

Air Remote App features creative zone control, live view, bracketing, time-lapse, focus stacking, bulb time, movie mode and more.

Connection Setup

The Air Direct offers two ways to wirelessly tether your camera.

Connect the Air Direct to your PC or Mac via the Air Direct Utility (ADU) and use tethering software of your choice, OR

Connect the Air Direct to your mobile device (phone or tablet) and tether with the Air Remote App.*

*Note: Connecting the Air Direct to your mobile device (phone or tablet) is compatible with Canon and Nikon cameras only at this time.

Product Specifications:

  • USB Protocol: USB-C
  • Wi-Fi: 802.11AC
  • Bandwidth: 2.4 and 5GHz
  • Range: up to 200 feet (60m)
  • Battery Life: 3-5 hours*
  • Size: 1.5″ x 3.6″ x 2.7″ (39mm x 92mm x 68.6mm)
  • Weight: 6oz (170g)

*Results may vary based on phone, tablet, or computer used.


 

Blackmagic Camera 6.6 adds new features, functionality to 4K, 6K Pocket Cinema Cameras

24 Oct

Blackmagic Design has announced a new firmware update for its Blackmagic Pocket Cinema Camera 4K and 6K systems that brings new features and functionality. Blackmagic Camera 6.6 mainly focuses on the Pocket Cinema Camera 4K (BMPCC4K), but adds a few features to the newer Pocket Cinema Camera 6K (BMPCC6K) as well.

For the BMPCC6K, the Blackmagic Camera 6.6 update adds support for the Blackmagic Pocket Battery Grip, language localization, a built-in camera horizon tool, ‘pinch-to-zoom’ magnification up to 8x, USB PTP control support, the ability to type in customized frame guide ratios and improved autofocus performance.

The BMPCC4K receives all of the features listed above, as well as a slew of others. Below is a full list of the new features the update brings to the BMPCC4K:

• Added support for 4K 2.4:1 4096 x 1712 recording in Blackmagic RAW up to 75 fps.
• Added support for 2.6K 2688 x 1512 up to 120 fps recording in Blackmagic RAW suitable for Super16mm lenses.
• Added support for 2.8K 4:3 2880 x 2160 recording in Blackmagic RAW up to 80 fps for anamorphic lenses.
• Added support for 2x desqueeze preview when recording 4K 4:3.
• Added support for 1.33x desqueeze preview.
• Added support for pinch to zoom up to 8x magnification.
• Added USB PTP control support.
• Added ability to embed custom 3D LUTs in Blackmagic RAW clips as metadata.
• Added common off-speed frame rate options above slider when changing frames.
• Added common ISO options above slider when changing ISO settings.
• Added 1:1 and 4:5 frame guide options.
• Added ability to monitor voltage level when powering via 12V DC connector.

The Blackmagic Camera 6.6 update is available to download for free through Blackmagic’s support page. You can find the links about half-way down the page under the far-left section that reads ‘Latest Downloads.’


 

Photographer uses drone with thermal camera to find missing 6-year-old boy

24 Oct

Photographer Steve Fines helped save the life of a 6-year-old boy who went missing last week on a subfreezing night in Sherburne County, Minnesota.

A group of roughly 600 people showed up to search for the boy, Ethan, and his dog. As temperatures dropped below 30ºF that evening, the situation became increasingly dire. ‘I heard about the search at 8:00 pm and arrived on site about 9:00 pm,’ Fines tells DPReview. He arrived carrying his DJI M210 RTK v2 drone equipped with an XT2 dual thermal camera.

An annotated image shared by Fines showing the location where Ethan and his dog were rescued.

Since Fines uses the drone for business purposes, he already had 10-12 sets of batteries charged and ready to go. ‘I went to the command center and introduced myself. They asked me not to fly until the state police helicopter left the area, which happened about 10:30 pm,’ Fines told us. He also emphasized how important it is for drone operators to yield the right of way to helicopters and other emergency response efforts. Interfering with these critical operations is against the law and can result in fines upwards of $20,000. Drone operators need to coordinate with local authorities before getting involved.

Once Fines received clearance for takeoff, he said ‘I quickly started flying and it was by using a programmed flight path that I could efficiently cover a lot of ground. After quite a few false positives – otters, deer, bear – at 1:40 am, I spotted the six-year-old and his dog. By 1:50 am, a ground rescue team made it to his location and I watched them pick him up on the thermal monitor.’

While Fines has received the lion’s share of the credit from local news station KARE 11 for the success of the rescue, he took to social media to thank the coordinated efforts led by the County Sheriff along with the hundreds of volunteers who helped guide him in the right direction.

This thermal image shows a stream of volunteers walking a path to find missing 6-year-old Ethan.

While I was running the camera that found him […] I only knew in which direction to look because volunteers on the ground had found a footprint that pointed me in the right direction. I knew which areas had already been searched because of the excellent coordination of the Sherburne County Sheriff. I had other volunteers running radios to coordinate ground search parties – the people moving across really rough ground to find him. There were 600 of us that found Ethan that night.

Below is a video from KARE 11 showing more behind-the-scenes footage of the rescue and a thank you from Ethan:

You can check out more of Fines’ work via his website and follow Fines Aerial Imaging on Facebook and Instagram.


 

Lomography launches LomoMod No.1 DIY cardboard camera with a liquid-filled lens

22 Oct

Lomography has introduced LomoMod No.1, a DIY medium-format camera constructed from cardboard, as well as a new lens with a shutter and aperture unit. The lens can be filled with liquid from a syringe, according to Lomography, in order to produce ‘unique artistic aesthetics’ using tea, coffee, and more.

Much in the same way as Google’s cardboard virtual reality headset, Lomography’s new LomoMod No.1 ships as flat-packed sustainable cardboard that the customer assembles at home. The process appears fairly straightforward, as the construction doesn’t require glue or screws. In addition to being moddable, the cardboard camera is also double-sided, offering matte black and UV pattern options.

Below is a gallery of sample images taken with the camera:


Beyond the cardboard camera body, the LomoMod No.1 features a unique lens that replaces traditional filters with an injectable design. The companion shutter and aperture module features both normal and bulb modes, with support for long exposures, as well as customizable aperture plates for creating ‘unique’ bokeh. Rounding out the design are a tripod mount, a PC-sync socket and a cable release socket.

The full kit ships with:

  • 11 Sheets of Cardboard Cutouts
  • 1 Sheet Aperture Plates Set
  • Sutton Lens Module
  • Aperture and Shutter Module
  • 120 Film Spool
  • Tripod Nut
  • Tube
  • Syringe
  • Valve
  • Colorful Stickers
  • Photo Book & User Manual

The Lomography LomoMod No.1 is available to pre-order for $59. Units have already started shipping in Hong Kong but won’t start shipping in Japan, the United States or Europe until early next month.


 

These are the most important Google Pixel 4 camera updates

19 Oct

Google yesterday announced the Pixel 4 and Pixel 4 XL, updates to the popular line of Pixel smartphones.

We had the opportunity recently to sit down with Marc Levoy, Distinguished Engineer and Computational Photography Lead at Google, and Isaac Reynolds, Product Manager for Camera on Pixel, to dive deep into the imaging improvements brought to the lineup by the Pixel 4.

Table of contents:

  • More zoom
  • Dual exposure controls / Live HDR+
  • Improved Night Sight
  • DSLR-like bokeh
  • Portrait mode improvements
  • Further improvements
  • Conclusion

Note that we do not yet have access to a production-quality Pixel 4. As such, many of the sample images in this article were provided by Google.

More zoom

The Pixel 4 features a main camera module with a 27mm equivalent F1.7 lens, employing a 12MP 1/2.55″ type CMOS sensor. New is a second ‘zoomed-in’ camera module with a 48mm equivalent, F2.4 lens paired with a slightly smaller 16MP sensor. Both modules are optically stabilized. Google tells us the net result is 1x-3x zoom that is on par with a true 1x-3x optical zoom, and pleasing results all the way out to 4x-6x magnification factors. No doubt the extra resolution of the zoomed-in unit helps with those higher zoom ratios.

Have a look at what the combination of two lenses and super-res zoom gets you with these 1x to 8x full-resolution samples from Google.


Marc emphasized that pinching and zooming to pre-compose your zoomed-in shot is far better than cropping after the fact. I’m speculating here, but I imagine much of this has to do with the ability of super-resolution techniques to generate imagery of higher resolution than any one frame. A 1x super-res zoom image (which you get by shooting 1x Night Sight) still only generates a 12MP image; cropping and upscaling from there is unlikely to get you as good results as feeding crops to the super-res pipeline for it to align and assemble on a higher resolution grid before it outputs a 12MP final image.
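To make that idea concrete, here is a minimal shift-and-add sketch: several slightly shifted low-resolution frames are placed on a finer sample grid and averaged, so the merged result can hold more detail than any single frame. This illustrates only the general principle of super-resolution from a burst; it is not Google's pipeline, which uses robust tile-based alignment on Raw data.

```python
# Minimal shift-and-add super-resolution sketch (illustrative only).
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """frames: list of HxW arrays; shifts: per-frame (dy, dx) sub-pixel
    offsets (e.g. from handshake), in low-res pixels. Each low-res sample
    is mapped to its nearest site on a `scale`x finer grid and averaged."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        ys = np.clip(np.round((np.arange(h) + dy) * scale).astype(int), 0, h * scale - 1)
        xs = np.clip(np.round((np.arange(w) + dx) * scale).astype(int), 0, w * scale - 1)
        acc[np.ix_(ys, xs)] += frame
        weight[np.ix_(ys, xs)] += 1.0
    # Sites never hit by any frame stay zero; a real pipeline interpolates.
    return acc / np.maximum(weight, 1.0)
```

With four frames shifted by half a pixel in each direction, the 2x grid is fully populated, which is why a burst can feed a 12MP output with more real detail than a crop-and-upscale of one frame.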

We’re told that Google is not using the ‘field-of-view fusion’ technique Huawei uses on its latest phones where, for example, a 3x photo gets its central region from the 5x unit and its peripheries from upscaling (using super-resolution) the 1x capture. But given Google’s choice of lenses, its decision makes sense: from our own testing with the Pixel 3, super-res zoom is more than capable of handling zoom factors between 1x and 1.8x, the latter being the magnification factor of Google’s zoomed-in lens.

Dual exposure controls with ‘Live HDR+’

The results of HDR+, the burst mode multi-frame averaging and tonemapping behind every photograph on Pixel devices, are compelling, retaining details in brights and darks in, usually, a pleasing, believable manner. But it’s computationally intensive to show the end result in the ‘viewfinder’ in real-time as you’re composing. This year, Google has opted to use machine learning to approximate HDR+ results in real-time, leading to a much better viewfinder experience.1 Google calls this ‘Live HDR+’. It’s essentially a WYSIWYG implementation that should give photographers more confidence in the end result, and possibly feel less of a need to adjust the overall exposure manually.

“If we have an intrinsically HDR camera, we should have HDR controls for it” – Marc Levoy

On the other hand, if you do have an approximate live view of the HDR+ result, wouldn’t it be nice if you could adjust it in real-time? That’s exactly what the new ‘dual exposure controls’ allow for. Tap on the screen to bring up two separate exposure sliders. The brightness slider, indicated by a white circle with a sun icon, adjusts the overall exposure, and therefore brightness, of the image. The shadows slider essentially adjusts the tonemap, so you can adjust shadow and midtone visibility and detail to suit your taste.

Default HDR+ result · Brightness slider (top left) lowered to darken the overall exposure · Shadows slider (top center) lowered to create silhouettes · Final result

Dual exposure controls are a clever way to operate an ‘HDR’ camera, as it allows the user to adjust both the overall exposure and the final tonemap in one or two swift steps. Sometimes HDR and tonemapping algorithms can go a bit far (as in this iPhone XS example here), and in such situations photographers will appreciate having some control placed back in their hands.

And while you might think this may be easy to do after-the-fact, we’ve often found it quite difficult to use the simple editing tools on smartphones to push down the shadows we want darkened after tonemapping has already brightened them. There’s a simple reason for that: the ‘shadows’ or ‘blacks’ sliders in photo editing tools may or may not target the same range of tones the tonemapping algorithms did when initially processing the photo.
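The two-slider idea can be sketched numerically. The function below is a generic stand-in, not Google's actual HDR+ tonemapping: one parameter scales the overall exposure, the other is a simple power curve controlling how much shadows are lifted or deepened.

```python
# Illustrative two-slider tonemap in the spirit of dual exposure controls.
# The specific math (clip + power curve) is an assumption for this sketch.
import numpy as np

def dual_exposure(linear, brightness=1.0, shadows=1.0):
    """linear: scene-linear values in [0, 1].
    brightness scales the exposure before tonemapping;
    shadows < 1 lifts dark tones, shadows > 1 deepens them."""
    exposed = np.clip(linear * brightness, 0.0, 1.0)
    return exposed ** shadows

scene = np.array([0.01, 0.1, 0.5, 1.0])
# Lower brightness and raise shadows to darken the frame and crush
# shadows into silhouettes, as in the example images above.
print(dual_exposure(scene, brightness=0.5, shadows=2.0))
```

The point of operating at capture time is that both controls act on known scene-linear data, whereas an editor's ‘shadows’ slider later may target a different range of tones than the tonemapper originally brightened.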

Improved Night Sight

Google’s Night Sight is widely regarded as an industry benchmark. We consistently talk about its use not just for low light photography, but for all types of photography because of its use of a super-resolution pipeline to yield higher resolution results with less aliasing and moire artifacts. Night Sight is what allowed the Pixel 3 to catch up to 1″-type and four-thirds image quality, both in terms of detail and noise performance in low light, as you can see here (all cameras shot with equivalent focal plane exposure). So how could Google improve on that?

Well, let’s start with the observation that some reviewers of the new iPhone 11 remarked that its night mode had surpassed the Pixel 3’s. While that’s not entirely true, as I covered in my in-depth look at the respective night modes, we have found that at very low light levels the Pixel 3 does fall behind. And it mostly comes down to exposure limits: in our shooting with the Pixel 3, handheld exposures were limited to ~1/3s per frame to minimize blur caused by handshake, while the tripod-based mode only allowed shutter speeds up to 1s. Handheld and tripod-based shots were limited to 15 and 6 total frames, respectively, to avoid user fatigue. That meant the longest exposures you could ever take were limited to 5-6s.

The Pixel 4 extends the per-frame exposure, when no motion is detected, to at least 16 seconds, across up to 15 frames. That’s a total of 4 minutes of exposure, which is what allows the Pixel 4 to capture the Milky Way:


Remarkable is the lack of user input: just set the phone up against a rock to stabilize it, and press one button. That’s it. It’s important to note you couldn’t get this result with one long exposure, either with the Pixel phone or a dedicated camera, because it would result in star trails. So how does the Pixel 4 get around this limitation?

The same technique that enables high quality imagery from a small sensor: burst photography. First, the camera picks a shutter speed short enough to ensure no star trails. Next, it takes many frames at this shutter speed and aligns them. Since alignment is tile-based, it can handle the moving stars due to the rotation of the sky just as the standard HDR+ algorithm handles motion in scenes. Normally, such alignment is very tricky for photographers shooting night skies with non-celestial, static objects in the frame, since aligning the stars would cause misalignment in the foreground static objects, and vice versa.

Improved Night Sight will not only benefit starry skyscapes, but all types of photography requiring long exposures

But Google’s robust tile-based merge can handle displacement of objects from frame to frame of up to ~8% in the frame2. Think of it as tile-based alignment where each frame is broken up into roughly 12,000 tiles, with each tile individually aligned to the base frame. That’s why the Pixel 4 has no trouble treating stars in the sky differently from static foreground objects.
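A toy version of tile-based alignment makes the mechanism clear: each tile of a new frame is matched against the base frame by searching nearby offsets for the lowest sum of squared differences. Real pipelines do this over an image pyramid at sub-pixel precision with robustness terms; this sketch shows only the core search.

```python
# Toy tile-based alignment: brute-force SSD search for one tile's offset.
import numpy as np

def align_tile(base, frame, y, x, size=16, search=4):
    """Find the (dy, dx) within +/-search pixels that best maps the tile
    at (y, x) in `frame` onto `base`, by sum of squared differences."""
    tile = frame[y:y + size, x:x + size]
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + size > base.shape[0] or xx + size > base.shape[1]:
                continue  # candidate window falls outside the base frame
            err = np.sum((base[yy:yy + size, xx:xx + size] - tile) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best
```

Because every tile gets its own offset, tiles containing drifting stars can align differently from tiles containing the static foreground, which is exactly why the merged astro shot avoids both star trails and foreground ghosting.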

Another issue with such long total exposures is hot pixels. These pixels can become ‘stuck’ at high luminance values as exposure times increase. The new Night Sight uses clever algorithms to suppress hot pixels, ensuring you don’t have bright pixels scattered throughout your dark sky shot.
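Google hasn’t published its exact method, but hot-pixel suppression is typically a neighborhood outlier test. A minimal sketch of the general idea:

```python
import numpy as np

def suppress_hot_pixels(img, thresh=0.5):
    """Toy hot-pixel suppression: replace any pixel that exceeds the median
    of its 8 neighbors by more than `thresh` with that median. This is a
    generic illustration, not Google's algorithm."""
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = np.concatenate([
                img[y - 1, x - 1:x + 2],
                img[y,     x - 1:x + 2],
                img[y + 1, x - 1:x + 2]])
            neigh = np.delete(neigh, 4)  # drop the center pixel itself
            med = np.median(neigh)
            if img[y, x] - med > thresh:
                out[y, x] = med
    return out
```

A genuinely bright region (where neighbors are bright too) survives the test; a lone stuck pixel does not.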

DSLR-like bokeh

This is potentially a big deal, and perhaps underplayed, but the Google Pixel 4 will render bokeh, particularly out-of-focus highlights, closer to what we’d expect from traditional cameras and optics. Until now, while Pixel phones did render proper disc-shaped blur for out of focus areas as real lenses do (as opposed to a simple Gaussian blur), blurred backgrounds simply didn’t have the impact they tend to have with traditional cameras, where out-of-focus highlights pop out of the image in gorgeous, bright, disc-shaped circles as they do in these comparative iPhone 11 examples here and also here.

The new bokeh rendition on the Pixel 4 takes things a step closer to traditional optics, while avoiding the ‘cheap’ technique some of its competitors use where bright circular discs are simply ‘stamped’ in to the image (compare the inconsistently ‘stamped’ bokeh balls in this Samsung S10+ image here next to the un-stamped, more accurate Pixel 3 image here). Have a look below at the improvements over the Pixel 3; internal comparisons graciously provided to me by Google.

Daytime bokeh

Daytime bokeh

Nighttime bokeh

Nighttime bokeh

The impactful, bright, disc-shaped bokeh of out-of-focus highlights is due to the processing of the blur at the Raw level, where linearity ensures that Google’s algorithms know just how bright those out-of-focus highlights are relative to their surroundings.

Previously, applying the blur to 8-bit tonemapped images resulted in less pronounced out-of-focus highlights, since HDR tonemapping usually compresses the difference in luminosity between these bright highlights and other tones in the scene. That meant that out-of-focus ‘bokeh balls’ weren’t as bright or separated from the rest of the scene as they would be with traditional cameras. But Google’s new approach of applying the blur at the Raw stage allows it to more realistically approximate what happens optically with conventional optics.
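A one-dimensional toy example shows why the order of operations matters. The tone curve and scene values here are made up; only the principle, blurring in linear space before tonemapping, reflects Google’s approach:

```python
import numpy as np

def disc_blur_1d(signal, radius):
    """Box ('disc' in 1D) blur: each sample becomes the mean of its window."""
    k = 2 * radius + 1
    padded = np.pad(signal, radius, mode='edge')
    return np.convolve(padded, np.ones(k) / k, mode='valid')

def tonemap(linear):
    """Simple gamma tone curve standing in for HDR tonemapping."""
    return np.clip(linear, 0, None) ** (1 / 2.2)

# A dim scene with one very bright specular highlight, in linear Raw units.
scene = np.full(31, 0.01)
scene[15] = 5.0  # highlight well above 'white'

# Blur in linear space first, then tonemap (the Pixel 4's approach)...
linear_first = tonemap(np.clip(disc_blur_1d(scene, 5), 0, 1))
# ...versus tonemapping first, then blurring the 8-bit-like image.
tonemap_first = disc_blur_1d(np.clip(tonemap(scene), 0, 1), 5)

print(linear_first.max(), tonemap_first.max())
```

Blurring in linear space preserves the highlight’s energy, so the resulting ‘bokeh ball’ stays bright after tonemapping; blurring the already-tonemapped image washes it out.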

One thing I wonder about: if the blur is applied at the Raw stage, will we get Raw portrait mode images in a software update down-the-line?

Portrait mode improvements

Portrait mode has been improved in other ways apart from simply better bokeh, as outlined above. But before we begin, I want to clarify something up front: the term ‘fake bokeh’, as our readers and many reviewers like to call the blur modes on recent phones, is not accurate. The best computational imaging devices, from smartphones to Lytro cameras (remember them?), can actually simulate blur true to what you’d expect from traditional optical devices. Just look at the gradual blur in this Pixel 2 shot here. The Pixel phones (and iPhones as well as other phones) generate actual depth maps, gradually blurring objects from near to far. This isn’t a simple case of ‘if area detected as background, add blurriness’.

The Google Pixel 3 generated a depth map from its split photodiodes, which have a ~1mm stereo baseline, and augmented it using machine learning. Google trained a neural network using depth maps generated by its dual pixel array (stereo disparity only) as input, and ‘ground truth’ results generated by a ‘franken-rig’ that used 5 Pixel cameras to create more accurate depth maps than simple split pixels, or even two cameras, could. That allowed Google’s Portrait mode to understand depth cues from things like defocus cues (out-of-focus objects are probably further away than in-focus ones) and semantic cues (smaller objects are probably further away than larger ones).

Deriving stereo disparity from two perpendicular baselines affords the Pixel 4 much more accurate depth maps

The Pixel 4’s additional zoomed-in lens now gives Google more stereo data to work with, and Google has been clever in its arrangement: if you’re holding the phone upright, the two lenses give you horizontal (left-right) stereo disparity, while the split pixels on the main camera sensor give you vertical (up-down) stereo disparity. Having stereo data along two perpendicular axes avoids artifacts related to the ‘aperture problem’, where detail along the axis of stereo disparity essentially has no measured disparity.

Try this: look at a horizontal object in front of you and blink to switch between your left and right eye. The object doesn’t look very different as you switch eyes, does it? Now hold out your index finger, pointing up, in front of you, and do the same experiment. You’ll see your finger moving dramatically left and right as you switch eyes.

Deriving stereo disparity from two perpendicular baselines affords the Pixel 4 much more accurate depth maps, with the dual cameras providing disparity information that the split pixels might miss, and vice versa. In the example below, provided by Google, the Pixel 4 result is far more believable than the Pixel 3 result, which has parts of the upper and lower green stem, and the horizontally-oriented green leaf near bottom right, accidentally blurred despite falling within the plane of focus.

(dual baseline)

(single baseline)

The combination of two baselines, one short (split pixels) and one significantly longer (the two lenses), also has other benefits. The longer stereo baselines of dual camera setups can run into the problem of occlusion: since the two perspectives are considerably different, one lens may see a background object that, to the other lens, is hidden behind a foreground object. The shorter 1mm baseline of the dual pixel sensor means it’s less prone to errors due to occlusion.

On the other hand, the short disparity of the split pixels means that further away objects that are not quite at infinity appear the same to ‘left-looking’ and ‘right-looking’ (or up/down) photodiodes. The longer baseline of the dual cameras means that stereo disparity can be calculated for these further away objects, which allows the Pixel 4’s portrait mode to better deal with distant subjects, or groups of people shot from further back, as you can see below.
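The geometry here is the standard pinhole stereo relation: disparity = focal_px × baseline / depth, so disparity shrinks with distance and a longer baseline keeps far subjects measurable. The numbers below are illustrative guesses (Google hasn’t published the Pixel 4’s lens spacing or focal length in pixels); the point is just how quickly a ~1mm baseline runs out of disparity:

```python
def disparity_px(baseline_m, focal_px, depth_m):
    """Stereo disparity in pixels for a pinhole model: d = f * B / z."""
    return focal_px * baseline_m / depth_m

# Assumed values: ~1 mm split-pixel baseline, ~10 mm lens-to-lens baseline,
# ~3000 px focal length. None of these figures come from Google.
f = 3000
for z in (0.5, 2.0, 5.0, 20.0):
    split = disparity_px(0.001, f, z)
    dual = disparity_px(0.010, f, z)
    print(f"{z:5.1f} m: split-pixel {split:5.2f} px, dual-camera {dual:5.2f} px")
```

At 20m the split pixels see well under a pixel of disparity, too little to measure reliably, while the assumed dual-camera baseline still yields a usable signal.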

There’s yet another benefit of the two separate methods for calculating stereo disparity: macro photography. If you’ve shot portrait mode on telephoto units of other smartphones, you’ve probably run into error messages like ‘Move farther away’. That’s because these telephoto lenses tend to have a minimum focus distance of ~20cm. Meanwhile, the minimum focus distance of the main camera on the Pixel 4 is only 10cm. That means that for close-up photography, the Pixel 4 can simply use its split pixels and learning-based approach to blur backgrounds.3

One thing we’ll be curious to test is if the additional burden of taking two images with the dual camera setup will lead to any latency. The iPhone 11, for example, has considerable shutter lag in portrait mode.

Google continues to keep a range of planes in perfect focus, which can sometimes lead to odd results where multiple people in a scene remain focused despite being at different depths. However, this approach avoids prematurely blurring parts of people that shouldn’t be blurred, a common problem with iPhones.

Oddly, portrait mode is unavailable with the zoomed-in lens; instead, the camera uses the same 1.5x crop from the main camera that the Pixel 3 used. This means images will have less detail than some competitors’, especially since the super-res zoom pipeline is still not used in portrait mode. It also means you don’t get the versatility of both wide-angle and telephoto portrait shots. And if there’s one thing you probably know about me, it’s that I love my wide angle portraits!

Pixel 4’s portrait mode continues to use a 1.5x crop from the main camera. This means that, like the Pixel 3, it will have considerably less detail than portrait modes from competitors like the iPhone 11 Pro that use the full-resolution image from wide or tele modules. Click to view at 100%

Further improvements

There are a few more updates to note.

Learning-based AWB

The learning-based white balance that debuted in Night Sight is now the default auto white balance (AWB) algorithm in all camera modes on the Pixel 4. What is learning-based white balance? Google trained its traditional AWB algorithm to discriminate between poorly and properly white-balanced images. The company did this by hand-correcting images captured using the traditional AWB algorithm, then using those corrected images to train the algorithm to suggest appropriate color shifts toward a more neutral output.
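For contrast, the classic non-learned baseline that such training improves on is a heuristic like gray-world white balance, which assumes the scene averages out to neutral. This sketch is that traditional heuristic, not Google’s learned model:

```python
import numpy as np

def gray_world_awb(img):
    """Classic gray-world white balance: scale each channel so the image
    averages to neutral gray. Learned AWB (as on the Pixel 4) replaces this
    kind of heuristic with a trained predictor of the per-channel gains."""
    means = img.reshape(-1, 3).mean(axis=0)   # average R, G, B
    gains = means.mean() / means              # push the average toward gray
    return np.clip(img * gains, 0, 1)
```

The heuristic fails exactly where the article says learned AWB helps: scenes genuinely dominated by one color (a sunset, a single-color artificial light) get wrongly neutralized.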

Google tells us that the latest iteration of the algorithm is improved in a number of ways. A larger training data set has been used to yield better results in low light and adversarial lighting conditions. The new AWB algorithm is better at recognizing specific, common illuminants and adjusting for them, and also yields better results under artificial lights of one dominant color. We’ve been impressed with white balance results in Night Sight on the Pixel 3, and are glad to see it ported over to all camera modes. See below how Google’s learning-based AWB (top left) preserves both blue and red/orange tones in the sky compared to its traditional AWB (top right), and how much better it is at separating complex sunset colors (bottom left) compared to the iPhone XS (bottom right).

Learning-based AWB (Pixel 3 Night Sight) Traditional AWB (Pixel 3)
Learning-based AWB (Pixel 3 Night Sight) iPhone XS HDR result

New face detector

A new face detection algorithm, based solely on machine learning, is now used to detect, focus on, and expose for faces in the scene. The new face detector is more robust at identifying faces in challenging lighting conditions, which should help the Pixel 4 better focus on and expose for strongly backlit faces, for example. The Pixel 3 would often prioritize exposure for highlights and underexpose faces in backlit conditions.

Though tonemapping would brighten the face properly in post-processing, the shorter exposure would mean more noise in shadows and midtones, which after noise reduction could lead to smeared, blurry results. In the example below the Pixel 3 used an exposure time of 1/300s while the iPhone 11 yielded more detailed results due to its use of an exposure more appropriate for the subject (1/60s).

Along with the new face detector, the Pixel 4 will (finally) indicate the face it’s focusing on in the ‘viewfinder’ as you compose. In the past, Pixel phones would simply show a circle in the center of the screen every time they refocused, a confusing experience that left users wondering whether the camera was in fact focusing on a face in the scene or simply on the center. Indicating the face it’s focusing on should allow Pixel 4 users to worry less, and feel less need to tap on a face the camera is already tracking.

On previous Pixel phones, a circle focus indicator would pop up in the center when the camera refocused, leading to confusion. Is the camera focusing on the face, or the outstretched hand? On the Huawei P20, the camera indicates when it’s tracking a face. The Pixel 4 will have a similar visual indicator.

Semantic segmentation

This isn’t new, but in his keynote Marc Levoy mentioned ‘semantic segmentation’ which, like the iPhone’s approach, allows image processing to treat different portions of the scene differently. It’s been around for years, in fact, allowing Pixel phones to brighten faces (‘synthetic fill flash’), or to better separate foregrounds and backgrounds in Portrait mode shots. I’d personally point out that Google takes a more conservative approach in its implementation: faces aren’t brightened or treated differently as much as they tend to be on the iPhone 11. The end result is a matter of personal taste.

Conclusion

The questions on the minds of many of our readers will undoubtedly be: (1) what is the best smartphone for photography I can buy, and (2) when should I consider using such a device as opposed to my dedicated camera?

We have much testing to do and many side-by-sides to come. But from our tests thus far and our recent iPhone 11 vs. Pixel 3 Night Sight article, one thing is clear: in most situations the Pixel cameras are capable of a level of image quality unsurpassed by any other smartphone when you compare images at the pixel (no pun intended) level.

But other devices are catching up or, in some respects, exceeding Pixel phone capabilities. Huawei’s field-of-view fusion offers compelling image quality across multiple zoom ratios thanks to its fusion of image data from multiple lenses. iPhones offer a wide-angle portrait mode far better suited to the types of photography casual users engage in, with better image quality than the Pixel’s (cropped) Portrait mode to boot.

The Pixel 4 takes an already great camera and refines it to achieve results closer to, and in some cases surpassing, traditional cameras and optics

Overall though, Google Pixel phones deliver some of the best image quality we’ve seen from a mobile device. No other phone can compete with its Raw results, since Raws are a result of a burst of images stacked using Google’s robust align-and-merge algorithm. Night Sight is now improved to allow for superior results with static scenes demanding long exposures. And Portrait mode is vastly improved thanks to dual baselines and machine learning, with fewer depth map errors, better ability to ‘cut around’ complex objects like pet fur or loose hair strands, and pleasing out-of-focus highlights thanks to ‘DSLR-like bokeh’. AWB is improved, and a new learning-based face detector should improve focus and exposure of faces under challenging lighting.

It’s not going to replace your dedicated camera in all situations, but in many it might. The Pixel 4 takes an already great camera in the Pixel 3, and refines it further to achieve results closer to, and in some cases surpassing, traditional cameras and optics. Stay tuned for more thorough tests once we get a unit in our hands.

Finally, have a watch of Marc Levoy's keynote presentation from yesterday, below. And if you haven’t already, watch his lectures on digital photography or visit the website for the digital photography class he taught while at Stanford. There’s a wealth of information on digital imaging in those talks, and Marc has a knack for distilling complex topics into elegantly simple terms.


Footnotes:

1 The Pixel 3’s dim display combined with the dark shadows of a non-HDR preview often made the experience of shooting high contrast scenes outdoors lackluster, sometimes even making it difficult to compose. Live HDR+ should dramatically improve the experience, though the display remains relatively dim compared to the iPhone 11 Pro.

2 The original paper on HDR+ by Hasinoff and Levoy claims HDR+ can handle displacements of up to 169 pixels within a single raw color channel image. For a 12MP 4:3 Bayer sensor, that’s 169 pixels of a 2000 pixel wide (3MP) image, which amounts to ~8.5%. Furthermore, tile-based alignment is performed using as small as 16×16 pixel blocks of that single raw channel image. That amounts to ~12,000 effective tiles that can be individually aligned.
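The footnote’s figures can be verified directly:

```python
# Checking footnote 2's arithmetic for a 12MP 4:3 (4000x3000) Bayer sensor.
plane_w, plane_h = 4000 // 2, 3000 // 2    # one raw color plane: 2000x1500 (3MP)
displacement_frac = 169 / plane_w          # 169 px of 2000 px => ~8.5%
tiles = (plane_w // 16) * (plane_h // 16)  # 16x16 tiles => ~12,000
print(f"{displacement_frac:.1%}, {tiles} tiles")
```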

3 The iPhone 11’s wide angle portrait mode also allows you to get closer to subjects, since its ultra-wide and wide cameras can focus on nearer subjects than its telephoto lens.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on These are the most important Google Pixel 4 camera updates

Posted in Uncategorized

 

London researchers develop plant-powered camera system for conservation efforts

19 Oct

ZSL London Zoo has detailed the results of a new scientific trial that successfully powered a tiny camera using plants. At the core of the system are microbial fuel cells designed to harness the energy produced by bacteria in the soil as they break down biomatter produced by plants. The end result, according to ZSL, may one day be plant-powered cameras that can be used as part of conservation efforts.

The microbial fuel cells were installed in the London Zoo’s Rainforest Life exhibit for use with a maidenhair fern named Pete. Unlike batteries, which need to be regularly recharged via sunlight or an external power source, plant-based fuel cells can continuously power many low-energy sensors, cameras, and other devices in a variety of environments.

‘We’ve quite literally plugged into nature to help protect the world’s wildlife: Pete has surpassed our expectations and is currently taking a photo every 20 seconds,’ said ZSL Conservation Technology Specialist Al Davies. ‘He’s been working so well we’ve even accidentally photobombed him a few times!’ Below are a few photos captured with the system:

By utilizing this technology, conservationists may be able to monitor plant growth, temperature, and other data using remote hardware without relying on solar panels and batteries. Following additional refinement, the team plans to test the technology in the wild.


Image credits: Photos shared with kind permission from ZSL London Zoo.


 

5 Questions to Ask Before Buying Used Camera Gear

18 Oct

The post 5 Questions to Ask Before Buying Used Camera Gear appeared first on Digital Photography School. It was authored by Jaymes Dempsey.

Questions to Ask Before Buying Used Camera Gear

I’ve bought a lot of used gear over the last decade.

Cameras.

Lenses.

Batteries.

And more.

Questions-to-Ask-Before-Buying-Used-Camera-Gear

A lot of those purchases turned out great. Some of them I still use to this day.

But a large chunk of the used purchases I made?

Trash.

In fact, in my more naive years, I was forced to return over 50% of the gear that I purchased. There were just so many problems: sand in focusing rings, stains on the front element, shutter buttons that couldn’t communicate with the shutter. (Oh, and my least favorite: Fungus inside the lens. Doesn’t that just make you shiver?)

And here’s the kicker:

I bought all of this gear from respectable sellers, who described the equipment as in “excellent condition,” “flawless,” “perfect,” “like new,” – you name it.

It got so bad that I considered leaving the used market entirely and just buying new. But I resisted.

Why?

Used camera gear is a real bargain – if you buy carefully. This is why I took all of my negative gear-buying experiences and turned them into a process for making sure I purchased good used gear.

At the core of that process is a series of questions. Questions that I’m going to share with you today. Some of the questions are for you, the buyer. Others should be posed to the seller before you put any cash down.

Are you ready to discover how to buy used gear effectively?

Let’s get started!

Questions-to-Ask-Before-Buying-Used-Camera-Gear

Question 1: Are you buying from a reputable seller with a money-back guarantee?

This is the number one most important thing that you should do when buying used gear.

Purchase from a seller that you trust – and that gives you an enforceable money-back guarantee. You don’t want to purchase a camera online, only to find that it’s full of water damage and sports a cracked LCD.

This means that buying used through Amazon is fine. All of their products are backed by Amazon’s month-long guarantee.

Buying used through eBay is also fine. eBay’s buyer protection ensures that you’re not going to get ripped off in such an obvious fashion.

5 Questions to Ask Before Buying Used Camera Gear

But this makes most forums (if not all forums) off-limits. If the forum doesn’t have a serious money-back guarantee that’s honored by the site itself, then stay away.

This also makes in-person sales off-limits, such as those done through Craigslist. Sure, you can inspect the item upon receipt, but what are you going to do when you get home, put that lens under a light, and realize it’s filled with an army of fungus?

It’ll be too late, and your seller may not be so receptive to a return.

So just don’t do it. Instead, use sites like Amazon, eBay, B&H, or KEH, which all have clear money-back guarantees.

Question 2: Does the seller include actual pictures of the gear?

Sellers not including pictures is a big warning sign, especially on a website like eBay, where pictures are the norm. It should make you ask: Why doesn’t the seller want to show off their “excellent condition” item? Is there something they’re trying to hide?

Another red flag is a listing that only shows a stock photo. These are easy to spot; they look far better than anything a casual eBay seller would have taken, and there tend to be only one or two of them.

5 Questions to Ask Before Buying Used Camera Gear

If you like the price and everything else checks out, then go ahead and shoot the seller an email, asking for in-depth pictures of the item. If the seller refuses, then it’s time to look elsewhere.

You might come across some sellers who are offering many units of the same item (e.g., five Canon 7D Mark II’s). In this case, they likely have shown a stock photo, or a photo of one item, because they don’t want to go through the effort of photographing each piece of kit.

In such cases, you should message the seller and ask for pictures of the exact item that you’ll be purchasing. It’s too easy, especially with these big sellers, to end up with an item that you’ll have to send back.

Question 3: How many shutter actuations has the camera fired?

(Note: This section is for buying cameras.)

First things first: A shutter actuation refers to a single shot taken with a camera.

Every camera has a number of actuations its shutter is rated for. Once the shutter reaches around that point, it just…fails. While you can get the shutter replaced, it generally costs enough that you’re probably better off buying a new camera body.

5 Questions to Ask Before Buying Used Camera Gear

If you want to know the shutter actuation rating of any particular camera, you can look it up through a quick Google search.

Of course, the shutter rating isn’t a hard and fast rule. There are some cameras that go far beyond their predicted shutter count, and there are some cameras that fail far sooner. The shutter rating is just an average.

Now, when you look at camera listings online, you’ll see that shutter actuations are reported about fifty percent of the time.

But the other fifty percent of the time, there will be no mention of them.

This is for three possible reasons:

  1. The seller doesn’t know about the importance of shutter actuations.
  2. The seller can’t figure out how to determine the shutter actuations for their camera.
  3. The seller doesn’t wish to share the shutter count because it won’t help the sale.

I would never buy a camera without knowing its shutter count. Therefore, I recommend reaching out to the seller and asking.

If the seller refuses to share the count, then let the camera go. If the seller claims they don’t know how to view the shutter count, explain that they should be able to find it easily, either within the camera itself or through a website such as https://www.camerashuttercount.com/.

If they still won’t give you the count, then don’t buy. It’s not worth risking it.

Question 4: Does the lens have any blemishes on the glass, fungus, scratches, haze, or problems with the focusing ring?

(Note that this is for purchasing lenses.)

This is a question to ask the seller, and I suggest you do it every single time you make a purchase.

5 Questions to Ask Before Buying Used Camera Gear

Yes, the seller may be annoyed by your specific question. But this is a transaction; it’s not about being nice to the seller! And I’ve never had someone refuse to sell to me because I annoyed them with questions.

In fact, what makes this question so valuable is that it often forces sellers to actually consider the equipment they’re selling. Up until this point, the seller may not have really thought about some of these things. So it can act as a bit of a wake-up call and make the seller describe the item beyond “excellent condition.”

When you ask this question, make it clear that you want a detailed description. You genuinely want the seller to check for scratches on the glass, fungus in the lens, problems with the focusing ring, and more. You don’t want a perfunctory examination.

Unfortunately, there will still be some people who don’t do a serious examination, or who lie in the hopes that you won’t notice the issues (or be bothered enough to make a return). But asking the question is the best you can do.

Question 5: Has the seller noticed any issues with the item in the past?

This is another question to ask the seller before you hit the Buy button. It’s meant as a final attempt to determine whether the item has any issues.

In this case, by asking about the item’s past.

5 Questions to Ask Before Buying Used Camera Gear

Unfortunately, there will be sellers who have had an item break repeatedly – but, as long as it’s working at the moment they take the photos, they’ll give it the “perfect condition” label. Fortunately, many sellers will still be honest with you. If they’ve had a problem with the item, they’ll say.

So it’s definitely worth asking – just to be safe.

5 Questions to ask before buying used camera gear: Conclusion

Now that you know the five most important questions to ask before buying used camera gear, you’re well equipped to start buying gear online.

Questions-to-Ask-Before-Buying-Used-Camera-Gear

Yes, you’re still going to run into the occasional issue, but if you’re careful, and you think about these crucial questions to ask before buying used camera gear…

…the number of issues will be far, far lower.

And you’ll be able to effectively take advantage of used camera equipment!

Questions-to-Ask-Before-Buying-Used-Camera-Gear

The post 5 Questions to Ask Before Buying Used Camera Gear appeared first on Digital Photography School. It was authored by Jaymes Dempsey.

