
Posts Tagged ‘Sight’

Google explains its Night Sight astrophotography mode in detail

27 Nov

Ever since Google launched its Night Sight feature on the Pixel 3 series, the low light photography mode has been very popular with users. On the new Pixel 4, Google has updated Night Sight with a specific mode for astrophotography, and the team behind it has now authored a blog post explaining the function in more detail.

In order to capture as much light as possible without using shutter speeds that would require a tripod and/or lead to blur on any moving subject, Night Sight splits the exposure across multiple frames that are aligned to compensate for camera shake and in-scene motion. In a second step the frames are averaged to reduce noise and increase image detail.
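The payoff of the averaging step can be illustrated with a toy simulation in plain Python (values and frame counts are purely illustrative): averaging N aligned frames leaves the signal unchanged while shrinking random noise by roughly a factor of √N.

```python
import random
from statistics import pstdev

def average_frames(frames):
    """Per-pixel mean of a list of aligned frames (each a flat list of values)."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

# Simulate 15 noisy captures of a flat grey patch: true level 100,
# per-frame Gaussian noise with sigma = 20.
random.seed(0)
frames = [[random.gauss(100, 20) for _ in range(5000)] for _ in range(15)]
merged = average_frames(frames)

print(pstdev(frames[0]))  # per-frame noise, roughly 20
print(pstdev(merged))     # merged noise, roughly 20 / sqrt(15), about 5
```

The real pipeline additionally aligns the frames tile by tile before averaging; this sketch assumes the frames are already aligned.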

The astrophotography feature uses the same approach in principle but uses longer exposure times for individual frames and therefore relies on tripod use or some other kind of support.

Image with hot pixels (left) and the corrected version (right)

The team decided exposure times of individual frames should not be longer than 16 seconds, so that the stars look like points of light rather than streaks. The team also found that most users were not patient enough to wait longer than four minutes for a full exposure, so the feature uses a maximum of 15 frames with up to 16 seconds of exposure time per frame.
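Those two constraints line up neatly: a hypothetical planner (the function name and structure are mine, not Google's) that caps per-frame exposure at 16 seconds and total capture time at four minutes arrives at exactly the 15-frame limit described.

```python
def plan_astro_capture(total_budget_s=240.0, max_frame_s=16.0, max_frames=15):
    """Pick a frame count and total exposure under the stated caps:
    frames no longer than 16 s (to keep stars as points of light) and a
    total capture no longer than the ~4 minutes users are willing to wait."""
    frames = min(max_frames, int(total_budget_s // max_frame_s))
    return frames, frames * max_frame_s

frames, total = plan_astro_capture()
print(frames, total)  # 15 frames, 240.0 s (exactly 4 minutes)
```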

At such long exposure times hot pixels can become a problem. The system identifies them by comparing neighboring pixels within the same frame as well as across the sequence of frames recorded for a Night Sight image. If an outlier is detected, its value is replaced by an average of its neighbors.
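A minimal single-frame version of that outlier test might look like the sketch below (the threshold and 3×3 neighbourhood are illustrative assumptions; the real pipeline also compares across frames):

```python
from statistics import median

def fix_hot_pixels(frame, threshold=4.0):
    """Replace any pixel that is far brighter than the median of its
    neighbours with the neighbourhood average (threshold is illustrative)."""
    h, w = len(frame), len(frame[0])
    out = [row[:] for row in frame]  # leave the input frame untouched
    for y in range(h):
        for x in range(w):
            neighbors = [frame[ny][nx]
                         for ny in range(max(0, y - 1), min(h, y + 2))
                         for nx in range(max(0, x - 1), min(w, x + 2))
                         if (ny, nx) != (y, x)]
            if frame[y][x] > threshold * max(median(neighbors), 1):
                out[y][x] = sum(neighbors) / len(neighbors)
    return out
```

Using the median of the neighbourhood (rather than the mean) keeps a single hot pixel from inflating its own detection baseline.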

In addition the feature uses AI to identify the sky in night images and selectively darken it for image results that are closer to the real scene than what you would achieve with a conventional long exposure.

This image was captured under the lighting of a full moon. The left half shows the version without any sky processing applied. On the right the sky has been slightly darkened for a more realistic result, without affecting the landscape elements in the frame.

Night Sight is not only about capture, though, it also includes a special viewfinder that is optimized for shooting in ultra-low light. When the shutter is pressed each individual long-exposure frame is displayed as it is captured, showing much more detail than the standard preview image. The composition can then be corrected and a new Night Sight shot triggered.

Some of the results we have seen have been impressive. For more technical detail, head over to the original post on the Google blog. An album of full-size sample images can be found here, and the team has also put together a helpful guide to using the feature in PDF format.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Google explains its Night Sight astrophotography mode in detail

Posted in Uncategorized

 

Fujifilm X-Pro3 review: living in the moment, not a screen in sight

12 Nov


Silver Award

85%
Overall score

The Fujifilm X-Pro3 is a 26 megapixel mirrorless interchangeable lens camera built around a clever optical / electronic viewfinder and designed to look like a classic rangefinder.

This, the third iteration of Fujifilm’s first X-mount camera, gains titanium top and base plates, but its most noteworthy feature is an LCD panel that faces the back of the camera and needs to be flipped down for use. The viewfinder and rear screen are the main distinctions between this and the similarly-specced X-T3.

A low-resolution status panel on the back of the camera speaks to the underlying ethos of the camera, which we’ll look into in more detail on the next page.

Key Specifications

  • 26MP APS-C BSI CMOS sensor
  • Optical/Electronic hybrid viewfinder
  • Fold down rear LCD
  • Rear-facing Memory LCD status panel
  • Titanium top/bottom plates
  • 4K video at up to 30p, 200Mbps
  • 11 Film Simulation modes, now with ‘Classic Neg’

The X-Pro3 is available in a painted black version with a list price of $1799, or with a silver or black hardened, coated surface for $1999.


What’s new and how it compares

The X-Pro3 looks a lot like its predecessors except for one major change.

Read more

Body and controls

A new titanium top plate, rear ‘sub monitor’ and hidden flip-out LCD round out the major body updates.

Read more

First impressions

Photo editor Dan Bracaglia took a pre-production X-Pro3 on holiday to Northern California. Here are his thoughts on the hidden rear screen.

Read more

Image quality

The X-Pro3 offers the excellent image quality and attractive processing options we saw in the X-T3. It also gains an in-camera HDR mode.

Read more

Autofocus

The X-Pro3’s autofocus is highly capable but requires more user input than the best of its peers.

Read more

Video

Despite its old-skool stills ethos, the X-Pro3 can shoot some impressive video footage.

Read more

Shooting with the X-Pro3

The X-Pro3’s design pushes you to shoot with the optical finder or with the camera at waist level. We found both methods to be limiting and engaging to different degrees.

Read more

Conclusion

The X-Pro3 is an intentionally divisive camera, but one we think will hold a certain appeal for some photographers.

Read more

Sample gallery

The X-Pro3 gains the ‘Classic Negative’ film simulation. Check out examples of it and more in our sample gallery.

See more


The Google Pixel 4 Will Feature Two Cameras Plus Enhanced Night Sight

19 Oct

The post The Google Pixel 4 Will Feature Two Cameras Plus Enhanced Night Sight appeared first on Digital Photography School. It was authored by Jaymes Dempsey.

 


Earlier this week Google announced the long-awaited Pixel 4, which promises to take smartphone photography to a whole new level.

This comes in the wake of Apple’s iPhone 11 Pro announcement last month, which saw the debut of a triple-camera setup and features such as Night Mode.

In other words, the Pixel 4 is a competitor in an intense fight to create the best cameras, the best lenses, and the best camera software.

So what does the Google Pixel 4 offer?

Let’s take a closer look:

First, the Google Pixel 4 features a dual-camera setup, offering the usual wide-angle lens alongside a new 2X telephoto option. This isn’t unique (Apple has regularly included “telephoto” lenses going all the way back to the iPhone 7 Plus), but it is a nice addition for those who need a bit more reach. You can use the 2X lens for tighter portraits, and it’s also useful for street photography, where you often need to photograph subjects from a distance.

Interestingly, Google has decided to keep the wide-angle camera at 12 megapixels but has packed a 16-megapixel sensor into the telephoto camera. While plenty of photographers will be excited by this jump in resolution, it remains to be seen whether such tiny pixels will result in significant noise.

The dual-camera setup should also improve Google’s Portrait Mode, and Google has promised more natural background blur and very precise edges (e.g., when dealing with hair). Truthfully, I’m skeptical. I’ve yet to see a Portrait mode photo that looks perfect on any smartphone camera. But I’ll wait until I see the results from the Pixel 4 before judging.

One cool new feature that will debut in the Pixel 4 is Live HDR. When you go to capture an HDR photo, you’ll be able to see a live HDR preview on your smartphone screen; this should give you a sense of what you can expect from the HDR+ effect.

Finally, if you enjoy doing astrophotography, you’re in luck: The Pixel 4 offers an improved Night Sight mode, in which you can take stunning photos of the night sky. It works by taking a series of long exposures, before blending them together to create a beautiful final photo. Note that you’ll need a tripod or other method of stabilization to get sharp astrophotography shots.

Overall, the Google Pixel 4 offers some impressive new features, even if none of them feel totally groundbreaking. Up until now, the Pixel lineup has dominated regarding low-light shooting, and the enhanced Night Sight suggests that Google plans to keep running with this success.

The Google Pixel 4 is currently available for preorder starting at $799 USD and will hit the shelves on October 24.

You can check out this first look video from CNET to get more of an idea of the Google Pixel 4.


Are you interested in the Google Pixel 4? Let us know in the comments!



Digital Photography School

 

Posted in Photography

 

How does iPhone 11 Night Mode compare to Google Pixel 3 Night Sight?

15 Oct

Many smartphones today take great images in broad daylight. That’s no surprise – when there’s a lot of light, it doesn’t matter so much that the small smartphone sensor doesn’t collect as many photons as a larger sensor: there’s an abundance of photons to begin with. But smartphone image quality can take a nosedive as light levels drop and there just aren’t many photons to collect (especially for a small sensor). That’s where computational techniques and burst photography come in.

Low light performance is a huge differentiator that separates the best smartphones from the worst. And Google’s Night Sight has been the low-light king of late1, thanks to its averaging of many (up to 15) frames, its clever tile-based alignment to deal with hand movement and motion in the scene, and its use of a super-resolution pipeline that yields far better resolution, particularly color resolution, and lower noise than simple frame-stacking techniques.

With the iPhone 11, Apple launched its own Night Mode to compete with offerings from Android phones. It uses ‘adaptive bracketing’ to combine both long and short exposures (to freeze any movement) to build a high quality image in low light conditions. Let’s see how it stacks up compared to Google’s Night Sight and Apple’s own previous generation iPhone XS.

The set-up

‘Low light performance’ is difficult to sum up in one number or picture when it comes to computational imaging. Different devices take different approaches, which ultimately means that comparative performance across devices can vary significantly with light level. Hence we’ve chosen to look at how the iPhone 11 performs as light levels decrease from evening light before sunset to very low light conditions well after sunset. The images span an hour-long time frame, from approximately 500 lux to 5 lux. All shots are handheld, since this is how we expect users to operate their smartphones. The iPhone 11 images spanning this time period are shown below.

7:00 pm, evening light: 1/60 sec | ISO 100 | 485 lux | 7.6 EV

7:25 pm, late evening light: 1/8 sec | ISO 250 | 25 lux | 3.4 EV

7:50 pm, low light: 1/4 sec | ISO 640 | 5 lux | 1 EV

8:05 pm, very low light: 1/8 sec | ISO 1250 | <5 lux | <1 EV
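The lux and EV figures quoted above are consistent with the standard incident-light approximation EV100 ≈ log2(lux / 2.5), which follows from the incident-meter equation EV = log2(lux · ISO / C) with the common calibration constant C = 250. A quick sanity check:

```python
import math

def lux_to_ev100(lux):
    """Approximate exposure value at ISO 100 from illuminance in lux,
    using the common incident-meter calibration constant C = 250."""
    return math.log2(lux * 100 / 250)

for lux in (485, 25, 5):
    print(lux, round(lux_to_ev100(lux), 1))
# 485 lux -> 7.6 EV, 25 lux -> 3.3 EV, 5 lux -> 1.0 EV
```

The 25 lux reading lands at 3.3 EV versus the 3.4 EV quoted, which is within rounding of the metered value; the other two match exactly.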

Note that Night mode is only available with the main camera unit, not the 2x or 0.5x cameras. And before we proceed to our comparisons, please see this footnote about the rollovers and crops that follow: on ‘HiDPI’ screens like smartphones and higher-end laptops/displays, the following crops are 100%, but on ‘standard’ displays you’ll only see 50% crops.2

Now, on to the comparisons. In the headings, we’ve labeled the winner.

Evening light (485 lux) | Winner: Google Pixel 3

Before sunset, there’s still a good amount of available light. At this light level (485 lux, as measured by the iPhone 11 camera), the option for Night mode on iPhone 11 is not available. Yet Night Sight on the Google Pixel 3 is available, as it is in all situations. And thanks to its averaging of up to 15 frames and its super-resolution pipeline, it provides far more detail than the iPhone 11.

It’s not even close.

Take a look at the detail in the foreground trees and foliage, particularly right behind the fence at the bottom. Or the buildings and their windows up top, which appear far crisper on the Pixel 3.

Late evening light (25 lux) | Winner: Google Pixel 3

As the sun sets, light levels drop, and at 25 lux we finally have the option to turn on Night Mode on the iPhone, though it’s clearly not suggested by Apple since it’s not turned on by default. You’ll see the Night Mode option as a moon-like icon appearing on the bottom left of the screen in landscape orientation. Below we have a comparison of the iPhone with Night Mode manually turned on next to the Google Pixel 3 Night Sight (also manually enabled).

There’s more detail and far less noise – particularly in the skies – in the Google Pixel 3 shot. It’s hard to tell what shutter speeds and total exposure time either camera used, since these stacking techniques use differing shutter speeds and discard frames or tiles at will based on their quality or usability. But it appears that, at best, the Pixel 3 used 15 frames at a 1/5 sec shutter speed, or 3 sec total, while the iPhone 11 indicated in its user interface that it would use a total of 1 sec (the EXIF indicates 1/8 sec, so is likely unrepresentative). In other words, the Pixel 3 appears to have used a longer total exposure time here.

Apart from that, though, the fact that the iPhone result looks noisier than the same shot with Night Mode manually turned off (not shown) leads us to believe that the noisy results are at least in part due to Apple’s decision to use less noise reduction in Night Mode. This mode appears to assume that the longer overall exposures will lead to lower noise and, therefore, less of a need for noise reduction.

However, in the end, it appears that under these light levels Apple is not using a long enough total exposure (the cumulative result of short and long frames) to yield low enough noise results that the lower noise reduction levels are appropriate. So, in these conditions when it appears light levels are not low enough for Apple to turn on Night Mode by default, the Google Pixel 3 outperforms, again.

Low light (5 lux) | Winner: Tie

As light levels drop further to around 5 lux, the iPhone 11 Night mode appears to catch up to Google’s Night Sight. Take a look above, and it’s hard to choose a winner. The EXIF data indicates the Pixel used 1/8s shutter speeds per frame, while the iPhone used at least 1/4s shutter speed for one or more frames, so it’s possible that the iPhone’s use of longer exposure times per frame allows it to catch up to Google’s result, despite presumably using fewer total frames. Keynotes from Apple and personal conversations with Google indicate that Apple only uses up to 8-9 frames of both short and long exposures, while the Pixel uses up to 15 frames of consistent exposure, for each phone’s respective burst photography frame-stacking methods.

Very low light (< 5 lux) | Winner: iPhone 11

As light levels drop even further, the iPhone 11 catches up to and surpasses Google’s Night Sight results. Note the lower noise in the dark blue sky above the cityscape. And while overall detail levels appear similar, buildings and windows look crisper thanks to lower noise and a higher signal-to-noise ratio. We presume this is due to the use of longer exposure times per frame.

It’s worth noting the iPhone, in this case, delivers a slightly darker result, which arguably ends up being more pleasing, to me anyway. Google’s Night Sight also does a good job of ensuring that nighttime shots don’t end up looking like daytime, but Apple appears to take a slightly more conservative approach.

We shot an even darker scene to see if the iPhone’s advantage persisted. Indeed, the iPhone 11’s advantage became even greater as light levels dropped further. Have a look below.


As you can see, the iPhone 11 delivers a more pleasing result, with more detail and considerably less noise, particularly in peripheral areas of the image where lens vignetting considerably lowers image quality as evidenced by the drastically increased noise in the Pixel 3 results.

Ultimately it appears that the lower the light levels, the better the iPhone 11 performs comparatively.

A consideration: (slightly) moving subjects

Neither camera’s night mode is meant for photographing moving subjects, but that doesn’t mean they can’t deal with motion. Because these devices use tile-based alignment to merge frames to the base frame, static and moving subjects in a scene can be treated differently. For example, on the iPhone, shorter and longer exposures can be used for moving and static subjects, respectively. Frames with too much motion blur for the moving subjects may be discarded, or perhaps only have their static portions used if the algorithms are clever enough.
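A heavily simplified sketch of that tile-based decision is below (flat lists, the tile size, and the threshold are illustrative assumptions, not Apple’s or Google’s actual algorithm): tiles that agree with the base frame are averaged to cut noise, while tiles that moved are rejected and the base frame's pixels kept.

```python
def merge_frames_tiled(base, alt, tile=4, threshold=30):
    """Merge an alternate frame into the base frame tile by tile:
    tiles that agree (static content) are averaged to reduce noise;
    tiles that differ too much (motion) fall back to the base frame."""
    merged = []
    for i in range(0, len(base), tile):
        b, a = base[i:i + tile], alt[i:i + tile]
        mean_abs_diff = sum(abs(x - y) for x, y in zip(b, a)) / len(b)
        if mean_abs_diff <= threshold:
            merged.extend((x + y) / 2 for x, y in zip(b, a))  # static: average
        else:
            merged.extend(b)                                  # moving: keep base
    return merged

# First tile matches (noise only) and gets averaged; second tile moved
# and is rejected, keeping the base frame's values.
print(merge_frames_tiled([10] * 8, [12] * 4 + [200] * 4))
# [11.0, 11.0, 11.0, 11.0, 10, 10, 10, 10]
```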

Below we take a look at a slightly moving subject in two lighting conditions: the first dark enough for Night mode to be available as an option on the iPhone (though it isn’t automatically triggered until darker conditions), and the second in very dim indoor lighting where Night mode automatically triggers.

Although I asked my subject to stay still, she moved around a bit as children are wont to do. The iPhone handles this modest motion well. You’ll recall that Apple’s Night mode uses adaptive bracketing, meaning it can combine both short and long exposures for the final result. It appears that the exposure times used for the face weren’t long enough to avoid a considerable degree of noise, which is exacerbated by more conservative application of noise reduction to Night mode shots. Here, we prefer the results without Night mode enabled, despite the slight watercolor painting-like result when viewed at 100%.

We tested the iPhone 11 vs. the Google Pixel 3 with very slightly moving subjects under even darker conditions below.

Here you can see that Apple’s Night mode yields lower noise than with the mode (manually) turned off. With the mode turned off, it appears Deep Fusion is active3, which yields slightly more detail at the cost of more noise (the lack of a smeary, watercolor painting-like texture is a giveaway that Deep Fusion kicked in). Neither iPhone result is as noise-free and crisply detailed as the Pixel 3 Night Sight shot, though. We can speculate that the better result is due to either the use of more total frames, or perhaps more effective use of frames where the subject has slightly moved, or some combination thereof. Google’s tile-based alignment can deal with inter-frame subject movement of up to 8% of the frame, instead of simply discarding tiles and frames where the subject has moved. It is unclear how robust Apple’s align-and-merge algorithm is comparatively.

Vs. iPhone XS

We tested the iPhone 11 Night Mode vs. the iPhone XS, which has no Night Mode to begin with. As you can see below, the XS image is far darker, with more noise and less detail than the iPhone 11. This is no surprise, but it’s informative to see the difference between the two cameras.

Conclusion

The iPhone 11’s Night Mode is formidable and a very welcome tool in Apple’s arsenal. It not only provides pleasing images for its users, but sometimes even surpasses what is easily achievable by dedicated cameras. In the very lowest of light conditions, Apple has even managed to surpass the results of Google’s Night Sight, highly regarded – and rightfully so – as the industry standard for low light smartphone photography.

But there are some caveats. First, in less low light conditions – situations you’re actually more likely to be shooting in – Google’s use of more frames and its super-resolution pipeline mean that its Pixel 3 renders considerably better results, both in terms of noise and resolution. In fact, the Pixel 3 can out-resolve even the full-frame Sony a7S II, with more color resolution and less color aliasing.

Second, as soon as you throw people as subjects into the mix, things get a bit muddled. Both cameras perform pretty well, but we found Google’s Night Sight to more consistently yield sharper images with modest subject motion in the scene. Its use of up to 15 frames ensures lower noise, and its align-and-stack method can make use of many of those frames even if your subject has slightly moved, since the algorithm can tolerate inter-frame subject movement of up to ~8% of the frame.

That shouldn’t undermine Apple’s effort here which, overall, is currently class-leading under very, very low light conditions where the iPhone can use and fuse multiple frames of very long exposure. We’re told the iPhone 11 can use total exposure times of 10s handheld, and 28s on a tripod. Google’s Night Sight, on the other hand, tends to use an upper limit of 1/3s per frame handheld, or up to 1s on a tripod. Rumors, however, suggest the Pixel 4 will be capable of even longer total exposures, so it remains to be seen who will be the ultimate low light king.

Currently though, if you’re photographing perfectly still scenes in very low light, Apple’s iPhone 11 is your best bet. For most users, factoring in moving subjects and less low light (yet still dark) conditions, Google’s Night Sight remains the technology to beat.


Footnotes:

1 Huawei phones have their own formidable night modes; while we haven’t gotten our hands on the latest P30 Pro, The Verge has its own results that show a very compelling offering from the Chinese company.

2 A note about our presentation: these are rollovers, so on desktop you can hover your mouse over the states below the image to switch the crop. On mobile, simply tap the states at the bottom of each rollover to switch the crop. Tap (or click) on the crop itself to launch a separate window with the full-resolution image. Finally, on ‘Retina’ laptops and nearly all modern higher-end smartphones, these are 100% crops (each pixel maps 1 display pixel); however, on ‘standard’ (not HiDPI) displays these are 50% crops. In other words, on standard displays the differences you see are actually under-represented. [return to text]

3 We had updated the iPhone 11 to the latest iOS 13.2 public beta by the time this set of shots was taken; hence the (sudden) availability of Deep Fusion.


 

Leaked Pixel 4 photos show new and improved astrophotography, portrait and Night Sight modes

03 Oct

The Google Pixel 4 is just around the corner, expected to be announced at the Made by Google Event on October 15. We’ve already seen what the Pixel 4 will look like, thanks to both Google and third-party leakers, but today we’re getting more than a hardware leak. 9to5Google has obtained exclusive images that it claims Google will use to promote the new camera capabilities of its impending device.

9to5Google has kindly given us permission to share the full-resolution images directly from their source, saved just once with a watermark applied. The images, as you’ll see below, are a combination of shots captured with the front-facing selfie camera and the rear-facing cameras (rumors point to there being a 12-megapixel main camera and a 16-megapixel telephoto camera). They appear to include photos shot in multiple camera modes, including the improved Night Sight mode and a new star-shooting mode that’s been rumored for some time now.

First up are a few photos that appear to show off the portrait mode of the front-facing camera onboard the Pixel 4. Interestingly, these photos measure in at 4.5 megapixels, nearly half the resolution of the 8-megapixel unit onboard the Pixel 3, so we’re not sure whether these are simply resized or from a larger sensor that’s been supersampled. Whatever the case, they look impressive. The faked bokeh looks both realistic and smooth, while the outline, even around hair, seems to be precise, with only a few notable exceptions (specifically the arm on the white jacket).

Next up are more portrait mode shots with what we presume to be the rear-facing camera on the Pixel 4. These shots measure in at 7-megapixels and were taken with the main camera (the Pixel 4 will feature multiple camera modules). Like the previous shots, the fake bokeh appears to be incredibly accurate, even on difficult subjects, such as a long-haired pet and flyaway hairs.

Moving along, we have three photos (two at 9.2 megapixels and one at 5.2 megapixels) that appear to be taken with Google’s Night Sight mode. Based on the EXIF data embedded in some of the images, the photos were taken with the main 27mm (35mm equivalent) F1.7 camera onboard the Pixel 4. The actual lighting scenario in the scene isn’t known, but the images appear both bright and vibrant with nice dynamic range, even in the images that have multiple light sources at different color temperatures.

Along the lines of Night Sight, there also appears to be a pair of photos showing off the much-rumored night sky camera mode expected to be onboard the Pixel 4. Based on the EXIF data, these images (the header image of this article and the below image) were also captured with the main camera unit, and the GPS data reveals the shots were captured at Pinnacles National Park in Central California along State Route 146. For photos captured with a smartphone, the amount of detail in the night sky is absolutely incredible. Some stars get lost around the silhouettes of the trees in the frames, but the rest of the sky showcases countless stars in the Milky Way.

The remainder of the photos showcase a number of scenes, but it’s not clear what specific camera modes are being used to capture these images. As noted by 9to5Google, it’s been rumored there will be a ‘Motion Mode’ with the Pixel 4, but that’s not yet confirmed, even though a few action-style shots are seen in the following images.

$ (document).ready(function() { SampleGalleryV2({“containerId”:”embeddedSampleGallery_6823414838″,”galleryId”:”6823414838″,”isEmbeddedWidget”:true,”selectedImageIndex”:0,”isMobile”:false}) });

Plenty still remains to be seen, but with the Made by Google Event less than two weeks away, it won’t be long before we know just what the Pixel 4 is capable of. 9to5Google has also detailed a new ‘Dual Exposure’ mode that’s believed to be available on the Pixel 4.


 

NeuralCam Night Photo app brings Google-inspired ‘Night Sight’ functionality to iPhones

27 Aug

An app called NeuralCam Night Photo uses machine learning and computational photography to offer a ‘Night Sight’ mode on the iPhone. The technology works to transform very low-light images into brighter, clearer photos without the need for a tripod using only software.

NeuralCam Night Photo was recently featured on Product Hunt where app creator Alex Camilar had the following to say about the app:

‘Our inspiration for NeuralCam comes from all the various Night Modes available on Android phones, that helped people make brighter and nicer photos in low light settings, whether natural or artificial. We wanted to make the best out of the iPhone’s hardware and give it the software spin needed to get its own Night Mode photography update.’

NeuralCam Night Photo can be used in a variety of low-light settings, both indoors and outdoors, according to Camilar. The entire process happens behind the scenes, meaning NeuralCam should work more or less like any other camera app for iOS: compose the scene you want to capture, wait for the app to focus, capture the image, and within a few seconds you should see a much brighter, clearer photo than would otherwise be possible.

A comparison shared by NeuralCam to show the difference between an image shot in the standard iPhone camera app (left) and NeuralCam (right).

The app works by capturing multiple images and processing them using machine learning. This same computational photography approach has been used by Google for its single-camera Pixel smartphones.

NeuralCam Night Photo is available for the iPhone 6 and newer; it requires iOS 12 and supports both the front and rear cameras on these phones, with the exception of the iPhone 6s / 6s Plus, which only have rear camera support. A full list of supported image resolutions for each iPhone model can be found on the app’s App Store listing, where the product is temporarily discounted to $2.99.


 

Google’s Night Sight allows for photography in near darkness

15 Nov

Google’s latest Pixel 3 smartphone generation comes with the company’s new Night Sight feature, which allows for the capture of well-exposed and clean images in near darkness, without using a tripod or flash. Today Google published a post on its Research Blog explaining in detail the computational photography and machine learning techniques used by the feature and describing the challenges the development team had to overcome in order to capture the desired image results.

Night Sight builds on Google’s multi-frame-merging HDR+ mode that was first introduced in 2014, but takes things a few steps further, merging a larger number of frames and aiming to improve image quality in extremely low light levels between 3 lux and 0.3 lux.

One key difference between HDR+ and Night Sight is longer exposure times for individual frames, allowing for lower noise levels. HDR+ uses short exposures to provide a minimum frame rate in the viewfinder image and instant image capture using zero-shutter-lag technology. Night Sight waits until after you press the shutter button before capturing images, which means users need to hold still for a short time after pressing the shutter, but achieves much cleaner images.

The longer per-frame exposure times could also result in motion blur caused by hand shake or by moving objects in the scene. This problem is solved by measuring motion in the scene and setting an exposure time that minimizes blur. Exposure times also vary based on a number of other factors, including whether the camera features OIS and the device motion detected by the gyroscope.

In addition to per-frame exposure, Night Sight also varies the number of frames that are captured and merged: 6 if the phone is on a tripod and up to 15 if it is handheld.
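Putting those rules together, the capture plan might be sketched roughly as follows. Only the frame counts (6 on a tripod, up to 15 handheld) and the per-frame ceilings of about 1 s on a tripod and 1/3 s handheld (mentioned elsewhere in this roundup) come from the articles; the motion-based scaling is an assumption standing in for the gyroscope and scene-motion measurements.

```python
def plan_night_sight(on_tripod, motion_score=0.0):
    """Choose a frame count and per-frame exposure for a Night Sight capture.
    Frame counts and exposure ceilings follow the article; the way motion
    shortens the exposure (motion_score >= 0) is an illustrative assumption."""
    if on_tripod:
        n_frames, ceiling_s = 6, 1.0
    else:
        n_frames, ceiling_s = 15, 1 / 3
    # More measured motion (gyro / scene analysis) -> shorter per-frame exposure.
    per_frame_s = ceiling_s / (1.0 + motion_score)
    return n_frames, per_frame_s

print(plan_night_sight(on_tripod=True))                    # (6, 1.0)
print(plan_night_sight(on_tripod=False, motion_score=2.0))
```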

Frame alignment and merging are additional challenges that you can read all about in detail on the Google Research Blog. Our science editor Rishi Sanyal also had a closer look at Night Sight and the Pixel 3’s other computational imaging features in this article.


 

The Illusion of Photography and the Miracle of Sight

27 Aug

The photographic process is a grand illusion from top to bottom. Think about it. Everything about the process is visual trickery. Photography provides a reasonable facsimile of real-life perception, and luckily, your brain’s visual cortex is very forgiving and willing to play along with this ruse.

Here’s what I mean.

Illusion of Photography CameraLensEye

Your brain is pretty smart; far smarter and more adaptable than we sometimes give it credit for. Human perception takes place on a completely different level than photography (or even videography). But photographic science and advanced camera functions do hold certain advantages over nature’s system. Here’s a comparison of the two systems.

Motion / Still Life

Your camera is able to capture slices of time and literally freeze motion in its tracks. Camera shutter speeds slice and dice life into instances of time lasting just thousandths of a second each. Only when we fail to set the shutter speed and ISO sensitivity properly are objects in motion recorded as a blur.

Illusion of Photography Motion Blur

Your eyes, on the other hand, have rarely seen anything absolutely still, unless it is a rock formation or building. Even then, our view is constantly changing simply because our body moves continually.

While your eyes capture thousands of frames each second, they process the images quite differently than your camera. They stream high-speed snapshots to your brain’s visual cortex – two at a time (right and left views) providing dimension and shape. And they do this all day, every day. No batteries or memory cards are required.

Your eyes shift and refresh their view thousands of times every second to paint complete three-dimensional moving scenes in your mind. This is perpetual “streaming” at the speed of light.

Video

The illusion of moving pictures (or movies) comes close to replicating what human eyes perceive as motion. The action portrayed in motion pictures is accomplished when single-frame images are flashed onto a screen sequentially at the same speed that they were recorded. The process works effectively to simulate what the human eye processes at much higher rates.

The major difference is the processing speed. Video codecs (shorthand for the coder/decoder software that compresses and decompresses footage) use industry-standard capture and playback speeds, measured in frames per second, designed to match the processing power of various playback systems. Videos are recorded and played back at speeds up to 60 fps to trick the eye into perceiving motion instead of seeing individual frames flickering by.
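A quick bit of arithmetic shows why higher frame rates read as smooth motion: each frame stays on screen only briefly.

```python
# How long each frame is displayed at common playback rates.
# At 60 fps a frame lasts under 17 ms, too brief for the eye to
# register individual frames as flicker.
def frame_duration_ms(fps):
    return 1000 / fps

for fps in (24, 30, 60):
    print(f"{fps} fps -> {frame_duration_ms(fps):.1f} ms per frame")
```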

Autofocus and Blurred Backgrounds

The camera focuses on a single plane or depth of field and blurs the rest of the picture. You have the option to automatically focus on all subjects in the scene or select specific pinpoint areas.

If you set the camera to autofocus, you must remember that the camera always seeks and focuses on the objects with the highest contrast ratio in the scene. To control this you may select between face detection, autofocus tracking, multiple focus points (zone focus) or overall scene settings to tell the camera your preference.

Camera focus is all about managing the blur; making the eye concentrate on a particular part of the scene.

Illusion of Photography Autofocus Blur

Your eyes don’t really see blurs at all. They automatically focus on the single subject of your attention and gradually defocus and separate the view of the non-subject areas. This is quite different from camera “bokeh.” Close one eye and view a scene in the room, then switch eyes and notice how the background shifts.

The human eye displaces subjects in the background while the camera attempts to blur them. We’ve been conditioned to accept photo blurs as if they are a part of real life, even though they aren’t!

Two Versus Three-Dimensionality

Single-lens cameras capture only two-dimensional images, with height and width. Items in focus are limited to a single defined “plane,” or distance from the camera. The dimension of depth is simulated by blurring objects that are not in perfect focus.

Your eyes never observe scenes in only two dimensions; they see every scene in three dimensions, through two converged horizontal viewpoints: your left and right eyes. Your eyes adjust and shift focal length almost instantaneously. Only recently has Hollywood caught on to the 3-D trick.

Dimension, like depth, is perceived visually by slightly defocusing and horizontally shifting the two scenes behind the object in focus. This differs significantly from the camera’s method of simply blurring and softening the background. While depth can be simulated, dimension cannot be. Dimension requires a process called parallax, a word derived from the French “parallaxe” meaning “fact of seeing wrongly.”

Depth of Field

The camera uses its single lens to capture subjects from a direct frontal view. With the camera, you can also determine how much of the scene you want in focus by managing the depth of field (DOF), blurring both the foreground and background for emphasis.

You can’t do this with your eyes. If you concentrate on an object close to you, pretty much everything behind the subject will automatically be defocused.
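The camera’s depth of field can even be put into numbers. The sketch below uses the standard thin-lens approximations, assuming a full-frame circle of confusion of 0.03 mm; real lenses and sensor formats will vary.

```python
# Standard thin-lens depth-of-field approximations (a sketch; the
# 0.03 mm circle of confusion is a common full-frame assumption).
def dof_limits(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Return the (near, far) limits of acceptable focus, in mm."""
    hyperfocal = focal_mm**2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    if subject_mm >= hyperfocal:
        far = float("inf")  # everything behind the subject is acceptably sharp
    else:
        far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far

# A 50mm lens at f/2.8 focused at 3 m keeps only ~0.6 m of the
# scene in acceptable focus; everything else blurs.
near, far = dof_limits(50, 2.8, 3000)
print(f"in focus from {near / 1000:.2f} m to {far / 1000:.2f} m")
```

Stopping down (raising the f-number) widens that zone, which is the control the paragraph above describes.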

Illusion of Photography DOF

Each of your two eyes sees that same subject from a slightly horizontally-offset angle, which is a very good thing! This overlapping, crisscross view allows you to see enough of the sides of each subject to sense dimension, judge distances, and safely navigate your way around obstacles. When the eye’s two views are combined, they provide a unique depth and dimension to your perception.

Try walking around when viewing the scene ONLY through your camera’s viewfinder and you’ll notice the difference.

Sphere of Focus

Camera lenses all have one thing in common: when they focus on an object at a measured distance from the lens, everything else in the scene at that same distance is also in focus. The optical nature of the spherical shape of the lens makes this happen. When you employ a wide-angle lens, you can see everything in the scene in near-perfect focus.

Illusion of Photography Sphere of Focus

The human eye is quite different. Our focus on a subject is actually limited to a very small radius of view, just 7 to 10° wide. Everything outside that window appears defocused; not blurred, just out of sharp focus.

While our peripheral vision spans nearly 180°, only a very tight circle of view appears fully focused. We perceive entire scenes because our eyes constantly shift, sending patches of focus to the visual cortex, which paints a momentary scene in our mind.

Try staring at one word on this screen. You’ll notice that unless your attention shifts slightly, the words on either side of that word aren’t really “in focus.” The real magic is that both of your eyes have this agility and they both work in perfect unison, viewing the same exact spot and shifting together at precisely the same moment.

Monochrome

All digital cameras are able to record images using only the luminosity channel, producing “black-and-white” images. Monochrome photographic images remove all chrominance (color) information and rely on luminance contrast alone to portray the scene.

Photography’s earliest roots are in black-and-white photography, as early film emulsions could capture only luminance (monochrome) values with their light-sensitive silver halide particles. Even color films used this same monochromatic process, adding color filters to capture the individual RGB wavelengths of light.

Illusion of Photography Monochrome

Your eyes have never experienced this phenomenon except in photographic reproduction. The rods and cones that make up the eye’s image receptors interpret every scene in full color, with red, green, and blue receptors performing this service for your vision.

This characteristic of photography is perhaps the most bizarre example of visual forgiveness. The eye’s rods (most receptive to the green frequencies of light) are best able to perceive forms and shapes under very low lighting, which is why identifying colors in low light is so difficult. Not coincidentally, the green channel of a color digital photograph captures the most realistic monochromatic information.

Zoom, Wide Angle, and Telephoto

You probably own a zoom lens, a fixed focal length telephoto, or a wide-angle lens for your camera. These lenses allow you to capture scenes either closer or farther away than your eyes typically see. Your eyes are “fixed” at a 1:1, “real-time” vantage point.

If you want to see a subject at a different distance, you have to adjust your personal distance to the subject or view the world through magnifying lenses like binoculars.

Resolution

Illusion of Photography Resolution

Here’s another area where photographic systems hold an advantage over human vision. When ultra-sharp lenses are coupled with high megapixel image sensors, the number of pixels available to publish a photo far exceeds the size and magnification capabilities of human vision. When pixels are displayed small enough to escape detection (roughly 100 per inch), image projection and reproduction sizes are nearly limitless.

Autofocus

Your camera can capture a scene in which everything is in near-perfect focus. From an object just feet away to a mountain five miles away, everything is sharp and clear. It is impossible for your eyes to view entire scenes in perfect focus, though photographic prints depend on the brain’s forgiving acceptance of this abnormal interpretation.

Your eyes very rarely maintain the same focus for any period of time. Your brain stays hungry for visual information, and your eyes know how to satisfy that appetite, shifting their attention rapidly to maintain focus on moving objects.

Try staring at this page for more than 15 seconds and you’ll probably notice your eyes shifting briefly before returning to the word you were reading. Your eyes and brain have an insatiable visual appetite and a boundless curiosity.

Pixels, Dots, and Spots

Illusion of Photography Pixel Dots Spots

And then there’s the whole pixel/halftone illusion itself. Your eyes register nature’s colors as continuous tones, colors with no steps or gradations, a feat we graphic illusionists have never been able to reproduce. Every image we reproduce has to be broken down into minuscule particles of color so small that human vision cannot readily identify them individually (I’ve exaggerated the pixel and halftone dot sizes for those who don’t know the trick).

Something to think about

For all the similarities between the camera and the human eye, there are just as many (if not more) differences.

But in spite of those differences, we would be much the poorer without the precision of the human eye and the features of the digital camera. Appreciate both systems for what they add to your perception of life.

The post The Illusion of Photography and the Miracle of Sight appeared first on Digital Photography School.


Digital Photography School

 
Comments Off on The Illusion of Photography and the Miracle of Sight

Posted in Photography

 

Nikon offers optional Dot Sight accessory to aid telephoto photography

27 Aug

Alongside the official unveiling of the Nikkor 500mm F5.6E PF ED, Nikon has announced an optional Dot Sight accessory to help telephoto photographers better track moving subjects. The DF-M1 makes it easier to aim a super-telephoto lens like the 500mm at a fast-moving, distant subject by presenting a wider field of view and an illuminated dot target that moves relative to the lens. The Dot Sight accessory will cost $175.

Press release:

NIKON RELEASES THE AF-S NIKKOR 500mm f/5.6E PF ED VR, A FIXED FOCAL LENGTH SUPER-TELEPHOTO LENS COMPATIBLE WITH THE NIKON FX FORMAT

Delivers Exceptional Agility that Makes Hand-Held Super-Telephoto Photography Enjoyable, as Well as Offering Superior Optical Performance and Functionality

MELVILLE, NY (AUGUST 23, 2018 AT 1:01 A.M. EDT) – Nikon Inc. is pleased to announce the release of the AF-S NIKKOR 500mm f/5.6E PF ED VR, a fixed focal length super-telephoto lens compatible with Nikon FX-format digital SLR cameras.

The AF-S NIKKOR 500mm f/5.6E PF ED VR is a high-performance, FX-format, super-telephoto lens that supports 500 mm focal length. The adoption of a Phase Fresnel (PF) lens element has significantly reduced the size and weight of the lens, making hand-held super-telephoto photography easier and more enjoyable.

With a maximum diameter of 106 mm and length of 237 mm, the AF-S NIKKOR 500mm f/5.6E PF ED VR, which weighs 1,460g (roughly the same weight as the AF-S NIKKOR 70-200mm f/2.8E FL ED VR), is significantly lighter than previous 500mm lenses, which typically weigh more than 3,000g. The AF-S NIKKOR 500mm f/5.6E PF ED VR is designed with consideration for dust- and drip-resistance, which, in addition to the fluorine coat applied to the front lens surface, allows greater agility when shooting.

The use of one PF lens element and three ED glass elements enables extremely sharp and detailed rendering that is compatible with high pixel-count digital cameras. In addition, the materials used in the new PF lens element have been developed effectively to reduce PF (diffraction) flare, allowing light sources to be reproduced in near-original colors. In combination with Nikon’s coating technologies, such as the Nano Crystal Coat, effective in controlling ghost and flare, extremely clear images are achieved.

AF speed has been increased by making the lens elements in the focusing group lighter. The AF-S NIKKOR 500mm f/5.6E PF ED VR is equipped with a VR mechanism that offers camera shake compensation equivalent to a 4.0-stop increase in shutter speed. The SPORT VR mode is especially effective when photographing fast-moving and unpredictable subjects such as wild birds, or in scenes such as sporting events. The stabilization of the image displayed in the viewfinder is also an effective feature for recording movies.

Additionally, the use of the Mount Adapter FTZ will allow the lens to be used with mirrorless cameras Nikon Z 7 and Nikon Z 6, also announced today. Users will be able to enjoy super-telephoto shooting at the 500 mm focal length with a system that is even more compact than ever before.

We are also planning to release the Dot Sight DF-M1, an accessory that is highly effective for super-telephoto photography. With super-telephoto shooting, the viewfinder presents a very narrow field of view, making it easy to lose track of the subject. The Dot Sight DF-M1 makes it easy to keep the intended subject within the frame, even if the subject exhibits sudden movement.

PF (Phase Fresnel) Lens Elements
The PF (Phase Fresnel) lens, developed by Nikon, effectively compensates for chromatic aberration by utilizing the photo diffraction phenomenon. It provides superior chromatic aberration compensation when combined with a normal glass lens. Compared to many general camera lenses that employ an optical system using the photorefractive phenomenon, a remarkably compact and lightweight body can be attained with fewer lens elements.

Primary Features

  • Significantly smaller and lighter with the adoption of a Phase Fresnel (PF) lens element, making 500 mm hand-held super-telephoto photography easier and more enjoyable
  • Designed with consideration to dust- and drip-resistance; fluorine coat applied to front lens surface, effectively repelling water droplets, grease, and dirt
  • Adoption of one PF lens element and three ED glass elements for extremely sharp and detailed rendering, compatible with high pixel-count digital cameras
  • Optical performance that is not compromised with the use of the TC-14E III AF-S teleconverter
  • Materials used in the new PF lens element effectively control PF (diffraction) flare
  • Ghost and flare effectively suppressed with the adoption of the Nano Crystal Coat, enabling clear images
  • AF speed increased by making lens elements in the focusing group lighter
  • Equipped with a VR mechanism that offers camera shake compensation equivalent to a 4.0-stop increase in shutter speed, in two modes: NORMAL and SPORT
  • Electromagnetic diaphragm mechanism adopted for extremely precise aperture control

Optional Accessories
We will release the Dot Sight DF-M1 (available separately), an accessory that is highly effective with super-telephoto photography. This accessory makes it easy to keep track of the intended subject, even if the subject exhibits sudden movement.

Price and Availability
The AF-S NIKKOR 500mm f/5.6E PF ED VR will be available September 13 for a suggested retail price (SRP) of $3,599.95*. The Dot Sight DF-M1 will be available for $174.95 SRP*. For more information on these and other Nikon products, please visit www.nikonusa.com.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Nikon offers optional Dot Sight accessory to aid telephoto photography

Posted in Uncategorized

 

High Sight launches the Mini portable cable camera system

14 Dec

Manufacturer of cable camera systems High Sight has unveiled the latest addition to its product lineup. The Mini System was designed with portability and ease of use in mind, but builds on High Sight’s experience building larger and more complex products. The unit is controlled via a button interface and can carry gimbals, such as the DJI Osmo, Gopro Karma Grip and similar models.

“The High Sight Mini has been a blast to create and will be a game changer.” said Kevin Brower, president and chief executive officer of High Sight. “The Mini has evolved into something more than we could’ve hoped for. With our ping pong mode, you can set it up and walk away, it’s like having an extra cameraman on set just continually getting great footage.”

The Mini uses speed and position sensing for smooth movement and has been developed to be fully autonomous. According to High Sight, this means the operator can focus on camera control, allowing for single-user operation where two users would normally be required.

The Mini is made from machined aluminum and weighs only 1.3 lbs (0.6 kg). It can carry a payload of 3.3 lbs (1.5 kg) and easily fits into a backpack.

The demo reel below will give you a better idea of the kind of shots that are possible with the company’s cable systems. And if you think the Mini could be a useful tool for shooting your next video, you can find more information on the High Sight website.

Press Release:

High Sight Mini Sets The Bar With Ultra-Portable Design And Smart Functionality

Features Fully Autonomous Mode, Whisper-Quiet Movement, and Reliable Performance. High Sight Launches New Product Allowing One-of-a-Kind Shots.

Salt Lake City, Utah, November 7th, 2017 – High Sight (highsightcam.com) cable camera systems is proud to launch the ultra-portable and fully autonomous Mini system. The new system was developed through years of experience building larger and more complex products. The Mini came about when the creator and owner of High Sight saw a need for a smaller version of the current product line.

“The High Sight Mini has been a blast to create and will be a game changer.” said Kevin Brower, president and chief executive officer of High Sight. “The Mini has evolved into something more than we could’ve hoped for. With our ping pong mode, you can set it up and walk away, it’s like having an extra cameraman on set just continually getting great footage.”

Innovative: The Mini was designed to be compact, easy to use, and intelligent. Through years of experience, High Sight developed the Mini to be fully autonomous. By eliminating the task of controlling the Mini, the operator can focus on live camera control. This functionality allows a single user to capture the same shot that would normally require two users. The Mini is great at capturing new and creative angles. Use it to shoot interesting b-roll, or set it to ping pong mode and capture great moments in your next BTS video.

  • Intelligent speed and position sensing for perfectly smooth movement
  • Fully Autonomous mode
  • Button interface for quick and easy operation
  • Compact size allows for maximum portability
  • ¼-20 mount to carry gimbals like the DJI Osmo, Gopro Karma Grip and many more
  • Machined aluminum for increased durability and protection
  • Made in the USA

Specs and Details:

  • Weight: 1.3 lbs. / .6 kg
  • Dimensions: 7.48″ long x 3.2″ wide x 2.3″ tall
  • Max Payload: 3.3 lbs. / 1.5 kg
  • Max Speed: 10 mph
  • Battery: Rechargeable lithium-ion battery

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on High Sight launches the Mini portable cable camera system

Posted in Uncategorized