
Posts Tagged ‘Boundaries’

Five ways Google Pixel 3 pushes the boundaries of computational photography

11 Oct

With the launch of the Google Pixel 3, smartphone cameras have taken yet another leap in capability. I had the opportunity to sit down with Isaac Reynolds, Product Manager for Camera on Pixel, and Marc Levoy, Distinguished Engineer and Computational Photography Lead at Google, to learn more about the technology behind the new camera in the Pixel 3.

One of the first things you might notice about the Pixel 3 is the single rear camera. At a time when we’re seeing companies add dual, triple, even quad-camera setups, one main camera seems at first an odd choice.

But after speaking to Marc and Isaac I think that the Pixel camera team is taking the correct approach – at least for now. Any technology that makes a single camera better will make multiple cameras in future models that much better, and we’ve seen in the past that a single camera approach can outperform a dual camera approach in Portrait Mode, particularly when the telephoto camera module has a smaller sensor and slower lens, or lacks reliable autofocus.

Let’s take a closer look at some of the Pixel 3’s core technologies.

1. Super Res Zoom

Last year the Pixel 2 showed us what was possible with burst photography. HDR+ was its secret sauce, and it worked by constantly buffering nine frames in memory. When you press the shutter, the camera essentially goes back in time to those last nine frames1, breaks each of them up into thousands of ‘tiles’, aligns them all, and then averages them.

Breaking each image into small tiles allows for advanced alignment even when the photographer or subject introduces movement. Blurred elements in some shots can be discarded, or subjects that have moved from frame to frame can be realigned. Averaging simulates the effects of shooting with a larger sensor by ‘evening out’ noise. And going back in time to the last 9 frames captured right before you hit the shutter button means there’s zero shutter lag.
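The buffer-break-align-average loop described above can be sketched as a toy model. This is our own illustrative Python, not Google's code: function names like `align_and_merge` are hypothetical, and the real pipeline works on raw sensor data with far more sophisticated alignment, weighting, and rejection logic.

```python
# Toy sketch of tile-based align-and-merge (a simplified caricature of HDR+).
# Frames are small grayscale images as 2D lists of numbers.

def sad(tile_a, tile_b):
    """Sum of absolute differences between two equal-sized tiles."""
    return sum(abs(a - b) for ra, rb in zip(tile_a, tile_b) for a, b in zip(ra, rb))

def get_tile(img, y, x, size):
    return [row[x:x + size] for row in img[y:y + size]]

def align_and_merge(frames, tile=2, search=1):
    """Align each frame's tiles to the first frame, then average them."""
    ref = frames[0]
    h, w = len(ref), len(ref[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            ref_tile = get_tile(ref, y, x, tile)
            acc = [[v for v in row] for row in ref_tile]
            for frame in frames[1:]:
                # Brute-force search a small window for the offset that best
                # matches the reference tile, allowing per-tile motion.
                best = min(
                    ((dy, dx) for dy in range(-search, search + 1)
                              for dx in range(-search, search + 1)
                     if 0 <= y + dy <= h - tile and 0 <= x + dx <= w - tile),
                    key=lambda d: sad(ref_tile, get_tile(frame, y + d[0], x + d[1], tile)),
                )
                moved = get_tile(frame, y + best[0], x + best[1], tile)
                for i in range(tile):
                    for j in range(tile):
                        acc[i][j] += moved[i][j]
            for i in range(tile):
                for j in range(tile):
                    out[y + i][x + j] = acc[i][j] / len(frames)
    return out
```

Averaging identical aligned tiles simply reproduces the scene; with real noisy frames, the same averaging suppresses noise while the per-tile search absorbs small movements.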

Like the Pixel 2, HDR+ allows the Pixel 3 to render sharp, low noise images even in high contrast situations. Click image to view the level of detail at 100%. Photo: Google

This year, the Pixel 3 pushes all this further. It uses HDR+ burst photography to buffer up to 15 images2, and then employs super-resolution techniques to increase the resolution of the image beyond what the sensor and lens combination would traditionally achieve3. Subtle shifts from handheld shake and optical image stabilization (OIS) allow scene detail to be localized with sub-pixel precision, since shifts are unlikely to be exact multiples of a pixel.

In fact, I was told the shifts are carefully controlled by the optical image stabilization system. “We can demonstrate the way the optical image stabilization moves very slightly” remarked Marc Levoy. Precise sub-pixel shifts are not necessary at the sensor level though; instead, OIS is used to uniformly distribute a bunch of scene samples across a pixel, and then the images are aligned to sub-pixel precision in software.

We get a red, green, and blue filter behind every pixel just because of the way we shake the lens, so there’s no more need to demosaic

But Google – and Peyman Milanfar’s research team working on this particular feature – didn’t stop there. “We get a red, green, and blue filter behind every pixel just because of the way we shake the lens, so there’s no more need to demosaic” explains Marc. If you have enough samples, you can expect any scene element to have fallen on a red, green, and blue pixel. After alignment, then, you have R, G, and B information for any given scene element, which removes the need to demosaic. That itself leads to an increase in resolution (since you don’t have to interpolate spatial data from neighboring pixels), and a decrease in noise since the math required for demosaicing is itself a source of noise. The benefits are essentially similar to what you get when shooting pixel shift modes on dedicated cameras.
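The "R, G, and B behind every pixel" idea can be illustrated with a toy model (our own, not Google's algorithm, and `accumulate` is a hypothetical name): shifting a Bayer sensor by whole pixels changes which color filter lands on each scene point, so enough aligned shifted frames give every output pixel direct R, G, and B samples with no demosaicing interpolation.

```python
# Illustrative sketch: accumulate per-channel samples from shifted Bayer frames.

BAYER = [['R', 'G'], ['G', 'B']]  # 2x2 RGGB mosaic, repeated across the sensor

def bayer_color(y, x):
    return BAYER[y % 2][x % 2]

def accumulate(scene, shifts):
    """scene[y][x] = (r, g, b) ground truth; shifts = per-frame (dy, dx) offsets.
    Returns, per pixel, a dict of channel -> averaged direct sample."""
    h, w = len(scene), len(scene[0])
    out = [[{} for _ in range(w)] for _ in range(h)]
    for dy, dx in shifts:
        for y in range(h):
            for x in range(w):
                # The filter color covering scene point (y, x) in this frame.
                c = bayer_color(y + dy, x + dx)
                value = scene[y][x]['RGB'.index(c)]
                out[y][x].setdefault(c, []).append(value)
    return [[{c: sum(v) / len(v) for c, v in px.items()} for px in row] for row in out]
```

With the four shifts (0,0), (0,1), (1,0), (1,1), every pixel collects all three channels directly, which is the same end result as pixel-shift modes on dedicated cameras.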

Left: normal wide-angle (28mm equiv.). Right: Super Res Zoom.

There’s a small catch to all this – at least for now: Super Res only activates at 1.2x zoom or more, not in the default ‘zoomed out’ 28mm equivalent mode. As expected, the lower your level of zoom, the more impressed you’ll be with the resulting Super Res images, and naturally the resolving power of the lens will be a limitation. But according to Isaac Reynolds, the claim is that you can get "digital zoom roughly competitive with a 2x optical zoom", and it all happens right on the phone.

The results I was shown at Google appeared to be more impressive than the example we were provided above, no doubt at least in part due to the extreme zoom of our example here. We’ll reserve judgement until we’ve had a chance to test the feature for ourselves.

Would the Pixel 3 benefit from a second rear camera? For certain scenarios – still landscapes for example – probably. But having more cameras doesn’t always mean better capabilities. Quite often ‘second’ cameras have worse low light performance due to a smaller sensor and slower lens, as well as poor autofocus due to the lack of, or fewer, phase-detect pixels. One huge advantage of Pixel’s Portrait Mode is that its autofocus doesn’t differ from normal wide-angle shooting: dual pixel AF combined with HDR+ and pixel-binning yields incredible low light performance, even with fast moving erratic subjects.

2. Computational Raw

The Pixel 3 introduces ‘computational Raw’ capture in the default camera app. Isaac stressed that when Google decided to enable Raw in its Pixel cameras, they wanted to do it right, taking advantage of the phone’s computational power.

Our Raw file is the result of aligning and merging multiple frames, which makes it look more like the result of a DSLR

“There’s one key difference relative to the rest of the industry. Our DNG is the result of aligning and merging [up to 15] multiple frames… which makes it look more like the result of a DSLR” explains Marc. There’s no exaggeration here: we know very well that image quality tends to scale with sensor size thanks to a greater amount of total light collected per exposure, which reduces the impact of the most dominant source of noise in images: photon shot, or statistical, noise.

The Pixel cameras can effectively make up for their small sensor sizes by capturing more total light through multiple exposures, while aligning moving objects from frame to frame so they can still be averaged to decrease noise. That means better low light performance and higher dynamic range than what you’d expect from such a small sensor.
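The noise benefit of merging follows directly from the statistics of photon shot noise, and a quick simulation shows it (our own illustration; numbers and names are ours, and `random.gauss` approximates the Poisson distribution of photon arrival for reasonably large signals):

```python
import random

random.seed(0)

# Shot noise is Poisson: for a mean signal of S photons, the standard deviation
# is sqrt(S). Averaging N aligned frames leaves the mean unchanged but cuts the
# noise by sqrt(N) -- the same benefit as collecting N times the light.

def sample_frames(mean_signal, n_frames, n_pixels=10000):
    frames = []
    for _ in range(n_frames):
        # Gaussian approximation to Poisson(mean_signal).
        frames.append([random.gauss(mean_signal, mean_signal ** 0.5)
                       for _ in range(n_pixels)])
    return frames

def noise(pixels):
    m = sum(pixels) / len(pixels)
    return (sum((p - m) ** 2 for p in pixels) / len(pixels)) ** 0.5

single = sample_frames(100, 1)[0]
stack = sample_frames(100, 15)
merged = [sum(col) / 15 for col in zip(*stack)]

print("single-frame noise:", round(noise(single), 2))   # expected near sqrt(100) = 10
print("15-frame merged noise:", round(noise(merged), 2))  # expected near 10 / sqrt(15)
```

A fifteen-frame merge should therefore cut shot noise by a factor of roughly 3.9, which is the basis of the article's claim that the merged Raw competes with much larger sensors.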

Shooting Raw allows you to take advantage of that extra range: by pulling back blown highlights and raising shadows otherwise clipped to black in the JPEG, and with full freedom over white balance in post thanks to the fact that there’s no scaling of the color channels before the Raw file is written.

Pixel 3 introduces in-camera computational Raw capture.

Such ‘merged’ Raw files represent a major threat to traditional cameras. The math alone suggests that, solely based on sensor size, 15 averaged frames from the Pixel 3 sensor should compete with APS-C sized sensors in terms of noise levels. There are more factors at play, including fill factor, quantum efficiency and microlens design, but needless to say we’re very excited to get the Pixel 3 into our studio scene and compare it with dedicated cameras in Raw mode, where the effects of the JPEG engine can be decoupled from raw performance.

While solutions do exist for combining multiple Raws from traditional cameras with alignment into a single output DNG, having an integrated solution in a smartphone that takes advantage of Google’s frankly class-leading tile-based align and merge – with no ghosting artifacts even with moving objects in the frame – is incredibly exciting. This feature should prove highly beneficial to enthusiast photographers. And what’s more – Raws are automatically uploaded to Google Photos, so you don’t have to worry about transferring them as you do with traditional cameras.

3. Synthetic Fill Flash

‘Synthetic Fill Flash’ adds a glow to human subjects, as if a reflector were held out in front of them. Photo: Google

Often a photographer will use a reflector to light the faces of backlit subjects. Pixel 3 does this computationally. The same machine-learning based segmentation algorithm that the Pixel camera uses in Portrait Mode is used to identify human subjects and add a warm glow to them.

If you’ve used the front facing camera on the Pixel 2 for Portrait Mode selfies, you’ve probably noticed how well it detects and masks human subjects using only segmentation. By using that same segmentation method for synthetic fill flash, the Pixel 3 is able to relight human subjects very effectively, with believable results that don’t confuse and relight other objects in the frame.

Interestingly, the same segmentation methods used to identify human subjects are also used for front-facing video image stabilization, which is great news for vloggers. If you’re vlogging, you typically want yourself, not the background, to be stabilized. That’s impossible with typical gyro-based optical image stabilization. The Pixel 3 analyzes each frame of the video feed and uses digital stabilization to steady you in the frame. There’s a small crop penalty to enabling this mode, but it allows for very steady video of the person holding the camera.

4. Learning-based Portrait Mode

The Pixel 2 had one of the best Portrait Modes we’ve tested despite having only one lens. This was due to its clever use of split pixels to sample a stereo pair of images behind the lens, combined with machine-learning based segmentation to understand human vs. non-human objects in the scene (for an in-depth explanation, watch my video here). Furthermore, dual pixel AF meant robust performance even with moving subjects in low light – great for constantly moving toddlers. The Pixel 3 brings some significant improvements despite lacking a second lens.

According to computational lead Marc Levoy, “Where we used to compute stereo from the dual pixels, we now use a learning-based pipeline. It still utilizes the dual pixels, but it’s not a conventional algorithm, it’s learning based”. What this means is improved results: more uniformly defocused backgrounds and fewer depth map errors. Have a look at the improved results with complex objects, where many approaches are unable to reliably blur backgrounds ‘seen through’ holes in foreground objects:

Learned result: background objects, especially those seen through the toy, are consistently blurred, as are objects around the peripheries of the image. Learned depth map: note how objects in the background (blue) aren’t confused as being closer to the foreground (yellow), as they are in the stereo-only depth map.
Stereo-only result: background objects, especially those seen through the toy, aren’t consistently blurred. Stereo-only depth map from dual pixels: note how some elements in the background appear to be closer to the foreground than they really are.

Interestingly, this learning-based approach also yields better results with mid-distance shots where a person is further away. Typically, the further away your subject is, the smaller the difference in stereo disparity between subject and background, making accurate depth maps difficult to compute given the small 1mm baseline of the split pixels. Take a look at the Portrait Mode comparison below, with the new algorithm on the left vs. the old on the right.

Learned result: the background is uniformly defocused, and the ground shows a smooth, gradual blur. Stereo-only result: note the sharp railing in the background, and the harsh transition from in-focus to out-of-focus in the ground.
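The falloff of disparity with distance is easy to put in rough numbers. The ~1mm baseline comes from the article; the 4.4mm focal length is an assumed, typical smartphone value used purely for illustration, and `disparity_um` is our own name:

```python
# Approximate stereo disparity at the sensor plane:
#   disparity = baseline * focal_length / subject_distance

def disparity_um(baseline_mm, focal_mm, distance_m):
    """Disparity in micrometres for a subject at the given distance."""
    return baseline_mm * focal_mm / (distance_m * 1000.0) * 1000.0

for metres in (0.5, 1.0, 2.0, 4.0):
    # Assumed values: 1mm split-pixel baseline, 4.4mm focal length.
    print(metres, "m ->", round(disparity_um(1.0, 4.4, metres), 2), "um")
```

Disparity halves each time subject distance doubles, so by a few metres it is on the order of a single photosite and hard to measure reliably – which is why the learned pipeline helps most on mid-distance subjects.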

5. Night Sight

Rather than simply rely on long exposures for low light photography, ‘Night Sight’ utilizes HDR+ burst photography to take usable photos in very dark situations. Previously, the Pixel 2 would never drop below a 1/15s shutter speed, simply because it needed faster shutter speeds to maintain that 9-frame buffer with zero shutter lag. That meant the Pixel 2 could, in very low light, effectively sample 0.6 seconds (9 x 1/15s) – but sometimes even that isn’t enough to get a usable photo.

The camera will merge up to 15 frames… to get you an image equivalent to a 5 second exposure

The Pixel 3 now has a ‘Night Sight’ mode which sacrifices the zero shutter lag and expects you to hold the camera steady after you’ve pressed the shutter button. When you do so, the camera will merge up to 15 frames, each with shutter speeds as low as, say, 1/3s, to get you an image equivalent to a 5 second exposure. But without the motion blur that would inevitably result from such a long exposure.

Put simply: even though there might be subject or handheld movement over the entire 5s span of the 15-frame burst, many of the 1/3s ‘snapshots’ in that burst are likely to still be sharp, albeit possibly displaced relative to one another. The tile-based alignment of Google’s ‘robust merge’ technology, however, can handle inter-frame movement by aligning objects that have moved and discarding tiles of any frame that show too much motion blur.
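The exposure arithmetic is simple – 15 frames × 1/3s ≈ 5s of total light – and the discard-the-blurred-tiles idea can be caricatured in a few lines (our own toy model with hypothetical names; the real robust merge is far more sophisticated):

```python
# Toy sketch: for one tile position, keep only the sharpest frames of the burst,
# judged by a crude gradient-energy sharpness proxy, then average them.

def gradient_energy(tile):
    """Sum of squared horizontal differences -- blurred tiles score low."""
    return sum((row[i + 1] - row[i]) ** 2 for row in tile for i in range(len(row) - 1))

def merge_sharp(tiles, keep_ratio=0.5):
    """tiles: the same tile cut from each frame of the burst.
    Discard the blurriest frames for this tile, average the rest."""
    ranked = sorted(tiles, key=gradient_energy, reverse=True)
    kept = ranked[:max(1, int(len(ranked) * keep_ratio))]
    h, w = len(kept[0]), len(kept[0][0])
    return [[sum(t[y][x] for t in kept) / len(kept) for x in range(w)] for y in range(h)]
```

Because the decision is made per tile, a frame that is sharp in one region can still contribute there even if another region of it was motion-blurred.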

Have a look at the results below, which also shows you the benefit of the wider-angle, second front-facing ‘groupie’ camera:

Left: normal front-camera ‘selfie’. Right: Night Sight ‘groupie’ with the wide-angle front-facing lens.

Furthermore, Night Sight mode takes a machine-learning based approach to auto white balance. It’s often very difficult to determine the dominant light source in such dark environments, so Google has opted to use learning-based AWB to yield natural looking images.

Final thoughts: simpler photography

The philosophy behind the Pixel camera – and for that matter the philosophy behind many smartphone cameras today – is one-button photography. A seamless experience without the need to activate various modes or features.

This is possible thanks to the computational approaches these devices embrace. The Pixel camera and software are designed to give you pleasing results without requiring you to think much about camera settings. Synthetic fill flash activates automatically with backlit human subjects, and Super Resolution automatically kicks in as you zoom.

At their best, these technologies allow you to focus on the moment

Motion Photos turns on automatically when the camera detects interesting activity, and Top Shot now uses AI to automatically suggest the best photo of the bunch, even if it’s a moment that occurred before you pressed the shutter button. Autofocus typically locks onto human subjects very reliably, but when you need to specify your subject, just tap on it and ‘Motion Autofocus’ will continue to track and focus on it. Perfect for your toddler or pet.

At their best, these technologies allow you to focus on the moment, perhaps even enjoy it, and sometimes even help you to capture memories you might have otherwise missed.

We’ll be putting the Pixel 3 through its paces soon, so stay tuned. In the meantime, let us know in the comments below what your favorite features are, and what you’d like to see tested.


1In good light, these last 9 frames typically span the last 150ms before you pressed the shutter button. In very low light, it can span up to the last 0.6s.

2We were only told ‘say, maybe 15 images’ in conversation about the number of images in the buffer for Super Res Zoom and Night Sight. It may be more, it could be less, but we were at least told that it is more than 9 frames. One thing to keep in mind is that even if you have a 15-frame buffer, not all frames are guaranteed to be usable. For example, if in Night Sight one or more of these frames have too much subject motion blur, they’re discarded.

3You can achieve a similar super-resolution effect manually with traditional cameras, and we describe the process here.

Articles: Digital Photography Review (dpreview.com)

 

3 Tips for Setting Boundaries to Avoid Burnout in Photography

11 Jan

Whether you’re a professional photographer or a hobbyist, there have likely been times when you’ve felt ready to throw in the towel and quit photography forever. Those feelings are normal and are usually the result of added stress from things like a looming deadline or a project that you don’t feel prepared to tackle.

Often, once the added stress subsides, so does the desire to quit photography. However, if that stress becomes chronic it can cause physical, emotional, and mental burnout that’s much more difficult to bounce back from.


One of the most effective things that photographers of all levels can do to prevent and avoid burnout is to set appropriate boundaries. In this article, we’ll discuss a few different boundaries that you may want to consider setting now in order to protect yourself from future burnout.

1. Set Office Hours

Between smartphones and wi-fi access, it’s easy to be available all the time. It’s easy to respond to a quick text while you’re on vacation. Replying to a midnight email when you’re already awake doesn’t seem like a big deal.

On one hand, being accessible to your clients (whether paid or unpaid) can make them feel appreciated and enhance the client experience. On the other hand, it can also lead to exhaustion and burnout because it always feels like you’re “on duty”.

It’s actually okay not to be available 24/7. In fact, it’s more than just okay. Setting boundaries in terms of availability is crucial to a healthy balance between your work and your personal life.


Consider setting business/office hours, and do your best to stick to them! The client experience is primarily driven by the quality of the service they receive rather than the speed with which they receive it. Most current or potential clients will be satisfied to receive a response within 24-48 hours.

Just because you happen to see a midnight email pop through doesn’t mean you need to respond to it right away! If you’ve always been immediately accessible and are concerned about making this transition, it’s easy to set an auto email or Facebook Messenger reply to let potential clients know that you’ve received their inquiry and when they can expect a response back from you.


2. Build in Downtime

I know as well as anyone that it can be really difficult to build downtime into your schedule because doing so often feels like you’re either losing opportunities or income. However, when you’re very busy with photography, it’s important to remember to schedule two kinds of downtime in order to prevent burnout – processing time, and days off.

In the spring, summer, and fall, it can be tempting to book photo sessions every night and weekend. It’s not a bad thing to fill your schedule, but don’t forget that your work generally isn’t done once you leave the session itself. Most sessions require some degree of processing time, which could include culling, editing, social media posts, communication with your clients, and arranging for delivery.

When you’re creating your calendar of availability, don’t forget to factor in all the time you’ll spend after the actual session itself and build in that processing time (or plan to outsource it) accordingly.


In addition to processing time, I have discovered that there’s tremendous value in blocking out a day or two on my calendar as personal days, even during my busiest season. For me, this is so important both in terms of self-care and also in terms of prioritizing and preserving relationships with my family.

Although I started doing this in order to save my own sanity, I’ve discovered that setting and communicating boundaries in terms of my availability has been helpful in other ways as well. Potential clients tend to book more quickly than they used to because they know that my availability is limited. I also receive far fewer last minute requests to reschedule to a different date or time for the same reason.


3. Communicate Your Timeline

Another small thing that can greatly reduce your stress and frustration is to communicate your timeline with your clients up front and let them know what they can expect in regards to receiving their images.

Make sure that this timeline is realistic. Factor in all scheduled sessions, your post-session processing time, and your scheduled downtime. By doing so, you’ll be able to give clients a more realistic timeline for receiving their images, while also decreasing the number of all-night editing sessions for you.


Obviously, the timeline for a professional photographer with five weddings in their queue is going to be totally different than a hobbyist photographer taking photos of a friend’s children. However, you don’t know who a client has worked with in the past, or what their expectations are as they enter into a session, which is why it’s so important to clearly communicate your timeline from the beginning!

Conclusion

Do you have any other advice for setting boundaries to avoid burnout in photography? Have you experienced it? What did you do to prevent it from happening again? Please share your thoughts in the comments below!

The post 3 Tips for Setting Boundaries to Avoid Burnout in Photography by Meredith Clark appeared first on Digital Photography School.


Digital Photography School

 

US Megaregions: Algorithm Redefines Boundaries of Metropolitan Areas

07 Dec

[ By WebUrbanist in Culture & History & Travel. ]


A new geographical study of the United States reveals the functional boundaries of megalopolises around the country, defining them by usage rather than arbitrary political borders. Unlike gerrymandered districts or state lines, these sprawling areas are rooted in data analytics rather than historical accident.


Historical geographer Garrett Dash Nelson teamed up with urban analyst Alasdair Rae to publish a paper using commuting information and computational algorithms. Studying over 4,000,000 commutes, they traced interconnections between economically connected points and reported the results in An Economic Geography of the United States: From Commutes to Megaregions.


Taking it a step further, the authors also devised names for various megaregions extrapolated from the data – while semi-subjective, they start to give a sense of the real shape of metropolitan zones (and reveal areas where few residents and vast distances make it hard to define or confine regions).


Some cities at the heart of various sub-regions are not surprising — San Francisco and Los Angeles were givens — but others may be new to some people, like Fresno, California. Many cities trace influence across state borders, like Minneapolis into Wisconsin or New York City into effectively every adjacent state. Some overlap while others are isolated, especially in the west.


In the end, this is not a definitive way to look at geography within the Lower 48, but it does start to push the observer to rethink conventional regions of influence and defined borders. From the abstract: “The emergence in the United States of large-scale ‘megaregions’ centered on major metropolitan areas is a phenomenon often taken for granted in both scholarly studies and popular accounts of contemporary economic geography. We compare a method which uses a visual heuristic for understanding areal aggregation to a method which uses a computational partitioning algorithm, and we reflect upon the strengths and limitations of both. We discuss how choices about input parameters and scale of analysis can lead to different results, and stress the importance of comparing computational results with ‘common sense’ interpretations of geographic coherence.”
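A drastically simplified version of the grouping idea can be sketched with a weighted commute graph (this is our own illustration, not the paper's partitioning algorithm; place names and the threshold rule are invented for the example):

```python
# Toy sketch: treat places as nodes and commute counts as weighted edges, then
# group places whose mutual flow exceeds a threshold into the same "region"
# using union-find connected components.

def megaregions(flows, threshold):
    """flows: dict[(place_a, place_b)] = commuter count.
    Returns a list of sets, each set being one connected region."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for (a, b), commuters in flows.items():
        find(a), find(b)  # register both nodes even if the edge is weak
        if commuters >= threshold:
            union(a, b)

    groups = {}
    for node in list(parent):
        groups.setdefault(find(node), set()).add(node)
    return list(groups.values())
```

The real study works on over four million commutes and compares a visual heuristic against a computational partitioning algorithm; this sketch only shows why threshold and scale choices change which regions emerge.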



WebUrbanist

 

Nikon’s New D5 and D500 Push the Boundaries of DSLR

09 Jan

Nikon D5 and D500 Push the Boundaries of DSLR

CES 2016 saw the announcement of two important DSLRs from Nikon, including an update to its flagship line, as well as an almost mythical product many had given up hope of ever seeing: a true D300 replacement.

The newly announced D5 is Nikon’s top-of-the-line professional DSLR, with a 20.8MP full frame sensor capable of shooting at up to 12fps with AF and 14fps without (with the mirror locked up). The headline feature, though, is arguably the new 153 point AF system with 99 cross-sensors. AF tracking with this new system will also benefit from the doubling in resolution of the RGB metering sensor used for scene analysis, and the D5 is the first Nikon camera capable of 4K video.

The biggest news though may be the long-awaited replacement of the D300S. The 20.9MP APS-C D500 is Nikon’s ‘best enthusiast DX offering’, and the term ‘enthusiast’ might be an understatement. With continuous shooting speeds of 10 fps and a 200-shot buffer for Raw images, the camera is aimed squarely at action and fast-paced photographers who don’t mind the smaller sensor, or even benefit from its extra reach. It features the same 153-point AF system and 180k-pixel RGB metering sensor as the D5, along with the EXPEED 5 processor. It can also capture 4K/UHD video and features ‘SnapBridge’, a constant connection to a smartphone using Bluetooth.

Join us as we take a closer look at the technologies inside these cameras.*


* Some of the information in these slides come from Nikon’s technology digest on the D500, here.


Let’s start with the AF module, which is shared between both the D5 and D500. Here is the Multi-Cam 20K in all its glory. It’s a major step up from the Multi-Cam 3500FX module, variants of which were found in the D4s, D810, and D750. Up from 51 total AF points with 15 central cross-sensors, the module in the D5 and D500 offers 153 phase-detect points with 99 cross-sensors spread across much of the frame. 

The improvements don’t stop there though: the module has its own dedicated processor, to deal with the computationally intensive information coming from 153 AF points cross-referenced with the scene analysis system (more on that later). The center AF point is now sensitive down to -4 EV. All 152 other points are sensitive down to -3 EV, much like the D750 and D7200, albeit now with an even wider spread of points.

If Nikon’s claims are true, we can expect formidable AF performance in low light from the D5 and D500 – possibly the best from any DSLR. Although we’ve previously found Sony’s a7S to focus at nearly -5 EV, its contrast-detect AF, and the associated hunting, made it quite slow in practice. -4 EV phase-detect AF on a DSLR should be seriously impressive because it will likely be far more decisive than mirrorless, contrast-based systems. Additionally, cross-type sensors tend to perform better in low light and with low-contrast subjects: they can make focus measurements from detail that has either a horizontal or a vertical component. In low light or backlit situations, where lowered contrast already makes it difficult to distinguish subject detail, sensors looking along multiple axes for detail to ‘lock on’ to simply have a higher chance of success than sensors that can only ‘see’ detail along a single axis.
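The axis-sensitivity argument can be illustrated with a toy contrast measure (our own simplification with invented names; real phase-detect sensors compare the phase of image pairs rather than summing contrast, but the directional limitation is the same):

```python
# Toy illustration: a line-type AF sensor only detects detail along one axis;
# a cross-type sensor looks along both and takes the stronger signal.

def contrast_x(img):
    """Detail measured along rows (left-right differences)."""
    return sum(abs(row[i + 1] - row[i]) for row in img for i in range(len(row) - 1))

def contrast_y(img):
    """Detail measured along columns (up-down differences)."""
    return sum(abs(img[j + 1][i] - img[j][i])
               for j in range(len(img) - 1) for i in range(len(img[0])))

def cross_signal(img):
    return max(contrast_x(img), contrast_y(img))

# A subject made of horizontal stripes: constant along rows, varying down columns.
stripes = [[0, 0, 0], [9, 9, 9], [0, 0, 0]]

print(contrast_x(stripes))    # a sensor looking only along this axis finds nothing
print(contrast_y(stripes))    # the perpendicular axis sees the edges
print(cross_signal(stripes))  # a cross-type sensor succeeds either way
```

Lower the stripe amplitude (as low light and backlighting do) and the single-axis measurement drops to zero long before the cross measurement does.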


Here’s the spread of AF points across the frame in the D5. The new AF module appears to provide more AF coverage across the frame than any previous Nikon full-frame camera (and likely any full-frame DSLR), though not quite as much coverage of phase-detect points as Sony’s recent Alpha 7R II mirrorless full-frame.

55 of the points are user-selectable, indicated by squares. The AF points indicated by dots are essentially assist points, used by the camera if your subject moves to or simply happens to fall in between the user-selectable points. What makes these assist points particularly useful in a Nikon? Nikon’s industry-leading 3D tracking, which’ll select any one of them for you automatically if your subject happens to move, or you recompose, such that it falls under one of these assist points (in AF-C ‘Auto’ and ‘3D’ modes, that is). The video below shows how 3D tracking can be used on a D750 to precisely track an eye, so those wondering how 153 points might be useful, well, imagine this sort of performance but even more precise, with wider AF point coverage.

35 of these 55 points are cross-type: the outermost two sections of 10 points each as well as the central section of 15. This is more clearly demarcated in the next slide.


Here’s what you get by putting the designed-for-full-frame Multi-Cam 20K module inside the APS-C D500. The AF points stretch out to the very edges of the frame. Red points indicate cross-sensors. While Canon’s nearest competitor, the 7D Mark II, comes quite close to this level of coverage – with all cross-sensors to boot – it doesn’t quite match it.

But it’s not even these headline features that excite us the most. It’s details such as the addition of an automated system for applying AF fine tune that have really caught our eye. We’ve written before about how mirrorless cameras, with their direct measurement of focus (rather than a secondary sensor acting as a proxy), tend to be more accurate when it comes to fine focusing, especially when using fast lenses. However, we’re not alone in proposing the idea of using a DSLR’s often slow, but highly precise, live view autofocus to help correct the cumulative errors that can undermine dedicated-sensor phase-detection systems. Patents have been issued, yet this is the first time we’ve seen the idea implemented in a final product. Automating the process means far more photographers may actually calibrate their lenses for more accurate focus. Furthermore, the reality of DSLR AF is that the optimal calibration values can depend on lighting, environmental factors, the wide or tele end of a zoom, and subject distance; automating the process will realistically allow users to calibrate more often for any given scenario. Sadly, there’s no indication that calibration values can be saved for different focal lengths or subject distances (a la Sigma lenses via their USB dock), nor is there any mention of higher-precision central points like those that give the latest Canon cameras’ central AF point nearly mirrorless levels of precision.

We’ve not yet had a chance to use the D5/D500’s automated AF fine tune, but you can be sure it’ll be one of the first things we try when one gets into our studio.


Remember that ‘scene analysis system’ the AF system cross-references with information from the 153 AF points? It’s enabled by essentially a whole separate image sensor in the DSLR whose sole job is to analyze the scene to understand how to expose and focus it. Now with 180,000 pixels in the D5 and D500, this sensor has doubled in resolution compared to the D4s, D810, and D750.

Confused by how this works? Let’s break it down. Your smartphone or mirrorless camera projects light from the lens directly onto the imaging sensor, which can ‘see’ the scene to focus and expose it properly, even find faces or other subjects and track them no matter where they move to in the frame. DSLRs have it much tougher – all the light entering the lens is being diverted either upward to the optical viewfinder, or downward to a dedicated AF module with its phase-detect sensors that understand only distance. Some of that light going to the viewfinder is itself diverted to a metering sensor, which determines appropriate exposure. Some time back, DSLR manufacturers replaced this rudimentary metering sensor with an actual RGB 2D array or, essentially, an image sensor.

While years ago this image sensor started at a measly 1,005 pixels in the D300, it did enable rudimentary subject tracking (‘3D tracking’ in Nikon terms), since the sensor provided some color and spatial information about the subject underneath any AF point, which the camera could combine with subject distance from the phase-detect AF sensors to understand where your subject of interest is at any given moment. Today, cameras like the D750 and D810 provide uncanny subject tracking with their 91,000-pixel metering sensors, able in many cases to track objects as specific as a human eye. Nikon DSLRs are the only DSLRs we’ve tested to date that are capable of the class-leading tracking precision you see in the videos linked above (Canon’s newer DSLRs do well with distant subjects well isolated in depth, but lag behind in more demanding applications requiring higher precision). Hence, a doubling in metering sensor resolution is likely to further Nikon’s lead in this arena. Metering also benefits from the increased resolution: as the flowchart above indicates, features like face exposure, fill-flash, Active D-Lighting, and highlight-weighted metering should all gain accuracy.
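To make the idea concrete, here is a deliberately tiny toy model, our own illustration and in no way Nikon’s actual algorithm, of how color information from a metering sensor plus distance from the AF module can re-identify a subject among AF points:

```python
# Toy illustration (our own, not Nikon's algorithm) of color-aware subject
# tracking: given a reference color sampled under the starting AF point and
# its phase-detect distance, pick whichever AF point now best matches both.

def track_subject(af_points, reference_color, reference_distance):
    """af_points: list of (point_id, (r, g, b), distance_m) tuples."""
    def score(point):
        _, color, distance = point
        # Squared color difference against the remembered subject color
        color_err = sum((a - b) ** 2 for a, b in zip(color, reference_color))
        # Distance agreement acts as a strong tiebreaker
        distance_err = (distance - reference_distance) ** 2
        return color_err + 1000 * distance_err
    return min(af_points, key=score)[0]
```

Even this crude scoring shows why more color resolution helps: with finer sampling, the remembered “subject color” becomes more distinctive and harder to confuse with the background.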

Nikon D5 and D500 Push the Boundaries of DSLR


So what exactly does this 180,000-pixel RGB metering sensor ‘see’ such that it can aid the camera in finding faces and tracking subjects? We’ve taken the liberty of doing some guesswork to simulate a ‘worst case’ representation of how a 180k-pixel sensor might ‘see’ a typical scene being photographed.

If we assume that the 180k figure refers to the total number of red, green and blue pixels, then we can surmise that there are only, at best, 60k pixels of true spatial information for any given color. For a 3:2 aspect ratio, that’s about 300×200 pixels. So we’ve taken an image, reduced it to 300×200, then blown it back up for ease of viewing. That’s what you see above.
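That back-of-envelope arithmetic is easy to verify. The sketch below assumes, as we did above, that the quoted counts are totals across all three colors and the array is 3:2:

```python
# Back-of-envelope check of the per-channel resolution figures, assuming the
# quoted pixel counts are totals across R, G and B and the sensor is 3:2.
import math

def per_channel_dims(total_pixels, aspect=(3, 2)):
    """Approximate width x height of one color channel's spatial grid."""
    per_channel = total_pixels / 3  # one third of the RGB total
    w = math.sqrt(per_channel * aspect[0] / aspect[1])
    h = per_channel / w
    return round(w), round(h)

print(per_channel_dims(180_000))  # D5/D500: (300, 200)
print(per_channel_dims(91_000))   # D750/D810: (213, 142)
print(per_channel_dims(2_016))    # older DX bodies: (32, 21)
```

The same formula reproduces the 213×142 and 55×37-ish figures used for the comparison images later in this slideshow.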

In reality, the metering sensor likely ‘sees’ a bit more resolution, since the above represents only the spatial resolution of any one color channel. Even so, you can get an idea of how the sensor can detect faces, and even understand what was underneath your selected AF point when you initiated focus, so that it can track your subject even if it moves to a position underneath a different AF point. With such increases in the resolution of the scene analysis system, we wouldn’t be surprised if DSLRs were one day capable of eye detection. And while we fully expect the D5/D500 to be capable of tracking an eye, they’ll only do so in ‘3D tracking’ mode once you’ve ‘told’ the camera where the eye of interest is by initiating focus with your focus point over it. We’ll be curious to see if automatic face detection in ‘Auto’ area mode prioritizes eyes.

Nikon D5 and D500 Push the Boundaries of DSLR


Compare the last image to this one: a 213×142 pixel representation of the same image, simulating the spatial resolution of any one color channel of the 91,000-pixel RGB metering sensor in previous full-frame Nikon DSLRs. It’s not hard to imagine how, even with this level of understanding of a scene, previous Nikon full-framers were able to track quite well. But every bit of resolution helps increase tracking precision, so while this image isn’t a huge step down from the 180k-pixel representation, there’s still a significant difference.

And remember, Nikon already led the industry with its previous 91,000-pixel RGB sensor, performing better at subject tracking than even the Canon 5DS and 7D Mark II with their 150,000-pixel RGB+IR metering sensors. Hence, we expect the doubling in metering sensor resolution to further widen the gap between Nikon and all other DSLRs, potentially making the Nikon platform the best for applications that benefit from continuous subject tracking (barring any missteps on Nikon’s part). Continuous eye tracking on a Sony a7R II is still likely to give the D5/D500 a run for its money, but general tracking of any subject, even to aid ‘focus-and-recompose’ by having the camera automatically select an appropriate AF point as you recompose, will likely remain unparalleled on these Nikons. 3D tracking’s ability to combine scene analysis with the distance information reported by every AF point on the phase-detect sensor makes for subject tracking that I, personally, find indispensable when shooting candid portraits or weddings and events in a more photojournalistic style: I ‘define’ my subject by initiating focus on it, and the camera retains focus no matter how I recompose or where the subject moves, as long as I keep the shutter button half-depressed.

Nikon D5 and D500 Push the Boundaries of DSLR


The 180k-pixel metering sensor is a huge step up from previous DX offerings from Nikon, which only featured – at best – a 2,016-pixel RGB metering sensor. The 90-fold increase in metering sensor resolution should bring a level of subject tracking to the DX format never before seen.

Above is a 55×37 pixel representation of our previous image, and this time it’s a sort of ‘best case’ representation of what the scene analysis system in cameras like the D7200 ‘saw’. Instead of showing what any one color channel sees, we’ve shown what 2k pixels in total looks like, since one-third of that resolution is a pixelated, unintelligible 32×21-pixel mess (from this forum discussion). In other words, the image above represents only a 30x drop in resolution compared to our 180k-pixel representation, and so likely underestimates the improvement the D500’s scene analysis system should show over previous DX offerings (which still performed surprisingly well given their low resolution metering sensors).
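For the curious, mock-ups like these are straightforward to reproduce with a plain block-average downsample (this is our own visualization method, not anything the camera does):

```python
# Minimal sketch of how these mock-ups can be produced: block-average a
# full-resolution grayscale image (a list of rows) down to the metering
# sensor's per-channel grid, e.g. 55x37, before scaling back up for viewing.

def downsample(img, out_w, out_h):
    in_h, in_w = len(img), len(img[0])
    out = []
    for y in range(out_h):
        row = []
        for x in range(out_w):
            # Average the source block that maps onto this output pixel
            y0 = y * in_h // out_h
            y1 = max((y + 1) * in_h // out_h, y0 + 1)
            x0 = x * in_w // out_w
            x1 = max((x + 1) * in_w // out_w, x0 + 1)
            block = [img[j][i] for j in range(y0, y1) for i in range(x0, x1)]
            row.append(sum(block) // len(block))
        out.append(row)
    return out
```

Scaling the result back up with nearest-neighbor (so each coarse pixel becomes a visible square) gives exactly the kind of pixelated preview shown in these slides.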

Nikon D5 and D500 Push the Boundaries of DSLR

Another feature enabled by the RGB metering sensor is flicker reduction. While this is only available for video on the D5, the D500 is capable of waiting until the right moment to fire the shutter under flickering light, so as to achieve and maintain consistent exposure. Although Canon has offered this since the 7D Mark II, it’s the first time we’ve seen the feature in a Nikon camera.
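The timing problem itself is simple once the flicker phase has been detected. A hedged sketch, assuming 100 Hz flicker (50 Hz mains) and a function interface of our own invention:

```python
# Hypothetical sketch of flicker-avoidance timing: mains lighting flickers at
# twice the AC frequency (100 Hz on 50 Hz mains). Once the camera has measured
# the flicker phase via its metering sensor, it only needs to delay the
# shutter so the exposure lands on the next brightness peak.

def delay_to_next_peak(now_s, peak_phase_s, flicker_hz=100.0):
    """Seconds to wait so the exposure lands on the next brightness peak."""
    period = 1.0 / flicker_hz                  # 10 ms between peaks at 100 Hz
    elapsed = (now_s - peak_phase_s) % period  # time since the last peak
    return (period - elapsed) % period         # 0 if we're already on a peak
```

The worst-case delay is half a flicker period (5 ms at 100 Hz), which is why the feature can slightly reduce maximum burst rates under flickering light.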

Nikon D5 and D500 Push the Boundaries of DSLR

It’s worth emphasizing something Nikon clearly emphasized in its press conference: one of the true advantages of a DSLR over current mirrorless cameras is the lack of viewfinder lag and the true view of the scene, at least in between mirror blackouts, compared to the stop-motion sequence of last-shot images most mirrorless cameras display during fast bursts. This simply makes it easier to follow action with an optical viewfinder, which is why, in the video screenshot above, the photographer was able to keep the center AF point over his subject with the D5 while missing the subject with the mirrorless camera on the left. It’s worth noting, though, that Nikon’s own 1-series cameras provide a live feed even during continuous shooting, which circumvents this shortcoming of mirrorless (hint: that’s how mirrorless cameras will undoubtedly address the issue in the future).

DSLRs have also been optimized to make quick phase-detect AF measurements in between those mirror blackouts, allowing cameras like the D5 and Canon’s 1D X to acquire focus almost instantaneously even during 12 and 11 fps bursts. Impressive, to say the least. That’s not to say mirrorless cameras aren’t catching up: in good light, Samsung’s NX1 can often successfully refocus continuously at 15 fps. Which means, yes, we do have to call out Nikon for suggesting that all mirrorless cameras have ‘soft and slow AF’: we can’t help but wonder whether, in that particular video sequence, the Sony Alpha camera was left in AF-S, as cameras like the a7R II can in fact refocus successfully on approaching subjects (and when they can’t, the box doesn’t stay green as it does in the out-of-focus example in Nikon’s press conference video, unless the focus mode is AF-S).

Regardless, though, with mirrorless cameras you’re still left with the issue of difficulty in following the subject without a live view during bursts.

Video: Matt Granger

Nikon D5 and D500 Push the Boundaries of DSLR

Another feature that really helps fast-paced photography is direct access to AF point selection. The D500 is the first DX-format Nikon to sport a dedicated AF-selection joystick, pictured right below and to the left of the AF-ON button. In fact, short of the D4/D5-series of cameras, it’s the only Nikon camera to feature this joystick. Cameras like the D750 and D810 dedicate their D-pads (pictured here above the ‘info’ button) to AF point selection, which works well, but never felt as fast as Canon’s dedicated AF-selection joystick. So the joystick is a welcome addition.

And if we understand correctly, the D500’s touchscreen LCD can also be used for direct AF point selection (don’t quote us on this yet, though). We first saw this on the D5500, where in OVF shooting you could dedicate the right half of the touchscreen to AF point selection. This made it easy to use your thumb to jump to any AF point instantly, without your eye ever leaving the viewfinder. It’s actually faster than using a dedicated joystick, and we’re hoping to see similar functionality in the D500. The D5’s touchscreen, however, does not offer this functionality.

This brings me to a point I’ve been making to manufacturers for some time now: why not just replace the AF joystick and the area where your thumb rests with an AF touchpad? It could be relatively small, but with a 1:1 mapping to selectable AF points such that, over time, your thumb would learn to quickly jump to, or near, any desired AF point. For enhanced precision, make the touchpad pressure sensitive and have different pressures activate different granularities of AF point movement. Want to avoid accidentally shifting the AF point? Allow the user to adjust the pressure sensitivity of the touchpad. The possibilities are limitless with good hardware and some clever programming.

Speaking of fast AF point selection, those fond of the ‘focus and recompose’ technique should take note: turn on Nikon’s ‘3D tracking’ in AF-C, place your selected AF point over your subject, half-press and hold the shutter button, then recompose. This is probably the fastest way to select a different AF point: by having your camera do it automatically using Nikon’s industry leading subject tracking. If your subject falls outside of the AF area, just let the camera track it all the way out to the nearest AF point, then hold down the AF joystick (‘sub-selector’) to lock AF, and continue recomposing.

Nikon D5 and D500 Push the Boundaries of DSLR

Speaking of ergonomic improvements, notice anything different near the shutter button? That’s right: you no longer have to reassign the movie record button to ISO, because there’s now a dedicated ISO button! This is a boon for one-handed shooting, since the ISO button usually sits on the left side of Nikon DSLRs; previously, I’d always reassign the movie record button to ISO for exactly this reason.

It’s also worth pointing out the button next to the ISO button: the dedicated exposure compensation (EC) button. Common to most higher-end Nikons, this button is really not to be overlooked. It means easy, consistent access to exposure compensation no matter what shooting mode you’re in, including M mode with Auto ISO. Certain competitors without dedicated EC dials or buttons make it quite difficult to bias brightness in M mode with Auto ISO engaged (looking at you, 1D X, which makes you either sacrifice the SET button for EC or pull your eye away from the viewfinder to use the Q menu to adjust EC in M mode).

Nikon D5 and D500 Push the Boundaries of DSLR

Not to be overlooked is the new SB-5000 Speedlight, the company’s new flagship flash. Its standout feature is the ability to be triggered and controlled via radio frequency, a first for Nikon’s line of portable flashes. This brings the Nikon system in line with what Canon has offered for some time now, and obviates the need for third-party accessories.

Nikon claims that when the flash is paired with the WR-R10 Wireless Remote Adapter and a D5 or D500, it will operate without a direct line of sight at a range of up to approximately 98 feet (30 meters). With that same combination, the camera can control up to six groups, or 18 Speedlights in total. Photographer Todd Owyoung confirms that Nikon CLS features like TTL, manual power, groups, and flash exposure compensation settings are all accessible directly via the camera menu system, as they essentially always have been with Nikon’s sensible flash system design, just now with the added power of radio control. And with Nikon’s extensive button customization, this will all be accessible with a single button press.

The SB-5000 is a significant addition to Nikon’s flash line-up, not only for the radio-triggered control it brings where line-of-sight isn’t feasible or practical, but also because pairing flashes to the WR-R10 is arguably preferable to pairing with the outdated SU-800 commander, which is so dated that attaching it to your camera disables Auto ISO. That said, we’d really like to see a radio-controlled update to the SU-800, since the WR-R10 remote adapter lacks an IR/red AF-assist beam, which I personally find indispensable for dance floor photography at weddings, where I typically use only off-camera flash anyway and therefore prefer not to waste the weight of a full-blown flash on my hotshoe.

Which reminds me: I don’t mind the lack of a built-in flash one bit on the D500 (or D5, for that matter). I’ll take the bigger pentaprism box and its increased viewfinder magnification (or the space for a higher-resolution metering sensor) over an on-board flash I’d never use in place of the bounce or off-camera flash of a Speedlight.

Nikon D5 and D500 Push the Boundaries of DSLR

The D5 and D500 are Nikon’s first 4K-capable DSLRs, but 4K comes with some severe limitations. On the D5, recording is limited to 3 minutes per clip (29:59 on the D500), and both cameras record 4K UHD (no DCI 4K) with a heavy crop. It’s nearly a 1.5x crop (nearly Super 35) on the full-frame D5, while the D500 experiences an even larger crop for 4K, pictured in red above (the yellow rectangle outlines the DX/Super 35 area on the D500, which remains available for Full HD).
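Those crop figures are easy to sanity-check if we assume, as the heavy crop implies, that UHD is read out 1:1 from a 3840-pixel-wide region of the roughly 5568-pixel-wide sensors in both cameras (our assumption, not a Nikon specification):

```python
# Rough check of the 4K crop figures, assuming 4K UHD is read out 1:1 from a
# 3840-pixel-wide region of the sensor rather than downsampled from the full
# width. Sensor widths are approximate, based on the ~20.8/20.9MP counts.

def uhd_crop_factor(sensor_width_px, base_crop=1.0):
    """Crop factor of a native 3840px UHD readout, relative to full frame."""
    return base_crop * sensor_width_px / 3840

print(uhd_crop_factor(5568))                 # D5, full frame: ~1.45x
print(uhd_crop_factor(5568, base_crop=1.5))  # D500, DX body: ~2.2x
```

The ~1.45x result matches the ‘nearly Super 35’ description for the D5, and stacking it on the D500’s native 1.5x DX crop explains why its 4K region is so much tighter.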

All in all, and perhaps we’re being a bit cynical, we’re not terribly excited about the inclusion of 4K on these two cameras. Yes, 4K can be a handy thing to have (and its uses will only increase as more people buy 4K-capable displays), but there are numerous hints that these cameras aren’t making the most of it. Aside from the heavy crops above, the lack of oversampling and the near-certain presence of rolling shutter will likely limit the usefulness of 4K video from these cameras. Furthermore, the continued absence of focus peaking to aid manual focus, or zebra warnings to help set exposure, is starting to look like a major oversight. And although we’d love to be surprised, we’re concerned that Nikon’s continued adherence to the less efficient 8-bit H.264 compression and its reluctance to publicize bitrates mean in-camera capture won’t be as exciting as the headline specs suggest. That said, there’s always the option to output 4K over HDMI to an external recorder, so the cameras might find some use among more dedicated video shooters, if a good signal is sent over HDMI.

And then there’s autofocus in video, where Nikon DSLRs tend to fall well behind the competition. The lack of any form of on-sensor phase-detect AF, which even Canon’s nearest D500 competitor, the 7D Mark II, offers, means that AF in video is essentially unusable.

Nikon D5 and D500 Push the Boundaries of DSLR

On a more positive note, there are some solid additions to video on these cameras, like Active D-Lighting (ADL). We’ve always found Nikon’s ADL to be quite effective at reducing exposure to retain highlights, while boosting deeper tones to retain shadows. And ADL does a nice job of this global contrast reduction while attempting to preserve local contrast using its advanced tone-mapping algorithms.

This can be computationally intensive, though, so it has not previously been available in video. With the new EXPEED 5 image-processing engine, ADL is available in movies at resolutions up to 1080p.

For incredibly high contrast scenes, when ADL’s highest setting may not be enough to tame the scene’s extreme contrast, you can use the Flat picture profile and grade your footage later.

Nikon D5 and D500 Push the Boundaries of DSLR

On the D500, the EXPEED 5 processor even enables electronic vibration reduction (VR) in videos up to 1080p. Electronic VR can correct for horizontal and vertical movement, as well as rotation. This helps stabilize video footage, particularly when combined with optical VR in lenses.

The combination of electronic (or ‘digital’) and optical VR or IS (image stabilization) is something we tend to see more of in mirrorless cameras, like Olympus’ E-M5 II or the latest 1″-type sensor compacts from Canon and Sony. It’s great to see in a DSLR form-factor.

There’s no mention of this feature in the D5, though.

Nikon D5 and D500 Push the Boundaries of DSLR

Then there’s that continuously-connected wireless system. SnapBridge sounds very promising, making use of the low-energy Bluetooth standard to maintain a connection between camera and smart device so that images can be transferred without having to constantly re-establish the link. Our experience suggests that the easier a feature is to use, the more likely we are to use it and to appreciate its benefits (something that often crosses our minds when DSLR shooters tell us they don’t use video on their camera).

And in our connected age, there’s no excuse for camera manufacturers to not facilitate ease of image management and sharing. DSLRs have it a little tough in this regard: they’re not running the full-blown OS smartphone cameras have access to, which means that apps and ecosystems are limited in scope. But we’ve seen smartphone connectivity evolving in DSLRs, and we’re all for it. We’ll be curious to test out how SnapBridge functions on the D500.

We’ve covered a bit of ground in this slideshow, so if we were to sum up our overall thoughts on these cameras, they’d be as follows: we’re impressed that Nikon has taken a tried-and-true system and improved significantly on it. Nikon has addressed shortcomings, like lack of cross-type AF sensors, radio-controlled flash, buffer depth and burst speeds in DX format, as well as added some serious goodies: unprecedented AF frame coverage, low light AF ability, and automated focus calibration. Combine these with best-in-class object tracking in continuous AF, and the high performance sensors we typically see from Nikon that offer class-leading ISO performance and dynamic range, and we potentially have some industry-leading DSLRs on the horizon.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Nikon’s New D5 and D500 Push the Boundaries of DSLR

Posted in Uncategorized

 

Nikon’s New D5 and D500 Push the Boundaries of DSLR

08 Jan

Nikon D5 and D500 Push the Boundaries of DSLR

CES 2016 saw the announcement of two important DSLRs from Nikon, including an update to its flagship line, as well as an almost mythical product many had given up hope of ever seeing: a true D300 replacement.

The newly announced D5 is Nikon’s top-of-the-line professional DSLR, with a 20.8MP full frame sensor capable of shooting at up to 12fps with AF and 14fps without (with the mirror locked up). The headline feature, though, is arguably the new 153 point AF system with 99 cross-sensors. AF tracking with this new system will also benefit from the doubling in resolution of the RGB metering sensor used for scene analysis, and the D5 is the first Nikon camera capable of 4K video.

The biggest news though may be the long-awaited replacement of the D300S. The 20.9MP APS-C D500 is Nikon’s ‘best enthusiast DX offering’, and the term ‘enthusiast’ might be an understatement. With continuous shooting speeds of 10 fps and a 200 shot buffer for Raw images, the camera is aimed squarely at action and fast-paced photographers who don’t mind the smaller sensor, or even benefit from its extra reach. It features the same 153-point AF system and 180k-pixel RGB metering sensor of the D5, along with the EXPEED 5 processor. It can also capture 4K/UHD video and also features ‘SnapBridge’, a constant connection to a smartphone using Bluetooth.

Join us as we take a closer look at the technologies inside these cameras.

Nikon D5 and D500 Push the Boundaries of DSLR

Let’s start with the AF module, which is shared between both the D5 and D500. Here is the Multi-Cam 20K in all its glory. It’s a major step up from the Multi-Cam 3500FX module, variants of which were found in the D4s, D810, and D750. Up from 51 total AF points with 15 central cross-sensors, the module in the D5 and D500 offers 153 phase-detect points with 99 cross-sensors spread across much of the frame. 

The improvements don’t stop there though: the module has its own dedicated processor, to deal with the computationally intensive information coming from 153 AF points cross-referenced with the scene analysis system (more on that later). The center AF point is now sensitive down to -4 EV. All 152 other points are sensitive down to -3 EV, much like the D750 and D7200, albeit now with an even wider spread of points.

If Nikon’s claims are true, we can expect formidable AF performance in low light from the D5 and D500 – possibly the best from any DSLR. Although we’ve previously found Sony’s a7S to focus in at nearly -5 EV, its contrast-detect AF, and associated hunting, made it quite slow in practice. -4 EV phase-detect AF on a DSLR should be seriously impressive because it will likely be far more decisive than mirrorless, contrast-based systems. Additionally, cross-type sensors tend to perform better in low light and with low contrast subjects: cross-sensors are able to make focus measurements from subjects containing both horizontal and vertical detail (or, at least, detail that has either a horizontal or vertical component to it). In low light or backlit situations, where lowered contrast already makes it difficult to distinguish subject detail, sensors looking along multiple axes for detail to ‘lock on’ to simply have a higher chance of success than sensors that can only ‘see’ detail with a, say, horizontal component.

Nikon D5 and D500 Push the Boundaries of DSLR

Here’s the spread of AF points across the frame in the D5. The new AF module appears to provide more AF coverage across the frame than any previous Nikon full-frame camera (and likely any full-frame DSLR), though not quite as much coverage of phase-detect points as Sony’s recent Alpha 7R II mirrorless full-frame.

55 of the points are user-selectable, indicated by squares. The AF points indicated by dots are essentially assist points, used by the camera if your subject moves to or simply happens to fall in between the user-selectable points. We’ll get to why these assist points are still incredibly useful in a bit. 35 of these 55 points are cross-type: the outermost two sections of 10 points each as well as the central section of 15. This is more clearly demarcated in the next slide.

Nikon D5 and D500 Push the Boundaries of DSLR

Here’s what you get by putting the designed-for-full-frame Multi-Cam 20000 module inside the APS-C D500. The AF points stretch out to the literal edges of the frame. Red points indicate cross-sensors. While Canon’s nearest competitor, the 7D Mark II, comes quite close to this level of coverage – with all cross-sensors to boot – it doesn’t quite match it.

But it’s not even these headline features that excite us the most. It’s details such as the addition of an automated system for applying AF fine tune that have really caught our eyes. We’ve written before about how mirrorless cameras, with their direct measurement of focus (rather than a secondary sensor, acting as a proxy) tends to be more accurate when it comes to fine-focusing, especially when using fast lenses. However, we’re not alone in proposing the idea of using DSLR’s often slow, but highly precise live view autofocus to help make it easier to correct for the cumulative errors that can undermine dedicated sensor phase detection systems. Patents have been issued yet this is the first time we’ve seen it implemented in a final product. Automating the process means far more photographers may actually calibrate their lenses for more accurate focus. Furthermore, the reality of DSLR AF is that the optimal calibration values can depend on lighting, environmental factors, wide or tele end of zoom, and subject distance; hence, automating the process will realistically allow users to calibrate more often for any given scenario. Sadly, there’s no indication that calibration values can be saved for different focal lengths or subject distances (a la Sigma lenses via their USB dock), nor is there any mention of higher precision central points that give the latest Canon cameras’ central AF point nearly mirrorless-levels of precision.

We’ve not yet had a chance to use the D5/500’s automated AF fine tune but you can be sure it’ll be one of the first things we try when one gets into our studio.

Nikon D5 and D500 Push the Boundaries of DSLR

Remember that ‘scene analysis system’ the AF system cross-references with information from the 153 AF points? It’s enabled by essentially a whole separate image sensor in the DSLR whose sole job is to analyze the scene to understand how to expose and focus it. Now with 180,000 pixels in the D5 and D500, this sensor has doubled in resolution compared to the D4s, D810, and D750.

Confused by how this works? Let’s break this down. Your smartphone or mirrorless camera projects light from the lens directly onto the imaging sensor, which can ‘see’ the scene to focus and expose it properly, even find faces or other subjects and track them no matter where they move to in the frame. DSLRs have it much tougher – all the light entering the lens is being diverted either upward to the optical viewfinder, or downward to a dedicated AF module with its phase-detect sensors that understand only distance. Some of that light going to the viewfinder is itself diverted to a metering sensor, which determines appropriate exposure. Some time back, DSLR manufacturers replaced this rudimentary metering sensor with an actual RGB 2D array or, essentially, an image sensor.

While years ago this image sensor started at a measly 1,005 pixels in the D300, it did enable rudimentary subject tracking (‘3D tracking’ in Nikon terms), since the sensor provided some color and spatial information about the subject underneath any AF point, which the camera could combine with an understanding of subject distance from the phase-detect AF sensors to understand where your subject of interest is at any given moment. Today, cameras like the D750 and D810 provide uncanny subject tracking with their 91,000-pixel metering sensors – able in many cases to track even objects as specific as a human eye. Nikon DSLRs are the only DSLRs we’ve tested to-date that are capable of the level of class-leading tracking precision you see in the videos linked above (Canon’s newer DSLRs do well with distant subjects well isolated with respect to depth, but lag behind in more demanding applications requiring higher precision). Hence, a doubling in resolution of the metering sensor is likely to further Nikon’s lead in this arena. Furthermore, metering applications also benefit from the increased resolution: as the flowchart above indicates, numerous features like face exposure, fill-flash, Active D-Lighting, and highlight-weighted metering will experience increased accuracy. 

Nikon D5 and D500 Push the Boundaries of DSLR

Click on the button at the upper right of the image to view this entire slideshow in fullscreen for a better view.

So what exactly does this 180,000-pixel RGB metering sensor ‘see’ such that it can aid the camera in finding faces and tracking subjects? We’ve taken the liberty of doing some guesswork to simulate a ‘worst case’ representation of how a 180k-pixel sensor might ‘see’ a typical scene being photographed.

If we assume that the 180k figure refers to the total number of red, green and blue pixels, then we can surmise that there’s only, at best, 60k pixels of true spatial information for any given color. For a 3:2 aspect ratio, that’s about 300×200 pixels. So we’ve taken an image and reduced it to 300×200, then blown it back up for ease of viewing. That’s what you see above.

In reality, the metering sensor is likely to ‘see’ a bit more resolution, since the above only represents the spatial resolution of any one color channel (or 3 R, G, and B pixels combined). Even still, you can get an idea of how the sensor can detect faces, and even understand what was underneath your selected AF point when you initiated focus in order to track it even if it moves to a position underneath a different AF point. With such increases in resolution of the scene analysis system, we wouldn’t be surprised if DSLRs one day were capable of eye detection.

Nikon D5 and D500 Push the Boundaries of DSLR

Click on the button at the upper right of the image to view this entire slideshow in fullscreen for a better view.

Compare the last image to this one (use the left/right keys on your keyboard for ease): a 213×142 pixel representation of the same image that simulates the spatial resolution of any one color channel for the 91,000-pixel RGB metering sensor in previous full-frame Nikon DSLRs. It’s not hard to imagine how even with this level of understanding of a scene, previous Nikon full-frames were able to track quite well. But every bit of resolution helps increase precision of tracking, so while the image above isn’t a huge step down from the last image representing what the new 180k-pixel sensor sees, there’s still a significant difference.

The 180k-pixel metering sensor is a huge step up from previous DX offerings from Nikon, which only featured – at best – a 2,016-pixel RGB metering sensor. The 90-fold increase in metering sensor resolution should bring a level of subject tracking to the DX format never before seen.

Above is a 55×37 pixel representation of our previous image – and this time it's a sort of 'best case' representation of what the scene analysis system in cameras like the D7200 'saw'. Instead of showing what any one color channel sees, we've shown what 2,016 pixels in total look like, since one-third of this resolution is a pixelated, unintelligible mess. In other words, this image represents only a 30x drop in resolution compared to our 180k-pixel representation, and so likely underestimates the jump in performance the D500's scene analysis system should show over previous DX offerings (which still performed surprisingly well given their low resolution metering sensors).

Another feature enabled by the RGB metering sensor is flicker reduction. While this is only available in video on the D5, the D500 is capable of waiting until the right moment to fire the shutter under flickering light, so as to achieve and maintain proper exposure. Although Canon has been offering this since the 7D Mark II, it’s the first time we’re seeing this feature in a Nikon camera.

It's worth emphasizing something Nikon clearly stressed in its press conference: one true advantage of a DSLR over current mirrorless cameras is the lack of viewfinder lag and the true view of the scene – at least between mirror blackouts – compared to the stop-motion sequence of last-shot images most mirrorless cameras display during fast bursts. This simply makes it easier to follow action with an optical viewfinder. It's why, in the video screenshot above, the photographer was able to keep the center AF point over his subject with the D5, while missing the subject with the mirrorless camera example on the left. It's worth noting, though, that Nikon's own 1-series cameras provide a live feed even during continuous shooting, which circumvents this shortcoming (hint: that's how mirrorless cameras will undoubtedly address the issue in the future).

DSLRs have also been optimized to make quick phase-detect AF measurements between those mirror blackouts, allowing cameras like the D5 and Canon's 1D X to acquire AF almost instantaneously even during 12 and 11 fps bursts. Impressive, to say the least. That's not to say mirrorless cameras aren't catching up: in good light, Samsung's NX1 can often successfully refocus continuously at 15 fps. Which means, yes, we do have to call out Nikon for suggesting that all mirrorless cameras have 'soft and slow AF'. We can't help but wonder if, in that particular video sequence, the Sony Alpha camera was left in AF-S, as cameras like the a7R II can in fact refocus successfully on approaching subjects (and when they can't, the focus box doesn't remain green as it does in the out-of-focus example in Nikon's press conference video – unless the focus mode is set to AF-S).

Regardless, though, with mirrorless cameras you’re still left with the issue of difficulty in following the subject without a live view during bursts.

Video: Matt Granger

Another feature that really helps fast-paced photography is direct access to AF point selection. The D500 is the first DX-format Nikon to sport a dedicated AF-selection joystick, pictured right below and to the left of the AF-ON button. In fact, short of the D4/D5-series of cameras, it’s the only Nikon camera to feature this joystick. Cameras like the D750 and D810 dedicate their D-pads (pictured here above the ‘info’ button) to AF point selection, which works well, but never felt as fast as Canon’s dedicated AF-selection joystick. So the joystick is a welcome addition.

And if we understand correctly, the D500's touchscreen LCD can also be used for direct AF point selection. We first saw this on the D5500, where in OVF shooting you could dedicate the right half of the touchscreen to AF point selection, making it easy to use your thumb to jump to any AF point instantly without your eye ever leaving the viewfinder. It's actually faster than a dedicated joystick, and we're hoping to see similar functionality in the D500. The D5, however, does not offer this functionality with its touchscreen.

Speaking of fast AF point selection, those fond of the ‘focus and recompose’ technique should take note: turn on Nikon’s ‘3D tracking’ in AF-C, place your selected AF point over your subject, half-press and hold the shutter button, then recompose. This is probably the fastest way to select a different AF point: by having your camera do it automatically using Nikon’s industry leading subject tracking. If your subject falls outside of the AF area, just let the camera track it all the way out to the nearest AF point, then hold down the AF joystick (‘sub-selector’) to lock AF, and continue recomposing.

Speaking of ergonomic improvements, notice anything different near the shutter button? That's right: you no longer have to re-assign the movie record button to ISO, because there's now a dedicated ISO button. This is a boon for one-handed shooting: previously, I'd always reassign the movie record button to ISO, since the ISO button usually sits on the left side of Nikon DSLRs.

It's also worth pointing out the button next to the ISO button: a dedicated exposure compensation (EC) button. Common to most higher-end Nikons, this button is not to be overlooked. It means easy, consistent access to exposure compensation no matter what shooting mode you're in, including M mode with Auto ISO. Certain competitors without dedicated EC dials or buttons make it quite difficult to bias brightness in M mode with Auto ISO engaged (looking at you, 1D X, which makes you either sacrifice the SET button for EC or pull your eye away from the viewfinder and use the Q menu to adjust EC in M mode).

Not to be overlooked is the new SB-5000 Speedlight: the company’s new flagship flash. Its standout feature is its ability to operate and trigger via radio frequency, a first for Nikon’s line of portable flashes. This brings the Nikon system in-line with what Canon has offered for some time now, and also obviates the need for 3rd party accessories.

Nikon claims that when the flash is paired with the WR-R10 Wireless Remote Adapter set and a D5 or D500, the flash will operate without a direct line of sight at a range of up to approximately 98 feet (30 meters). With that same combination, the flash will be able to control up to six groups or 18 Speedlights. We expect Nikon CLS features like TTL, Manual power, Groups, and Flash Exposure Compensation settings will be accessible via the camera menu system. 

This is a significant addition to Nikon's flash line-up, not only for the radio-triggered control it brings where line-of-sight isn't feasible or practical, but also because pairing flashes to the WR-R10 is arguably preferable to pairing with the outdated SU-800 commander (which is so dated that attaching it to your camera disables Auto ISO).

Also new for the SB-5000 is a redesigned cooling system promising improved consecutive firing times before cool-downs, and 120 continuous shots at 5-second intervals. Controls are updated with an ‘i’ button for frequently used settings, and the flash head maintains tilt-and-rotate capability.

The D5 and D500 are Nikon’s first 4K capable cameras, but 4K comes with some severe limitations. On the D5, recording is only available for 3 minutes at a time (29:59 for the D500), and both cameras record 4K UHD (no DCI 4K) with a heavy crop factor. It’s nearly a 1.5x crop factor (nearly Super 35) on the full-frame D5, while the D500 experiences an even larger crop factor for 4K, pictured in red above (the yellow rectangle outlines the DX/Super 35 area on the D500, compatible with Full HD).
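For a sense of where that 'nearly 1.5x' figure comes from, here's a rough sketch assuming a hypothetical centered 1:1 (pixel-for-pixel) UHD readout from a 5568-pixel-wide sensor; the sensor widths below are our assumptions, not figures Nikon has published:

```python
def uhd_crop_factor(sensor_width_mm, sensor_width_px,
                    capture_px=3840, ff_width_mm=36.0):
    # Width of sensor actually used by a centered 1:1 UHD readout,
    # expressed as a crop factor relative to full frame.
    used_mm = sensor_width_mm * capture_px / sensor_width_px
    return ff_width_mm / used_mm

# Assumed specs: D5 full frame ~35.9mm / 5568px; D500 DX ~23.5mm / 5568px
print(round(uhd_crop_factor(35.9, 5568), 2))  # ~1.45 -- "nearly 1.5x"
print(round(uhd_crop_factor(23.5, 5568), 2))  # ~2.22 -- far tighter on DX
```

Under these assumptions the D500's 4K window is roughly a 2.2x crop relative to full frame, consistent with the red rectangle pictured above.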

All in all – and perhaps we're being a bit cynical – we're not terribly excited about the inclusion of 4K on these two cameras. Yes, 4K can be a handy thing to have (and its uses will only increase as more people buy 4K-capable displays), but there are numerous hints that these cameras aren't making the most of it. Aside from the heavy crop factors above, the lack of oversampling and the almost certain presence of rolling shutter will likely limit the usefulness of 4K video from these cameras. Furthermore, the continued absence of focus peaking to aid manual focus, or zebra warnings to help set exposure, is starting to look like a major oversight. And, although we'd love to be surprised, we're concerned that Nikon's continued adherence to the less efficient 8-bit H.264 compression system, and its reluctance to publicize bitrates, means in-camera capture won't be as exciting as the headline specs suggest. That said, there's always the option to output 4K over HDMI to an external recorder, so it might find some use among more dedicated video shooters – if a good signal is sent over HDMI.

On a more positive note, there are some solid additions to video on these cameras, like Active D-Lighting (ADL). We’ve always found Nikon’s ADL to be quite effective at reducing exposure to retain highlights, while boosting deeper tones to retain shadows. And ADL does a nice job of this global contrast reduction while attempting to preserve local contrast using its advanced tone-mapping algorithms.

This can be computationally intensive, though, so it has not previously been available in video. With the new EXPEED 5 image-processing engine, ADL is now available in movies at resolutions up to 1080p.

For incredibly high contrast scenes, when ADL’s highest setting may not be enough to tame the scene’s extreme contrast, you can use the Flat picture profile and grade your footage later.

On the D500, the EXPEED 5 processor even enables electronic vibration reduction (VR) in videos up to 1080p. Electronic VR can correct for horizontal and vertical movement, as well as rotation. This helps stabilize video footage, particularly when combined with optical VR in lenses.

The combination of electronic (or ‘digital’) and optical VR or IS (image stabilization) is something we tend to see more of in mirrorless cameras, like Olympus’ E-M5 II or the latest 1″-type sensor compacts from Canon and Sony. It’s great to see in a DSLR form-factor.

There’s no mention of this feature in the D5, though.

Then there's that continuously-connected wireless system. SnapBridge sounds very impressive, using the low-energy Bluetooth standard to maintain a connection between the camera and a smart device so that images can be transferred without constantly re-establishing connections. Our experience suggests that the easier a feature is to use, the more likely we are to use it and to appreciate its benefits (something that often crosses our minds when DSLR shooters tell us they don't use video on their camera).

And in our connected age, there’s no excuse for camera manufacturers to not facilitate ease of image management and sharing. DSLRs have it a little tough in this regard: they’re not running the full-blown OS smartphone cameras have access to, which means that apps and ecosystems are limited in scope. But we’ve seen smartphone connectivity evolving in DSLRs, and we’re all for it. We’ll be curious to test out how SnapBridge functions on the D500.

Articles: Digital Photography Review (dpreview.com)

 


 

Blurring Boundaries: 14 Modern Houses That Open to the Air

15 Sep

[ By Steph in Architecture & Houses & Residential. ]


Ceilings retract and glazed walls swing open to connect intimate indoor spaces with courtyards, terraces and gardens in these modern residences blurring the lines between indoors and out. Located everywhere from Colorado to Kuala Lumpur, these open-air homes take advantage of mild climates and spectacular views, with alternatives to conventional walls enabling natural ventilation and a sense of being connected to nature.

Kloof House, Johannesburg, South Africa

Every room in the sculptural Kloof House by Nico van der Meulen Architects opens directly to the outdoors via gigantic sliding glass walls. The kitchen, living room, dining room and bedrooms can all be fully connected to various outdoor spaces like courtyards, balconies and gardens. The swimming pool becomes part of the living room area, and one bedroom connects to a cantilevered koi pond.

The Fish House, Singapore

This modern tropical residence in Singapore seamlessly integrates courtyard spaces into the interiors on every level for natural ventilation and nearly uninterrupted views of the ocean. A glass-walled lounge cantilevers out over the swimming pool, and residents can walk up onto the green roof, which is partially shaded with solar panels.

Loft 24-7, Sao Paulo, Brazil

Decks and terraces connect the various freestanding volumes that make up Loft 24-7 by Fernanda Marques Arquitetos Associados, with the effect continued indoors using glazed walls and ceilings. “Being inside feeling like one is outside. I believe that to be a key issue in understanding the interior design being produced today,” says the architect. “In times when environmental awareness is growing, and, of course, also the desire to be close to nature.”

Casa P, Sao Paulo, Brazil

The ground floor of Casa P by Studio MK27 is enclosed with a slatted wooden ‘freijó’ wall, which acts as a privacy screen and offers natural ventilation. These oversized shutters can be opened completely to connect the interiors to the courtyard. Two more concrete volumes are stacked on top of the first, with the topmost one boasting all-glass walls for optimal views.

Willow House, Singapore

Greenery from the planted roof drips down into a living space via an open oculus, living spaces overlook swimming pools and reflecting pools, and trees grow indoors in this boundary-blurring house by Guz Architects. Taking advantage of Singapore’s warm, humid climate, the tropical residence blends traditional Singaporean architectural typologies with modern aesthetics.



WebUrbanist

 
