
Posts Tagged ‘Leap’

Canon EOS R5: The long game ends with a big leap

23 Apr
The fruit of Canon’s R&D emerges from the shadows

Canon has been the best-selling camera brand for most of the digital era. Different people might ascribe this dominance to different strengths, be that lens design, ergonomics, color response or simply very successful marketing that has produced a history of cameras people want at a price they're willing to pay. For the past few years, though, its once-proud reputation for innovation hasn't been much in evidence.

Canon's US press releases still proudly boast about how many patents the company has been granted, but its electronics development prowess hasn't shone especially brightly in recent models. The EOS R5's disclosed specs reveal a camera capable of generating and processing immense amounts of data. This suggests a leap forward in Canon's semiconductor design, and one that might shed some light on why some of its most recent cameras have seemed somewhat lackluster.

Blimey!

It's difficult to overstate what a technical challenge it is to capture and record 8K footage. Just four years ago, virtually every camera maker we interviewed said 4K was genuinely difficult because of the heat generated in the process, and many models still stop recording or become very hot if they shoot for extended periods. Canon is promising a camera that can capture four times as much data, from the full width of its sensor, while still running its Dual Pixel AF system in parallel.

The EOS R5's disclosed specs suggest a leap forward in Canon's semiconductor design

If that doesn’t strike you as ground-breaking, consider that the EOS R5 can shoot 4K at up to 120 fps. Then look around the current batch of large sensor cameras and count how many can achieve 4K/60. It’s a short list, and one that gets even shorter if you mark off the ones that can only do so using a cropped region of their sensor. The EOS R5 almost certainly sub-samples to achieve this, but that’s still a lot of data.
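To put rough numbers on that, here's a back-of-envelope sketch in Python, assuming standard UHD frame sizes; actual readout loads also depend on bit depth, subsampling and overheads that aren't visible from the spec sheet:

```python
# Back-of-envelope pixel throughput, assuming UHD frame sizes.
# Real readout loads also depend on bit depth, subsampling and
# processing overheads we can't know from the outside.

def throughput_mpx_per_s(width, height, fps):
    """Pixels read out per second, in megapixels."""
    return width * height * fps / 1e6

modes = {
    "8K/30":  (7680, 4320, 30),
    "4K/120": (3840, 2160, 120),
    "4K/60":  (3840, 2160, 60),
    "4K/30":  (3840, 2160, 30),
}

for name, (w, h, fps) in modes.items():
    print(f"{name:>7}: {throughput_mpx_per_s(w, h, fps):6.0f} MPx/s")

# 8K/30 and 4K/120 both come out at ~995 MPx/s -- roughly four times
# the ~249 MPx/s of a conventional 4K/30 pipeline.
```

By this crude measure, 4K/120 is every bit as demanding as 8K/30.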

We don't yet know the camera's full specs, but this all points to a radical improvement in sensor and processing technologies.

A history of innovation

Canon was the first camera maker to fully embrace CMOS technology for its DSLRs, which gave it industry-leading performance for many years (it was another seven years until we saw a camera with a CMOS chip from Sony Semiconductor). It was also the first company to produce a large sensor camera that could capture Full HD video. Technologies such as Dual Pixel AF show that the company has continued to work away at pushing its cameras forwards.

And yet, the last few generations of Canon stills cameras haven't always sparkled, particularly in terms of video: arguably the most processor-intensive feature. The EOS 5D Mark IV had to crop its sensor to deliver 4K, and still showed a fair amount of rolling shutter when it did, suggesting a major bottleneck either in sensor readout rate or in the ability to process that data fast enough.

It's also interesting to note that Canon cameras tend to achieve much lower shot-count ratings per watt-hour of battery capacity than other companies manage, which is likely indicative of lower processor efficiency.
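That metric is just the CIPA shot rating divided by the battery's watt-hour capacity. A quick hypothetical illustration (the ratings and battery figures below are invented placeholders, not real numbers for any specific camera):

```python
# Hypothetical illustration of the shots-per-watt-hour comparison.
# These figures are made-up placeholders, not real CIPA ratings.

def shots_per_wh(cipa_shots, volts, amp_hours):
    """CIPA shot rating divided by battery capacity in watt-hours."""
    return cipa_shots / (volts * amp_hours)

print(f"{shots_per_wh(400, 7.2, 1.80):.0f} shots/Wh")  # ~31 from a ~13 Wh pack
print(f"{shots_per_wh(700, 7.2, 2.28):.0f} shots/Wh")  # ~43 from a ~16.4 Wh pack
```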

The EOS R, as the first RF-mount camera, had plenty of innovations in it, but its cropped 4K video suggests a lack of processing power similar to that of the EOS 5D IV, which wasn't especially cutting-edge two years earlier. The EOS R5 is a vast leap forward from this.

And this has seen Canon's specs begin to fall behind. The need to crop to produce 4K video was already off the pace when the 5D IV was launched in 2016 (Sony's a7R II had offered full-width 4K capture a year earlier), so seeing the same limitation in 2018's EOS R looked a little embarrassing compared to the oversampled 4K footage offered by Leica, Nikon, Panasonic, Sigma and Sony in their contemporaneous full-frame models. It's a similar story with the EOS 6D II and EOS RP and, despite the appearance of a novel 32MP sensor in the EOS M6 Mark II, the need to sub-sample that chip to generate its video also hints at a processing bottleneck.

So why had this company with a history of innovation dropped so far behind its rivals?

What’s been going on, then?

While it sources 1-inch and smaller sensors from other companies, Canon makes its own APS-C and full-frame sensors and generally hasn’t made them available to rival camera companies. This means that Canon has to recoup its R&D costs entirely from its own models, whereas most other camera makers buy all of their sensors in from a supplier that can spread out those costs amongst its many customers. That obviously creates an incentive for Canon to keep using the same chips for as long as it can.

The differing challenges of building cinema and consumer cameras make it impossible to say whether know-how has been reserved for the Cinema EOS line or has trickled down from it.

Another possibility is that Canon has been keeping this know-how for its more profitable pro video users, holding the main EOS line back to avoid cannibalizing its Cinema EOS sales. But that isn't necessarily the case: Cinema EOS cameras work in an environment where large batteries and built-in fans are the norm, so there isn't the same pressure on them to be as efficient as the mainline EOS cameras need to be. So I'm not sure that's what we've seen: if anything, it's just as likely that the EOS R5 is benefiting from lessons Canon learned in developing the Cinema EOS line.

Playing the long game

Instead, I wonder whether Canon decided to step back from the constant two-year development cycle for sensors and processors that other camera makers build their model ranges around, and to conduct a longer-term project to reclaim the technological lead it had previously enjoyed.

There are, perhaps, parallels with the way Canon approached its switch to autofocus, back in the 1980s: seemingly content to let Minolta and Nikon own the AF market, only to leap ahead with its EOS system.

Taking a longer-term approach would explain both why the company had dropped so far behind and how it can now not just catch up but jump ahead

We may never know for sure, but I can't think of a time when Canon has so clearly fallen behind what the rest of its rivals are offering. That's why it looks to me like the apparent lull in Canon's innovation might have been because it wasn't content just to keep up with its rivals, but was instead willing to cede a little ground in the short term so that it could take a significant lead in the long run. That would explain both why the company had seemingly dropped so far behind and how it now looks able not just to catch up, but to jump ahead.

Of course, this is likely to be little comfort for customers who bought Canon cameras at the end of the previous cycle, built on technology that was significantly outdated compared with rival models.

So while the rest of the market has been constantly tussling over small gains, seemingly leaving Canon in the dust, the industry’s biggest player appears to have been patiently working to leapfrog them all, taking a bigger lead than we’ve become used to seeing in the industry.

Articles: Digital Photography Review (dpreview.com)

 

 

Sony’s ‘Real-time tracking’ is a big leap forward for autofocus

14 Feb

One of the biggest frustrations when taking pictures is discovering that your photos are out of focus. Over the past few years, camera autofocus systems from every manufacturer have become much more sophisticated, but they’ve also become more complex. If you want to utilize them to their full potential, you’re often required to change settings for different scenarios.

The autofocus system introduced in Sony's a6400, and brought to the a9 via a firmware update, aims to change that, making autofocus simple for everyone from casual users to pro photographers. And while all manufacturers are trying to make autofocus more intelligent and easier to use, our first impression is that, in practice, Sony's new 'real-time tracking' AF system really does take away the complexity and removes much of the headache of autofocus, so that you can focus on the action, the moment and your composition. Spoiler: if you'd just like to see how versatile this system can be, skip ahead to our real-world demonstration video below.

When I initiated focus on this skater, he was far away and tiny in the frame, so the a9 used general subject tracking to lock on to him at first. It then tracked him fully through his run, switching automatically to Face Detect as he approached. This seamless tracking, combined with a 20fps burst, allowed me to focus on my composition and get the lighting just right, without having to constrain myself by keeping an AF point over his face. For fast-paced erratic motion, good subject tracking can make or break your shot.

So what is ‘Real-time tracking’? Simply now called ‘Tracking’, it’s Sony’s new subject tracking mode. Subject tracking allows you to indicate to your camera what your subject is, which you then trust it to track. Simply place your AF point over the subject, half-press the shutter to focus, and the camera will keep track of it no matter where it moves to in the frame – by automatically shifting the AF points as necessary. The best implementation we’d seen until recently was Nikon’s 3D Tracking on its DSLRs. Sony’s new system takes some giant leaps forward, replacing the ‘Lock-on AF’ mode that was often unreliable, sometimes jumping to unrelated subjects far away or tracking an entire human body and missing focus on the face and eyes. The new system is rock-solid, meaning you can just trust it to track and focus your subject while you concentrate on composing your photos.

You can trust it to track and focus your subject while you concentrate on composing your photos

What makes the new system better? Real-time tracking now uses additional information to track your subject – so much information, in fact, that it feels as if the autofocus system really understands who or what your subject is, making it arguably the ‘stickiest’ system we’ve seen to date.


Subject tracking isn't just for action; I used it even in this shot. Good subject tracking, like Sony's 'Real-time tracking', keeps track of your subject for you, freeing you up to try many different poses and framings quickly. Most of these 20 shots were captured in under 19 seconds, without ever letting off the AF-ON button. The camera never lost our model, not even when her face went behind highly reflective glass. The seamless transitioning between Eye AF and general subject tracking helps the AF system act in such a robust manner. Not having to think about focus lets you work faster and try more poses and compositions, so you arrive at a shot you're happy with sooner.

Pattern recognition is now used to identify your subject, while color, brightness and distance information is applied more intelligently for tracking so that, for example, the camera won't jump from a near subject to a very far one. What's most clever, though, is the use of machine-learning-trained face and eye detection to help the camera truly understand a human subject.
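Sony hasn't published how these cues are combined, but purely as an illustrative sketch, a tracker in this spirit might score each candidate region by how well its cues match the subject model and penalize implausible jumps in distance. Every name, weight and number below is invented:

```python
# Purely illustrative -- not Sony's algorithm. Score candidates by
# cue similarity to the subject model, penalizing big depth jumps.

def candidate_score(subject, candidate, weights=None):
    """Higher is better; cue similarities are assumed to lie in [0, 1]."""
    weights = weights or {"pattern": 0.4, "color": 0.25,
                          "brightness": 0.1, "face": 0.25}
    cue_score = sum(w * candidate[cue] for cue, w in weights.items())
    # Don't jump from a near subject to a very far one:
    depth_jump = abs(candidate["distance_m"] - subject["distance_m"])
    return cue_score / (1.0 + depth_jump)

subject = {"distance_m": 3.0}
near = {"pattern": 0.8, "color": 0.7, "brightness": 0.9, "face": 0.6, "distance_m": 3.2}
far  = {"pattern": 0.8, "color": 0.9, "brightness": 0.9, "face": 0.0, "distance_m": 25.0}
print(candidate_score(subject, near) > candidate_score(subject, far))  # True
```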

What do we mean when we say ‘machine-learning’? More and more camera – and smartphone – manufacturers are using machine learning to improve everything from image quality to autofocus. Here, Sony has essentially trained a model to detect human subjects, faces, and eyes by feeding it hundreds, thousands, perhaps millions of images of humans. These images of faces and eyes of different people, kids, adults, even animals, in different positions have been previously tagged (presumably with human input) to identify the eyes and faces – this allows Sony’s AF system to ‘learn’ and build up a model for detecting human and animal eyes in a very robust manner.

Machine learning… allows Sony’s AF system to detect human and animal eyes in a very robust manner

This model is then used in real-time by the camera’s AF system to detect eyes and understand your subject in the camera’s new ‘real-time tracking’ mode. While companies like Olympus and Panasonic are using similar machine-learning approaches to detect bodies, trains, motorcyclists and more, Sony’s system is the most versatile in our initial testing.

Real-time tracking's ability to seamlessly transition from Eye AF to general subject tracking means that even if the camera was tracking an eye right up until this perfect candid moment, your subject will remain in focus when the eye disappears – so you don't miss short-lived moments such as this one. Note: this image is illustrative and was not shot using Sony's 'Tracking' mode.

What does all of this mean for the photographer? Most importantly, it means you have an autofocus system that works reliably in almost any situation. Place your AF point over your subject, half-press the shutter, and real-time tracking will gather pattern, color, brightness, distance, face and eye information about your subject – comprehensively enough to keep track of it in real time. That frees you to focus on the composition and the moment. There is no longer any need to focus (pun intended) on keeping your AF point over your subject, which for years has constrained composition and made it difficult to maintain focus on erratic subjects.

There is no need to focus on keeping your AF point over your subject, which for years has constrained composition and made it difficult to focus on erratic subjects

The best part of this system is that it just works, seamlessly transitioning between Eye AF and Face Detect and ‘general’ subject tracking. If you’re tracking a human, the camera will always prioritize the eye. If it can’t find the eye, it’ll prioritize its face. Even if your subject turns away so that you can’t see their face, or is momentarily occluded, real-time tracking will continue to track your subject, instantly switching back to the face or eye when they’re once again visible. This means your subject is almost always already focused, ready for you to snap the exact moment you wish to capture.
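Here is a minimal sketch of that priority cascade, reflecting our reading of the observed behavior rather than Sony's implementation; the "detectors" are trivial stand-ins for the real trained models:

```python
# A minimal sketch of the eye -> face -> subject fallback described
# above. Our interpretation of the behavior, not Sony's code; the
# detectors below are hypothetical stand-ins for trained models.

def detect_eye(frame):
    return frame.get("eye")    # stand-in: the camera runs an eye-detection model

def detect_face(frame):
    return frame.get("face")   # stand-in: the camera runs a face-detection model

def choose_af_target(frame, subject_region):
    """Prefer the eye, fall back to the face, then to general tracking."""
    eye = detect_eye(frame)
    if eye is not None:
        return "Eye AF", eye
    face = detect_face(frame)
    if face is not None:
        return "Face Detect", face
    return "Subject tracking", subject_region

# Subject turns away: no eye or face, so general tracking carries on...
print(choose_af_target({}, "torso box"))
# ...then the camera snaps straight back to the eye when it reappears.
print(choose_af_target({"eye": "left eye box", "face": "face box"}, "torso box"))
```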


The tracking mode lets you specify a subject, and it'll prioritize their eye, switching to face detection if it loses the eye and treating them as a generic subject to track if they, for instance, turn their head away from the camera. Follow the entire sequence to see how the camera stays focused on my subject no matter where she walks in the frame.

One of the best things about this behavior is how it handles scenes with multiple people – a common occurrence at weddings, events, or even in your household. Although Eye AF was incredibly sticky and tracked the eyes of the subject you initiated AF upon, it would sometimes wander to another subject, particularly if yours looked away from the camera long enough (as toddlers often do). Real-time tracking simply transitions from Eye AF to general subject tracking if the subject looks away, meaning that as soon as they look back, the camera's ready to focus on the eye and take the shot with minimal lag or fuss. The camera won't jump to another person simply because your subject looked away; instead, it'll stick with your subject for as long as you keep the shutter button half-depressed.

Performance-wise it’s the stickiest tracking we’ve ever seen…

And performance-wise it's the stickiest tracking we've ever seen, doggedly following your subject even as it looks different to the camera while it moves or you change your position and composition. Have a look at our real-world testing below, with an erratic toddler and multiple people in the scene. This is HDMI output from an a6400 with the 24mm F1.4 GM lens; you can see that focus is achieved and maintained throughout most of the video, indicated by the filled-in green circle at the bottom left of the frame.

Real-time tracking isn’t only useful for human subjects. Rather, it simply prioritizes whatever subject you place under the autofocus point, be it people or pets, food, a distant mountain, or a nearby flower. It’s that versatile.

In a nutshell, this means that you rarely have to worry about changing autofocus modes on your camera, no matter what sort of photography you’re doing. What’s really exciting is that we’ll surely see this system implemented, and evolved, in future cameras. And while nearly all manufacturers are working toward this sort of simple subject tracking, and incorporating some elements of machine learning, our initial testing suggests Sony’s new system means you don’t have to think about how it works; you can just trust it to stick to your subject better than any system we’ve tested to date.


Addendum: do I need a dedicated Eye AF button anymore?

There’s actually not much need to assign a custom button to Eye AF anymore, since real-time tracking already uses Eye AF on your intended subject. In fact, using real-time tracking is more reliable, since if your subject looks away, it won’t jump to another face in the scene as Eye AF tends to do. If you’ve ever tried to photograph a kids’ birthday party or a wedding, you know how frustrating it can be when Eye AF jumps off to someone other than your intended subject just because he or she looked away for long enough. Real-time tracking ensures the camera stays locked on your subject for as long as your shutter button remains half-depressed, so your subject is already in focus when he or she looks back at the camera or makes that perfect expression. This allows you to nail that decisive, candid moment.

Articles: Digital Photography Review (dpreview.com)

 

 

The iPhone XS is a leap forward in computational photography

05 Oct

Aside from folks who still shoot film, almost nobody uses the term ‘digital photography’ anymore – it’s simply ‘photography,’ just as we don’t keep our food in an ‘electric refrigerator.’ Given the changes in the camera system in Apple’s latest iPhone models, we’re headed down a path where the term ‘computational photography’ will also just be referred to as ‘photography,’ at least by the majority of photographers.

The iPhone XS and iPhone XS Max feature the same dual-camera and processing hardware; the upcoming iPhone XR also sports the same processing power, but with only a single camera: the same wide-angle F1.8 module as the other models. The image sensor captures 12 megapixels of data, the same resolution as every previous model dating back to the iPhone 6s, but the pixels themselves are larger at 1.4 µm, compared to 1.22 µm for the iPhone X – meaning a slightly larger sensor. (For more on the camera's specs, see "iPhone XS, XS Max, and XR cameras: what you need to know.")
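You can sanity-check the "slightly larger sensor" claim from the published pixel pitches, assuming the usual 4032 × 3024 (12MP, 4:3) output:

```python
# Rough arithmetic from the published pixel pitches, assuming a
# 4032 x 3024 (12MP, 4:3) output and ignoring any border pixels.

W_PX, H_PX = 4032, 3024

def sensor_mm(pitch_um):
    """Approximate active-area dimensions in millimeters."""
    return W_PX * pitch_um / 1000, H_PX * pitch_um / 1000

print("iPhone XS: %.2f x %.2f mm" % sensor_mm(1.40))   # ~5.64 x 4.23 mm
print("iPhone X:  %.2f x %.2f mm" % sensor_mm(1.22))   # ~4.92 x 3.69 mm
print("Area ratio: %.2fx" % ((1.40 / 1.22) ** 2))      # ~1.32x the area
```

Same pixel count, but roughly a third more light-gathering area.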


More important this year are the upgraded computational power and the software it enables: the A12 Bionic processor, the eight-core 'Neural Engine,' and the image signal processor (ISP) dedicated to camera functions. The results include a new Smart HDR feature that rapidly combines multiple exposures for every capture, and improved depth-of-field simulation in Portrait mode. (All the examples throughout are straight out of the device.)

Smart HDR

This feature intrigued me the most, because last year’s iPhone 8, iPhone 8 Plus and iPhone X introduced HDR as an always-on feature. (See “HDR is enabled by default on the iPhone 8 Plus, and that’s a really good thing.”) HDR typically blends two or more images of varying exposures to end up with a shot with increased dynamic range, but doing so introduces time as a factor; if objects are in motion, the delay between captures makes those objects blurry. Smart HDR captures many interframes to gather additional highlight information, and may help avoid motion blur when all the slices are merged into the final product.
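To make the idea concrete, here's a toy illustration of per-pixel multi-frame blending – emphatically not Apple's pipeline, just the general shape of the technique, with mid-tone-weighted averaging standing in for whatever Apple actually does:

```python
import numpy as np

# Toy multi-frame blending -- not Apple's pipeline. Each frame is
# weighted per pixel by how well exposed it is (values near mid-grey
# count for more), then the stack is normalized and summed.

def fuse_exposures(frames):
    """frames: list of float arrays in [0, 1], all the same shape."""
    stack = np.stack(frames)                         # (n, H, W)
    weights = np.exp(-((stack - 0.5) ** 2) / 0.08)   # favor mid-tones
    weights /= weights.sum(axis=0, keepdims=True)
    return (weights * stack).sum(axis=0)

dark   = np.full((2, 2), 0.1)   # short exposure holds the highlights
bright = np.full((2, 2), 0.9)   # long exposure lifts the shadows
print(fuse_exposures([dark, bright]))   # blends toward mid-tones: 0.5 everywhere
```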

The iPhone XS image almost looks as if it was shot using an off-camera flash

Testing Smart HDR proved to be a challenge because, unlike the HDR feature in earlier models, the Photos app doesn't label Smart HDR images as such. After shooting in conditions ripe for HDR – bright backgrounds with dark foregrounds, low light at dusk – nothing carried that HDR indicator. I initially wasn't sure whether the image quality was down to Smart HDR or to the larger sensor pixels; no doubt the latter deserves some credit, but not that much.

Comparing shots with those taken with an iPhone X reveals Smart HDR at work, though. In the following photo at dusk, I wanted to see how well the cameras performed in the fading light and also with motion in the scene (the flying sand). The iPhone X image is dark, but you still get a fair bit of detail in the girl’s face and legs, which are away from the sun. The iPhone XS image almost looks as if it was shot using an off-camera flash, likely because the interframes allow highlight retention and motion freezing even as ‘shutter speeds’ become longer.

Shot with iPhone X
Shot with iPhone XS

As another example, you can see the Smart HDR on the iPhone XS working in even darker light compared to the iPhone X shot. At this point there’s more noise in both images, but it’s far more pronounced in the iPhone X photo.

Shot with iPhone X
Shot with iPhone XS

Smart HDR doesn't seem to kick in when shooting in burst mode, or the effect isn't as pronounced. Considering the following photo was captured at 1/1000 sec and the foreground isn't a silhouette, the result isn't bad.

iPhone XS image shot in burst mode. It’s dark, but picks up the detail in the sand.
iPhone XS non-burst image captured less than a minute after the photo above.

Portrait Mode

The iPhone's Portrait mode is a clever cheat involving a lot of processing power. On the iPhone X and iPhone 8 Plus, Apple used the dual rear cameras to create a depth map to isolate a foreground subject – usually a person, but not limited to people-shaped objects – and then blur the background based on depth. It was a hit-or-miss feature that sometimes created a nice shallow depth-of-field effect, and sometimes resulted in laughable, blurry misfires.

On the iPhone XS and iPhone XS Max, Apple augments the dual cameras with Neural Engine processing to generate better depth maps, including a segmentation mask that improves detail around the edges of the subject. It's still not perfect – one pro photographer I know immediately called the look out as terrible – but it is improved, and in some cases most people may not recognize that it's all done in software.

The notable addition to Portrait mode in the iPhone XS and iPhone XS Max is the ability to edit the simulated depth of field within the Photos app. A depth control slider appears for Portrait mode photos, with f-stop values from F1.4 to F16. The algorithm that creates the blur also seems improved, creating a more natural effect than a simple Gaussian blur.
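Apple hasn't published its algorithm, but a hypothetical sketch shows how a depth map plus an f-stop slider could drive such an effect: blur grows with distance from the focal plane and shrinks as the f-number rises. Function names and constants here are invented:

```python
import numpy as np
import cv2  # OpenCV

# Hypothetical sketch only -- not Apple's method. The depth map is
# discretized into layers; each layer gets a Gaussian blur whose
# radius scales with distance from the focal plane and inversely
# with the chosen f-number.

def apply_depth_slider(img, depth, focus_depth, f_number, max_radius=15):
    """img: uint8 HxWx3; depth: float HxW in [0, 1] (1 = far)."""
    out = img.copy()
    for level in np.linspace(0.0, 1.0, 8):        # discretize the depth map
        radius = max_radius * abs(level - focus_depth) / f_number
        k = 2 * int(radius) + 1                   # odd Gaussian kernel size
        blurred = cv2.GaussianBlur(img, (k, k), 0)
        layer = np.abs(depth - level) < 0.125     # pixels in this depth slice
        out[layer] = blurred[layer]
    return out

# Dragging the slider from F16 toward F1.4 raises every layer's blur
# radius, mimicking a wider aperture after the shot has been taken.
```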

Apple also says it has analyzed the optical characteristics of some "high-end lenses" and tried to mimic their bokeh. For instance, the simulated blur should produce circular discs at the center of the image but develop a 'cat's-eye' look as you approach the edges. The company says a future update will bring that control to the Camera app for a real-time preview of the effect.

Portrait mode is still no substitute for optics and good glass. Sometimes objects appear in the foreground mask – note the coffee cup over the shoulder at left in the following image – and occasionally the processor just gets confused, blurring the horizontal lines of the girl’s shirt in the next example. But overall, you can see progress being made toward better computational results.

Flare and a Raw Footnote

One thing I noticed with my iPhone XS is that it produced more noticeable lens flare when catching direct light from the sun or bright sources such as playing-field lights, as in the following examples; notice the blue dot pattern in the foreground of the night image.

Since I wanted to focus on the Smart HDR and Portrait mode features for this look, I haven't shot many Raw photos using third-party apps such as Halide or Manual (the built-in Camera app does not offer Raw capture). Sebastiaan de With, the developer of Halide, determined that in order to make faster captures, the camera shoots at higher ISOs and then de-noises the results in software. With Raw photos, however, that results in originals that aren't as good as those from the iPhone X, because they're noisier and exposed brighter. You can read more at the Halide blog: iPhone XS: Why It's a Whole New Camera.

Overall, though, the camera systems in the iPhone XS and iPhone XS Max turn out to be a larger improvement than they initially seemed, especially for the majority of iPhone owners who want to take good photos without fuss. Apple's computational photography advancements in these models deliver great results most of the time, and point toward further improvements in the future.

iPhone XS sample gallery

Articles: Digital Photography Review (dpreview.com)

 

 

Panasonic Lumix DMC-GX8 makes leap to 20MP

16 Jul

Panasonic has unveiled the Lumix DMC-GX8 and, with it, some new advancements for the Micro Four Thirds format. For the first time, an MFT body offers a 20.3MP sensor. AF sees notable improvements thanks to Depth from Defocus technology, a new Dual I.S. system uses stabilization from the lens and camera body simultaneously, and 4K video is included.

Articles: Digital Photography Review (dpreview.com)

 

 

How to Charge for Your Work: Making the Leap from “Favor” to “Job”

04 Feb

So you've got a passion for photography, a slew of great photos that show you've really got talent, and the desire to transition your photography skills from something that has been strictly a hobby into something that will allow you to earn money. What happens next? If you're like most folks, you may start by offering to take photos of…

The post How to Charge for Your Work: Making the Leap from “Favor” to “Job” appeared first on Photodoto.


Photodoto

 

 

Mike Kelley’s Leap of Faith

13 Nov

It's a truism that creative growth is nonlinear.

Which is to say that, while we (hopefully) do improve steadily over time, meaningful growth happens in fits and starts. You have an experience of some sort, and after you come out of it you realize you will never be the same photographer again.

Now, while you certainly can wait for someone to hand you that experience on a platter, doing so is putting the ball in someone else's hands. Which is fine if you are both patient and lucky.

Or, you can do what architectural photographer Mike Kelley did, and decide to make it happen on your own.


Strobist

 