
Posts Tagged ‘Difference’

Roger Cicala: the difference between sample variation and a ‘bad copy’ (Part 2)

13 Nov
I compare a lot of lenses. They aren’t all exactly the same.

In today’s article we’ll look at variation versus bad copies a bit differently than last time. Plus, I’ll explain how people get three ‘bad copies’ of a lens in a row.

Variation versus bad copy frequency

Imatest-type graphs are easier to visualize, so I’m going to use those today. These graphs allow us to visualize center resolution (toward the top on the y-axis of the graph) and overall resolution (toward the right on the x-axis), with individual lenses plotted as dots. Don’t worry about the numbers on the x and y axes; all you need to know is that the sharpest lenses are plotted up and to the right, and the softest are lower and to the left.
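(If you like to tinker, here’s a minimal Python sketch of how a scatter plot like this could be drawn. The numbers are invented purely for illustration; they are not Lensrentals data.)

```python
# Hypothetical illustration of an Imatest-style scatter plot: each dot is one
# copy of a lens, with overall sharpness on x and center sharpness on y.
# All values below are made up for illustration only.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

def fake_copies(n, center, spread):
    """Generate n made-up (overall, center) sharpness scores for one lens model."""
    return rng.normal(loc=center, scale=spread, size=(n, 2))

green = fake_copies(20, center=900, spread=15)  # 'pro-grade' prime: tight cluster
red = fake_copies(20, center=890, spread=40)    # 'consumer' prime: more spread

plt.scatter(green[:, 0], green[:, 1], c="green", label="Green lens copies")
plt.scatter(red[:, 0], red[:, 1], c="red", label="Red lens copies")
plt.xlabel("Overall sharpness")
plt.ylabel("Center sharpness")
plt.legend()
plt.show()
```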

The graph below shows plots from multiple copies of two prime lenses. Let’s call them ‘Red’ and ‘Green’. The Green lens is a fairly expensive, pro-grade optic. The Red lens is a cheaper, consumer-level prime. You’ll see that there’s one copy of each in roughly the middle of this graph, away from the main cluster at upper-right. I’d return both of these samples to the manufacturer. So would you – they’re awful.

Multiple copies of two lenses, the ‘Red’ lens and the ‘Green’ lens, plotted by center and overall sharpness. Two bad copies of each are obvious at the lower left.

But could you tell the difference between the best and the worst of the other copies, in that big cluster at upper-right? That would depend on the resolution of your camera, how carefully you pixel-peeped, which lens we are talking about, and honestly, how much you cared.

The Green lens shows less variation, which is about what we expect (but don’t always get) from a fairly expensive, high-quality lens. A perfectionist with a high resolution camera, some testing skill and enough time could tell the top third from the bottom third, but it would take effort.

The Red lens has more variation, which is typical for a consumer-grade lens. A reasonably picky photographer could tell the difference between the top third and the bottom third. None of the bottom third are awful; they’re a little fuzzier, a little more tilted, not quite as good when viewed at 100% magnification, and you might see issues if you made a large print.

With more variation, you get more ‘not as good’ lenses, but they’re still not ‘bad copies’

If you look carefully, though, the top third of the Green and Red samples are about the same. With more variation, you get more ‘not as good’ lenses, but they’re still clearly not ‘bad copies’; they’re just ‘not quite as good’ copies.

So why would we argue about these two lenses on the Internet? Because based on a graph like this, a lot of testing sites might say “Red is as good as Green and costs a lot less.” The truth is simply that the Red lens has more variation. Sure – a good copy of the Red lens might match a good copy of the Green lens. But you’re not guaranteed to get one.

A word about that yellow line and worse variation

There’s obviously a point at which large variation means the lower end of the ‘acceptable group’ is unacceptable. Where that line lies is, of course, arbitrary, so I put an arbitrary yellow line in the graph above to illustrate the point. Where the yellow line is for you depends on your expectations and your requirements.

The Subjective Quality Factor can theoretically decide when the low end of variation is not OK, and it can be used as a guide to where to place the yellow line. The key words, though, are ‘subjective quality’. Things like print size, camera resolution, even subject matter are variables when it comes to deciding when SQF is not OK. For example, the SQF needed for online display or 4K video is a lot lower than for a 24″ print of a detailed landscape taken with a 40 megapixel camera.

Every one of us has our own SQF; call it your PQF (Personal Quality Factor) and your yellow line might be higher or lower than the one in the graph above. Manufacturers have a Manufacturer’s Quality Factor (MQF) for each of their lenses, which is the famous ‘in spec’.

When your PQF is higher than the MQF, those lower lenses are not OK for you. They might be fine for someone else. Wherever a person’s yellow line is, that’s their demarcation line. These days, if they get a lens below the line, they go on an Internet rant. So now, as promised, I have explained the cause of 8.2% of Ranting On Online Forums (ROOFing). It’s the difference between MQF and PQF.

Put another way, it’s the difference between expectations and reality.

If you test a set of $5,000 lenses carefully enough, you may find some differences in image quality. The technical term for this phenomenon is ‘reality’.

It should be pretty obvious that people could screen three or four copies of the Red lens and end up with a copy that’s as good as any Green lens. I don’t find it worth my time, but I’m not judging; testing lenses is what I do.

Unfortunately, though, people don’t post online “I was willing to spend a lot of time to save some money, so I spent 20 hours comparing three copies and got a really good Red lens.” They say “I went through three bad copies before I got a good one.”

The frequency of bad copies and variation

Just so we get it out of the way, the actual, genuine ‘bad copy’ rate is way lower than I showed in the graph above. For high-quality lenses it’s about 1% out of the box. This explains why I roll my eyes every time I hear “I’ve owned 14 Wonderbar lenses and they’re all perfect.” Statistics suggest you’d need to buy over 50 lenses to have even odds of getting a single bad one. The worst lenses we’ve ever seen have a bad copy rate of maybe 3%, so even then, the chances are good you wouldn’t get a bad one out of 14.
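(For the curious, here’s a quick back-of-the-envelope check of those odds, assuming each copy is independently bad at the quoted rate – that independence assumption is mine, used only for illustration.)

```python
# Chance of seeing at least one genuinely bad copy among n lenses, assuming
# each copy is independently bad with a fixed rate (illustrative assumption).
def p_at_least_one_bad(rate, n):
    return 1 - (1 - rate) ** n

for rate in (0.01, 0.03):
    print(f"rate {rate:.0%}: P(at least 1 bad in 14) = {p_at_least_one_bad(rate, 14):.1%}")
# At a 1% rate, 14 perfect copies in a row is the expected outcome (~87% chance of
# no bad copy); even at 3% you would still dodge a bad copy about 65% of the time.
```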

Most of these ‘those lenses suck / I’ve never had a bad copy’ arguments are just a different way of saying ‘I have different standards than you’

What about the forum warrior ROOFing about getting several bad copies in a row? He’s probably screening his way through sample variation looking for a better than average copy. If he exchanges it, there’s a good chance he won’t get a better one, but after two or three, he’ll get a good one. So he’s really saying “I had to try three copies to find one that was better than average.” Or close to average. Something like that.

Semantics are important. Most of these “those lenses suck / I’ve never had a bad copy” arguments are just a different way of saying “I have different standards than you”. I get asked all the time: what happens to the two lenses John Doe returned when he kept the third? Well, they got re-sold, and the new owners are probably happy with them.

Why are there actual bad copies?

In short – inadequate testing. Most photographers greatly overestimate the amount and quality of testing that’s actually done at the factory, particularly at the end of the assembly line.

Many companies use a test target of thick bars to set AF and give a cursory pass-fail evaluation. A target of thick bars is low resolution – roughly equivalent to 10 lp/mm on an MTF bench. Some use a 20 lp/mm target to test, and 20 is higher than 10, so that’s good. The trouble is that most modern sensors with a good lens can easily resolve 50 lp/mm. This is what I mean when I say (as I do often) that you and your camera are testing to a higher standard than most manufacturers.

Why is there high variation?

Usually, it’s the manufacturer’s choice, and usually for cost reasons. Occasionally it’s because the manufacturer is living on the cutting edge of technology. I know of a couple of cases where a lens had high variation because the manufacturer wanted it to be spectacularly good. They designed in tolerances that turned out to be too tight to produce in practice, but convinced themselves they could do it. Lenses like this tend to deliver amazing test results, but then attract a whole lot of complaints from some owners and a whole lot of love from others.

What’s that? You want some examples?

This is not the bookcase mentioned below; that one is under nondisclosure. This is my bookcase. My bookcase has better optical books.

Service center testing

Years ago, we had in our possession a $4,000 lens that was simply optically bad. It went to the service center twice with no improvement. Finally, the manufacturer insisted I send ‘my’ camera overseas with it for adjustment. The lens and camera came back six weeks later. The lens was no better, but the camera contained a memory card with 27 pictures on it. Those pictures were of a bookshelf full of books, each image slightly different, taken as the technician optically adjusted the lens.

This, my friends, is why we decided to start adjusting lenses ourselves. And yes – after offering to share those bookshelf images – I was eventually sent a replacement lens.

Non-adjustable lenses

Many lenses have no optical adjustments. They’re assembled, and then what you get is what you get. If in-factory QC detects a really bad one, it might be disassembled and the parts reused, in the hope that random reassortment gives a better result next time. Or it may just get thrown away; the cost of disassembling and reassembling may be greater than the saved parts.

A common type of non-adjustable lens is the stacked lens: ‘element – spacer – element – spacer, etc.’ with a front and rear retaining ring holding everything together. The usual method of correcting one is to loosen the retaining rings, bang the lens on a table a few times, and tighten it back up. That probably sounds ridiculously crude, but it sometimes works.

Many fully manual lenses (not those made by Zeiss or Leica) are non-adjustable, as are some less expensive manufacturer and third-party lenses.

Minimally-adjustable lenses

A number of prime lenses have only one or two adjustable elements. This is not necessarily a bad thing; adjusting one or two elements is a lot easier than adjusting six, so the technician is more likely to get things right.

One of my favorite lenses, both to shoot with and to adjust, is the venerable Zeiss 21mm F2.8 Distagon / Milvus. The front element of this lens is adjustable for centering, and we’ve done hundreds of these adjustments over the years. The fun part is that doing this adjustment lets you choose what type of lens you want: you can have razor-sharp in the center with soft corners, or you can let the center be a little softer and the corners much sharper. It’s a great example of adjustment being a trade-off, even for relatively simple adjustments.

MTF graphs of a Zeiss 21mm F2.8 Distagon, adjusted for best center sharpness (above), and optimal edge sharpness (below).

Consumer-grade zoom lenses (manufacturer or third-party) and prime lenses with apertures smaller than F1.4 tend to be minimally or non-adjustable. A fair number of better zooms and primes are minimally adjustable, too.

Lenses with many adjustable elements

More adjustments mean less variation, at least in theory. They also mean, however, that when something is wrong it’s far more complex and time-consuming to get the adjustments right. Time, as they say, is money, and complex lenses can be rather hard to adjust.

I think the most we’ve seen is nine adjustable elements. These are usually top-of-the-line zooms, but we’ve seen six adjustable elements in some top-end primes. That’s something we never saw even five or six years ago.

So, what’s the key takeaway?

Let’s start with my definitions. A bad copy of a lens has one or more elements so out of adjustment that its images are obviously bad at a glance. Such a lens (assuming it is optically adjustable) can usually be made as good as the rest.

Variance, on the other hand, means some lenses aren’t as good as others, usually as a result of a number of small imperfections. A simple optical adjustment isn’t likely to make them as good as average. All lenses have a little variance. Some have more. A few have a lot. How much is too much depends on the photographer who’s shooting with them.

The Canon 70-200mm F2.8 RF has (give or take one, I’m not certain I recall all of them) 8 or 9 different adjustable elements.

Reducing variation costs money. The reality is the manufacturers are doing what works for them (or at least they think they are). There is a place for $500 lenses with higher variation and good image quality, just like there’s a market for $2,000 lenses with better image quality and less variation.

Roger


Roger Cicala is the founder of Lensrentals.com. He started by writing about the history of photography a decade ago, but now mostly writes about the testing, construction and repair of lenses and cameras. He follows Josh Billings’ philosophy: “It’s better to know nothing than to know what ain’t so.”


Roger Cicala: the difference between sample variation and a ‘bad copy’ (Part 1)

03 Nov
We fix a lot of lenses, but not all lenses can be fixed.

With the next two posts, I hope to end the seventh most common forum war: the ‘lens variation is a big problem!’ vs ‘I don’t believe it exists!’ argument. Like a lot of forum wars, it comes down to semantics: variation and bad copies aren’t the same thing (actually they’re not really related at all), but people tend to use the terms interchangeably.

Even $2,000 lenses must have variation

Note that I said ‘must’. I didn’t say ‘might’ or ‘could’. I certainly didn’t say ‘shouldn’t at this price’. If you expect every copy of a lens to be perfect, then a dose of reality is in order – unreasonable expectations are a down payment on disappointment.

The key point is what amount of variation is acceptable.

Of course, I define ‘unacceptable’ by my standards. My standards are probably similar to 90% of your standards (and they’re higher than most manufacturers’ standards). A few of you will consider my standards either too low or too high. That’s reasonable. You and I might be looking at the same lens, but we’re doing different things with it, and probably doing them on different cameras. Later on, we’ll talk about the difference between ‘acceptable variation’ and a genuinely bad copy that I would consider unacceptable.

Why lenses must vary

Any manufactured part, from a washer on your kitchen faucet to a component in the Hubble telescope, has some variation. Generally (up to a certain point, limited by the state of the technology) you can lower the variation of a part if you are willing to pay more. Why? Because entirely new machines or manufacturing processes may be required, and all of that costs money.

But just ordering more units means you can save money, right? Well, yes – in very general terms, ordering larger quantities lowers per-unit costs, but only to a point. Doubling your order of something usually reduces the per-unit cost by some percentage, but certainly not by half. There is never a point where, if you order a large enough quantity of an item, you get it for free.

This is a 15 cm diameter, 1/10 wavelength optical flat, eyeglasses for scale.

As an example, we use optical flats to calibrate our test benches. The flats come in different accuracies: 1/4, 1/10, or 1/20 wavelength of flatness. All of those are very flat indeed, and those accuracies cost $800, $2,200, and $3,800 respectively. There is no quantity I could buy that would let me get the 1/20 wavelength plates for the 1/4 wavelength price. And I can’t get 1/40 wavelength of flatness at any price. The technology simply isn’t available.

What varies in a lens? Everything. The screws, helicoids, plates, and spacers vary. Every glass melt is very slightly different, giving elements a very slightly different refractive index. Lens grinding introduces variation, as does the coating process. Even the shims we use to adjust for variance vary. And shims don’t come in infinite thicknesses, so if your thinnest shim is 0.01mm, then ±0.01mm is your best attainable accuracy.

What can manufacturers do about this?

The first thing is tolerancing the design. Optical programs let the designers punch in various tolerances for parts, showing how a given variation will affect the overall performance of the lens. For the sake of argument, let’s say that one particular glass element is very critical and even a slight variation makes a big difference in how the lens resolves, while variation among other elements matters less. The manufacturer can pay to have that critical element made more accurately. They can also change the design to make the part less critical, but often only by sacrificing performance.

In addition, manufacturers can (notice I said ‘can’, not ‘always do’) place compensating elements in the lens, allowing for slight adjustments in tilt, spacing, and centering. Emphasis is on ‘compensating’, though: These adjustments compensate for the inevitable errors that accumulate in any manufactured device. They are not called ‘adjusted for absolute perfection’ elements.

The two most common types of lens adjustments: shims and eccentric collars.

Not all lenses are equally adjustable. Some modern lenses may have five to eight different adjustable elements. Many have two or three. A fair number have none at all; what you get is what you get. Here’s a thought experiment for you: imagine you’re an optical engineer and you’ve been tasked with making an inexpensive lens. Knowing that adjustable elements are an expensive thing to put in a lens, what would you do?

I want to emphasize that optical adjustments in a modern lens are not there so that the lens can be tweaked to perfection; the adjustments are compensatory. There are trade-offs. Imagine you’re a technician working on a lens. You can correct the tilt on this element, but maybe that messes up the spacing here. Correcting the spacing issue changes centering there. Correcting the centering messes up tilt again. Eventually, in this hypothetical case, after a lot of back-and-forth you would arrive at a combination of trade-offs; you made the tilt a lot better, but not perfect. That’s the best compromise you can get.

Because many people think of distributions as the classic ‘bell curve’ or ‘normal distribution’, let’s get that particular wrongness out of the way. If you evaluate a group of lenses for resolution and graph the results, it does NOT come out to be a normal distribution with a nice bell curve.

Frequency graph of two lenses. For those of you tired of reading already, this graph sums up the rest of the article. The black lens is going to have more variation than the green one. Neither the black nor the green graph is at zero at the softest end; bad copies happen to either one, but not frequently.

As common sense tells you it should be, lenses have a very skewed distribution. No lens is manufactured better than the perfection of the theoretical design. Most come out fairly close to this theoretical perfection, and some a little less close. Some lenses are fairly tightly grouped around the sharpest area, like the green curve in the graph above; others are more spread out, like the black one. The big takeaway is that you can’t say things like ‘95% of copies will be within 2 standard deviations of the mean.’

The Math of Variation

Don’t freak out; it’s not hard math and there’s no test. Plus, it has real-world implications: it will explain why there’s a difference between ‘expected variation – up to spec’ and ‘unacceptable copy – out of spec’.

There are several ways to look at the math, but the Root Sum Square method is the one I find easiest to understand: you square all the errors of whatever type you’re considering, add all the squares together, then take the square root of the total.

The total gives you an idea of how far off from the perfect, theoretical design a given lens is. Let’s use a simple example: a hypothetical lens with ten elements, where we’ll just look at the spacing error of each element in microns. (If you want to skip the math, the summary is a couple of paragraphs down.)

If we say each element has a 2 micron spacing variation, then the formula is √(10 × 2²) = 6.32. If I make a sloppier lens, say each element varies by 3 microns, then √(10 × 3²) = 9.49. Nothing dramatic here; looser control of variation makes a higher root sum square.

The important thing happens when everything isn’t smooth and even. Instead of all ten elements being off by 2 microns, let’s keep nine elements at 2 microns and make one element off by 10 microns. I’ll do the math in two steps:

√((9 × 2²) + (1 × 10²)) = √(36 + 100) = √136 = 11.66

The summary is this: If you vary one element a lot you get a huge increase in root sum square. If you spread that same total variation over several elements, you get only a moderate increase in root sum square. That is basically the difference between a bad copy and higher variation.
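(If you’d rather check the arithmetic in code, this tiny Python snippet reproduces the three numbers above.)

```python
# Root sum square of element errors: square each error, add them up, take the root.
import math

def rss(errors_in_microns):
    return math.sqrt(sum(e * e for e in errors_in_microns))

print(rss([2] * 10))        # ten elements off by 2 microns        -> 6.32
print(rss([3] * 10))        # ten elements off by 3 microns        -> 9.49
print(rss([2] * 9 + [10]))  # nine at 2 microns, one at 10 microns -> 11.66
```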

If you have just one really bad element the performance of the lens goes all to hell

The math reflects what we see in the real world. If you let all the elements in a lens vary a little bit, some copies are a little softer than others. Pixel peepers might tell, but most people won’t care. But if you have one really bad element (it can be more than one, but one is enough) the performance of the lens goes all to hell and you’re looking at a bad copy that nobody wants.

More real world: if one element is way out of whack, we can usually find it and fix it. If ten elements are a little bit out, not so much. In fact, trying to make it better usually makes it worse. (I know this from a lot of painful experience.)

What does this look like in the lab?

If you want to look at what I do when I set standards, here are the MTF graphs of multiple copies of two different 35mm F1.4 lenses. The dotted lines show the mean of all the samples; these are the numbers I give you when I publish the MTF of a lens. The colored area shows the range of acceptability. If the actual MTF of a lens falls within that range, it meets my standards.

Mean (lines) and range (area) for two 35mm lenses. The mean is pretty similar, but the lens on the right has more variation.

For those of you who noted the number of samples: 15 samples means 60 test runs, since each lens is tested at four rotations. The calculations for the variation range include how much a single lens varies within itself (how different the upper-right quadrant is from the lower-left, etc.), how much lenses vary from one another, and some other things that are beyond the scope of this article.

So, in my lab, once we have these numbers, we test every lens against them. If it falls in the expected range, it meets our standards. The range is variation; it’s basically inevitable across multiple copies of that lens. You can tell me I should only keep the ones that are above average if you want. Think about that for a bit before you say it in the comments, though.
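(As a schematic of that pass/fail idea only: the snippet below checks whether a copy’s measured MTF stays inside an acceptance band. The band and the sample measurements are invented numbers; this is not Lensrentals’ actual test software.)

```python
# Schematic pass/fail check: does a copy's measured MTF stay above the lower edge
# of an acceptance band built from many previous samples? Numbers are invented.
ACCEPTANCE_BAND = {
    # spatial frequency (lp/mm): (lowest acceptable MTF, highest plausible MTF)
    10: (0.88, 0.98),
    20: (0.78, 0.94),
    30: (0.66, 0.88),
    40: (0.52, 0.80),
    50: (0.40, 0.72),
}

def meets_standards(measured_mtf):
    """measured_mtf maps frequency -> MTF; in practice only the lower edge fails a lens."""
    return all(measured_mtf[f] >= low for f, (low, _high) in ACCEPTANCE_BAND.items())

good_copy = {10: 0.95, 20: 0.90, 30: 0.80, 40: 0.68, 50: 0.55}
bad_copy = {10: 0.90, 20: 0.70, 30: 0.50, 40: 0.35, 50: 0.22}  # way out, not even close
print(meets_standards(good_copy))  # True
print(meets_standards(bad_copy))   # False
```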

The math suggests a bad copy, one with something really out of whack, doesn’t fall in the range. That’s correct and usually it’s not even close. When a lens doesn’t make it, it REALLY doesn’t make it.

A copy that obviously doesn’t meet standards. The vast majority of the time, one of these can be adjusted to return to expected range.

We took that copy above, optically adjusted it, and afterwards it was right back in the expected range. So an out-of-spec copy can be fixed and brought back into range; we do that several times every day.

But we can’t optically adjust a lens that’s in the lower 1/3 of the range and put it into the upper 1/3, at least not often. Trust me, we’ve tried. That makes sense; if one thing is way out of line we can put it back. If a dozen things are a tiny bit out of line, well, not so much.

I know what you’re thinking

You’re thinking, ‘Roger, you’re obviously geeking out on this stuff, but does it make one damned bit of difference to me, a real photographer who gives zero shirts about your lab stuff? I want to see something real world.’ OK, fine. Here you go.

A Nikon 70-200mm F2.8 VR II lens is a really good lens with very low (for a zoom) variation. But if you drop it just right, the 9th element can actually pop out of its molded plastic holder a tiny bit without causing any obvious external damage. It doesn’t happen very often, but when it does, it always pops out about 0.5mm, which, in optical terms, is a huge amount. This is the ‘one bad element’ scenario outlined in our mathematical experiment earlier.

Below are images of the element popped out (left) and popped back in (right) and below each image is the picture taken by the lens in that condition. Any questions?

On top you see the 9th element ‘popped out’ (left) and replaced (right). Below each is the picture of a test chart made with the lens in that condition.

So, what did we learn today?

We learned that variation among lenses is not the same thing as ‘good’ and ‘bad’ copies. Some of you who’ve read my stuff for a long time might remember I used to put out a Variation Number on those graphs, but I stopped doing that years ago, because people kept assuming that the higher the variation, the higher their chances were of getting a bad copy, which isn’t true. You see, bad copies are – well, bad. Variation just causes slight differences.

I’m going to do a part II that will go into detail with examples about how much you should expect lenses to vary, what the difference is between variation and a genuinely bad copy, and why some people act like jerks on forums. Well, maybe just the first two.

As a bonus, I will tell you the horrifying story of how manufacturers optically adjust a lens that’s really not optically adjustable. And for a double bonus I will show how variation means that there are actually two versions of the classic Zeiss 21mm F2.8 Distagon.

In other words, if you struggled through this article, hopefully the next one will be enough fun that you think it’s worth it. Delayed gratification and all that…

Roger


Roger Cicala is the founder of Lensrentals.com. He started by writing about the history of photography a decade ago, but now mostly writes about the testing, construction and repair of lenses and cameras. He follows Josh Billings’ philosophy: “It’s better to know nothing than to know what ain’t so.”


Canon EOS R5 vs. R6: What’s the difference?

13 Jul

Introduction

Canon’s EOS R5 and R6 are designed to act as mirrorless versions of the hugely popular EOS 5D and EOS 6D series of DSLRs. The relationship is very similar, with the R5 offering a list of capabilities that will appeal to a variety of professional as well as enthusiast photographers.

But the R6 offers a strong feature set in its own right. We thought it’d be helpful to delve into the differences, to see just what you gain and give up by choosing between the two.

Sensors

The most obvious difference between the two cameras is resolution. The R5 is based around a new 45 megapixel sensor, meaning it offers more than enough resolution for all but the most demanding of tasks. We’ve not had a chance to test the sensor fully yet, but there’s no question that it delivers in terms of detail. We’ll know more about things like dynamic range once we have full Raw support, but recent generations of Canon sensors have done well in this regard.

The R6, meanwhile, is based around a 20MP sensor, said to be closely-related to the one in the 1D X III. It’s a chip we saw perform well when we tested that camera, and we suspect Canon chose to use it here to let the R6 keep up with the R5’s burst shooting speeds. 20MP is sufficient for a wide range of photography, but it may be a deciding factor if you shoot for print publication or demanding clients.

Both cameras include anti-aliasing filters. These have somewhat fallen out of favor as pixel counts have risen and the need for them has been reduced, but Canon still clearly believes they have a valuable role to play – whether that’s down to faith in the sharpness of the new RF lens designs, or experience with providing tools to wedding photographers who can’t risk moiré creeping into critical images.

In-body image stabilization

Despite the price difference, both cameras are said to have the same image stabilization system, rated at up to 8EV, depending on the lens it’s used with.

It’s a five-axis sensor-shift system that works collaboratively with the IS system in RF lenses. Canon says the unprecedentedly high rating is achieved by the in-body and in-lens systems constantly communicating with one another.

Canon hasn’t commented on (and we’ve not yet had time to test) how well the IS system works with EF lenses, which don’t have the greater communication bandwidth of the RF mount.

Continuous shooting

Another specification common to the two cameras is their continuous shooting rate. Both cameras can shoot at 12 fps with their mechanical shutters or 20 fps in e-shutter modes.

The R5’s higher pixel count makes this harder to maintain. At 20 fps, it’ll write at least 84 Raw + Large/Fine JPEG files to a CFexpress card, a number that increases to 170 shots if you just shoot JPEG. In 12 fps mode you’ll get 90 Raw + Large/Fine HEIFs, 160 Raws with JPEGs and 180 Raw files. Moving to C-Raw boosts most of these numbers by around 50%. HEIF and JPEG figures are similar whether you use CFexpress or a UHS-II card, but Raw shooting definitely benefits from the faster card format.

The R6’s most limited burst with a fast UHS-II card is 140 Raw + Large/Fine HEIF files. Move to Raw+JPEG and it increases to 160 shots in a burst, or 240 if you just shoot Raw. If you move across to C-Raw the numbers more than double, and shooting C-Raw, HEIF or JPEG only will see you get over 1000 shots in a go. Not shabby for the more basic model.

Viewfinder and screens

One area in which Canon has decided to differentiate between the two models is display resolution. The R5, commensurate with its higher price tag, has the latest 5.76M dot OLED EVF, paired with a 2.1M dot rear LCD. The R6 has a 3.68M dot EVF and slightly smaller 3.0″ 1.68M dot LCD.

However, it’s noticeable that both use the same viewfinder optics to give a solid 0.76x magnification and 23mm eye-point. And both viewfinders can be run at 120Hz for a more OVF-like shooting experience.

We’ve seen some criticism of this decision, but the 3.68M dot panel in the R6 is still very good. It’s comparable to the one in the Nikon Z6 and higher resolution than the viewfinder in the Sony a7 III. Only the Panasonic S1 (which shares the R6’s launch price) gets a 5.76M dot display in this class, so it’s up to you to decide whether foregoing the best available finder is a fair trade-off for the areas in which the R6 out-specs the Panasonic.

Video

Video is one of the biggest areas of difference. The R5’s sensor is designed to shoot 8K video at up to 30p (though this can also be output as perfectly 2:1 oversampled 4K footage, if that makes more sense for your workflow). It also includes the option to record Raw video internally, which means capturing at 8K to avoid any need for sub-sampling or cropping. It can also shoot 4K/120p from the full width of the sensor, but this doesn’t use all the available pixels, so is likely to be less detailed.

The R6, meanwhile, shoots 4K footage at up to 60p. It uses what is effectively a 16:9 crop from what would be a full-width DCI capture, which means it’s slightly cropped-in (it’s a 1.07x crop). But again it benefits from the same impressive stabilization capabilities.

Canon hasn’t withheld any video tools from R6 users: both cameras have headphone and mic sockets and offer both focus peaking and zebra exposure indicators. Like the R5, the R6 can capture C-Log or HDR PQ video as 10-bit 4:2:2 H.265 files and has view assist modes for both.

There are major differences to the video-shooting experience, though: the R5 offers a full range of video exposure modes, including Shutter Priority, Aperture Priority and Custom modes, whereas the R6 only shoots in Program or full Manual mode. That said, the R6 does let you use Auto ISO in manual mode and lets you adjust the aperture in 1/8th EV steps, so you can get a decent degree of control or automation, if you need it.

Beyond the resolution differences and Raw capture, Canon has clearly decided that the wider DCI aspect ratio and All-I encoding are higher-end requirements, so they’re only available on the R5.

Body

The bodies look similar at first glance but the differences stack up the closer you look at them.

The most visible difference is that the R5 has a top-plate settings display, whereas the R6 has a conventional exposure mode dial. The R5 has a full-size ‘N3’ three-pin screw-in remote release socket on its front plate, whereas the R6 has a simpler ‘E3’ three-pole 2.5mm headphone-style connector as one of the ports on the camera’s left flank. On the R5 that space is taken up with a flash sync port.

The construction of the two cameras is different, too. Both have primarily metal construction with a polycarbonate rear plate, but the components themselves are different. The R5’s body is slightly more angular in places and the camera as a whole is heavier than the R6. Canon says the R5 is sealed in a way that’s up to the standards of the 5D series of DSLRs, while the R6’s weather-proofing is a match for the 6D cameras.

Autofocus

Autofocus is another area in which the two cameras are essentially matched. Both have the latest iteration of Canon’s Dual Pixel AF system, with 100% coverage both horizontally and vertically across the frame.

Both use AF systems that have been trained by machine learning. This provides the subject recognition capabilities that underpin their Human and Animal detection modes. The snappily titled ‘EOS iTR AF X’ system can detect human eyes, faces and heads, and the eyes, faces and bodies of animals including cats, dogs and birds. You can tell the camera whether to prioritize focus on humans or animals (or show no preference) and it will maintain focus on the subject even if a person looks away, switching from body-AF to eye-AF as an animal gets closer.

The R6’s AF is rated as working in light as low as -6.5EV when used with an F1.2 lens, or -5EV in video mode. The R5 is rated down to -6EV in stills and -4EV in video, again with an F1.2 lens attached.

Battery life

The different internals have an impact on the cameras’ respective battery life figures. Both share the latest 16Wh LP-E6NH battery and can use older LP-E6-type batteries if you have them.

The R5 is rated at 320 shots per charge through the viewfinder and 490 shots using the LCD, in default mode. Shifting to the higher refresh rate mode sees these drop by around 30% to 220 and 320 shots, respectively.

The R6 posts slightly better results: 380 shots per charge using the viewfinder in standard mode and 510 via the LCD. Again there’s about a 30% reduction if you engage the faster viewfinder mode, with the endurance dropping to 250 and 350 shots per charge for the EVF and LCD.

Both cameras can be recharged if you have a high-current USB-C charger or power bank.

Cards

The R5’s 8K video and 45MP stills produce a lot more data than the R6, so Canon has equipped the camera with a CFexpress slot, in addition to a UHS-II SD card slot. As we’ve seen, the SD card slot can’t clear the buffer as fast during burst shooting, and can only record IPB-encoded 8K video, so it’s worth buying some CFexpress cards if you need to make full use of the R5’s capabilities.

The R6’s lower pixel count means a fast UHS-II card is sufficient for both stills and video. The use of the SD format not only means you’re more likely to already own some compatible memory cards, but also that you only need to carry a single card type if you ever expect to fill both slots.

Wi-Fi

Both cameras have built-in Wi-Fi for transferring video and stills, either to a smart device, a computer or even over FTP. The R5 has both 2.4 and 5GHz radios, while the R6 is only compatible with the slower (and often more congested) 2.4GHz networks.

Both cameras let you separately select which files to transfer when you’re shooting Raw + JPEG and Raw + HEIF, so you can set it to upload just the JPEGs for standard DR images but upload the Raws for when you’re shooting HDR PQ HEIFs, for instance.

As well as the difference in frequency bands the cameras can communicate over, it’s also only the R5 that can be used with the WFT-R10A wireless grip accessory. This adds more powerful Wi-Fi transmission and has an Ethernet connection for dependable fast file transfer.

Dual Pixel Raw

One notable R5-only feature is Dual Pixel Raw. This separately retains data from both halves of the pixel, meaning that it’s possible to reconstruct some depth information about the scene, even after the photo has been taken.

This opens up various processing options, both in-camera and when using Canon’s Digital Photo Professional software. DPP already offers a focus-shift feature, but the new camera adds an in-camera ‘Portrait Relighting’ option, which selectively brightens parts of the image based on depth and face recognition data. There’s also a Background Clarity option that we haven’t yet had a chance to use.

Price

And there you have it. Other than the resolution differences, the R6 has a lot in common with the more expensive R5. Of course, there’s a price to be paid for the R5’s extras: specifically a recommended retail price of $3899, compared to the R6’s $2499.

Which camera interests you more is likely to depend a lot on what kinds of photos you take and how you plan to use the video. But we hope we’ve teased out enough of the differences to help you spot any unexpected gaps or omissions you might otherwise have overlooked.

If you are about to reach for your credit card, there might be one more factor to consider: the R5 is available in late July, whereas you have to wait until late August for the R6.


Books that made a difference: Camera Lucida (Roland Barthes, 1980)

19 Apr

If you’ve never heard of Roland Barthes, congrats – clearly you were never forced to study structuralism, post-structuralism, deconstructionism or semiotics. Lucky you.

It was as a semiologist that Barthes (1915–1980) was best known, and in simple terms, semiotics is the study of signs, symbols and their meaning. For obvious reasons, academic texts that deal with semiotics (and structuralism, and post-structuralism, and deconstructionism) tend towards the abstruse. When the king of the deconstructionists, Jacques Derrida (of whose work ‘abstruse’ would count as a highly charitable description), passed away in 2004, satirical website The Onion ran a single-sentence headline: Jacques Derrida “dies”. That joke (and variations on it) is, trust me, the only funny thing that has ever come out of semiotics, structuralism, post-structuralism or deconstructionism. Reading the work of certain semiologists is like trying to argue with a hungry 3-year-old who has an MA.

The reason I’m writing about Roland Barthes on DPReview is that Barthes was fascinated by photography, and wrote one of my all-time favorite books about it – ‘Camera Lucida’, published in 1980. Photography didn’t attract much academic interest until the 1970s and 80s, and ‘Camera Lucida’, alongside Susan Sontag’s ‘On Photography’, is among the most influential (and enjoyable) books of its period to deal with photography as a cultural phenomenon, not just in the obvious way, as an art and practice. You do not need to know anything about philosophy to read ‘Camera Lucida’, and you might actually enjoy it more if you don’t.

Photography is an odd kind of art-form. You can’t ‘read’ a photograph like you can text (which is the kind of thing that annoys the hell out of semiologists), and being by its nature infinitely reproducible, a photograph doesn’t have the uniqueness of a painting. Consider also that to ‘make’ a photograph takes no training. In many circles, photography is still considered the poor cousin of ‘real’ art and it’s easy to understand why. Just remember Kodak’s famous slogan: “You push the button, we do the rest”.

As Louis Daguerre said, the photograph “gives Nature an ability to reproduce herself”

Barthes thought that photography is actually closer to theatre than to painting (because of its direct line of connection to life). He was not a photographer – “too impatient for that” – and had no interest in investigating photography as an activity. He wanted to get to grips with what photographs are and what makes them unique.

In perhaps his most famous statement on photography (made before he wrote ‘Camera Lucida’) he suggests that the photograph is a semiotically unique, paradoxical artifact – unique because it is a “message without a code”. It doesn’t need a code (or shouldn’t) because in theory, the message of a photograph is reality itself. This is the photograph as a purely representational artifact – the product of light rays, entering a camera from the surface of a tiny corner of reality. As Louis Daguerre said, the photograph “gives Nature an ability to reproduce herself”. And he ought to know.

That’s the theory, at least. The problem (the paradox) of course is that despite the fact that a photograph is a mechanically-created object, it’s very hard to imagine a photograph that isn’t highly coded. Everything from how a portrait subject is posed, to the photographer’s choice of background or camera angle, can affect how we feel about a photograph, and ultimately what we take away from it. It’s actually very difficult to conceive of an example of what Barthes calls the ‘brute image’: a hypothetical photograph free from any kind of connoted meaning.

One of a collection of images taken by a relative of my grandmother and grandfather on a honeymoon trip around England in late summer 1939 (you can read more about the project and see more images here).

Because of when they were taken (just weeks before the outbreak of WWII) and how (they were shot on then-rare color film) they’re all rich in what Barthes called ‘Studium’. For me, the ‘punctum’ in this shot is my grandparents’ cat (bottom of the photograph, in front of the tent, facing the camera) which – apparently – traveled with them.

In ‘Camera Lucida’, Barthes suggests that there are two elements to every photograph. Borrowing from Latin, he calls these the studium (‘study’ – think application or commitment) and the punctum (‘point’ – think puncture or prick).

In simple terms, the studium is all the information which can be gleaned from a photograph which derives from the cultural context in which it exists. As such, the studium is experienced according to the viewer’s personal, political and cultural viewpoint. A good example of a kind of photography which is rich in studium would be traditional western photojournalism. Assuming you’re familiar with the culture in which they were taken, such photographs are pretty easy to ‘decode’ when we see them in our daily newspapers. We know what they are ‘of’.

The punctum, on the other hand, is an element (or elements) of a photograph which don’t necessarily contribute to their overall meaning or intended message, but which grab or ‘prick’ us for some reason. Barthes gives the example of a 1924 photograph by Lewis Hine of a developmentally disabled child in a New Jersey institution, with a bandage on her finger. For Barthes, the ‘punctum’ is the bandage – an “off-center detail” which catches his attention and which provokes a “tiny shock”. The studium, in contrast, is “liking, not […] loving” – a “slippery, irresponsible interest one takes in [things] one finds ‘all right’ “. The bandage has nothing to do with the studium of the Hine photograph, but it interests him more.

Most of us take pictures of places, people and things, without spending a lot of time thinking about their content beyond whether it appeals to us aesthetically

This might all sound very abstract, but it’s actually a really useful way of thinking about how we take photographs. Try categorizing your own work by Barthes’ definitions. Are you someone whose photography is all about the studium? I suspect that most of us are. Most of us take pictures of places, people and things, without spending a lot of time thinking about their content beyond whether it appeals to us aesthetically. We can learn from photographs like this, but it’s generally (literally) surface-level stuff.

The punctum is more valuable, says Barthes, because it’s unexpected. Uncoded, and more interesting. And to return to the comparison with painting, a punctum of the kind that Barthes describes could only exist in a photograph, because of the unique way in which photographs are created.

By the time I was able to really know my grandparents they were old (and my grandfather died when I was in my early teens). For me, working on these images offered an amazing opportunity to encounter them as young people. In Barthes’ words, I was “gradually moving back in time” with these people, both of whom are now dead.

Thanks to a DPReview reader, I even know what happened to the car.

Even in translation, Barthes is a great writer. He’s smart (obviously) but also funny. He’s wonderfully catty about types of photographs and photographers that he doesn’t like, and he correctly identifies one of the most creatively destructive traps that you can fall into as a photographer: thinking that just because you took a picture of something, it must be important. Ouch.

To me, the main appeal of ‘Camera Lucida’ is that it’s much more than just an academic dissertation – it’s a deeply personal, very emotional book. Less philosophy in many places, and more biography.

The latter part of the book, especially, contains some quite beautiful writing. This is highly unusual in a work of philosophy (trust me). Perhaps the reason for the switch to a less academic and more personal mode of writing is that while he was working on ‘Camera Lucida’, Barthes’ beloved mother Henriette died. And after she died he went looking for her. Not literally, but emotionally, hoping to find the essence of her in family photographs.

He talks about this process in terms of a “painful labor”, “gradually moving back in time with her, looking for the truth of the face I had loved”. He describes “straining towards the essence of her identity, […] struggling among images partially true, and therefore totally false”. What he was finding in the photographs, to his frustration, were merely “fragments”.

And then, finally, he made a breakthrough. He found what he was looking for in a single photograph of his mother as a young girl. Among a mass of pictures of Henriette as an adult, it was in this photograph of a five-year-old child – a child, of course, who he never met in life – that he truly recognised the person he had known and loved.

Barthes doesn’t exactly admit defeat in ‘Camera Lucida’, but he does concede that maybe things are a little more complicated than he once thought.

In the final chapters of ‘Camera Lucida’ (it’s a very short book, most chapters are little more than a single page) Barthes revisits his central premise of the studium and the punctum, and revises it, suggesting a third element. Specifically, another type of punctum, not of form, “but of intensity”. This second punctum is Time.

In ‘Camera Lucida’, Barthes the famous philosopher gives way to Barthes the grieving son. Yes, much of the first half of the book is more or less standard fare for someone with his academic preoccupations (and indeed it picks up from his earlier work on the same subject, exploring the photograph’s potential as a purely representational object) but he’s not just flexing his intellectual muscles for the sake of it. Barthes is writing about time (he has a wonderful description of cameras as ‘clocks for seeing’), memory, and death. When it comes to the ultimate challenge of ‘penetrating’ photographs to find their meaning, Barthes doesn’t exactly admit defeat in ‘Camera Lucida’, but he does concede that maybe things are a little more complicated than he once thought.

A girl bathing by Stiffkey bridge, in Norfolk. August 1939. Looking at this picture I can’t help wondering who she is, what kind of life she had, and whether she’s still alive (if so, she must be in her late 80s or 90s now).

‘Camera Lucida’ may not make you a better photographer (it might actually make you pause before picking up your camera again!) but it will probably make you a more thoughtful one. There is a reasonably good chance, too, that it will make you cry. There’s a lot of post-war Continental philosophy that might have the same effect, but for very different reasons.

I hope that after reading my incredibly shallow analysis of it, you do read ‘Camera Lucida’. And if you do, I hope that it will remind you of the unique role that photography has in our lives, and of its power. Photographs let us travel back in time, and in that way they enable us to maintain relationships with people that we’ve lost. In the end, it’s a book about love.


Is there a particular book which made a difference to your life as a photographer? We’d love to hear from you – and you might even get featured on the DPReview homepage. Leave us a short note in the comments and if you have a longer story to tell, send it to us, and we’ll take it from there.


Street Portraits vs Street Photography: What is the Difference?

10 Feb

The post Street Portraits vs Street Photography: What is the Difference? appeared first on Digital Photography School. It was authored by Simon Bond.


One of the most popular and accessible forms of photography you can practice is street photography. In this article, you’ll learn about one of the key questions that gets asked in this area of photography: when do you ask a person’s permission to take their photo? The answer to that question divides street portraits from street photography.

Read on to find out about both areas of street photography, and how they relate to each other.

What is street photography?

This can be a difficult area of photography to define because street portraits can easily be mistaken for street photography.

It’s also true that it’s possible to practice street photography and still have permission from your subjects.

So what sets this area of photography apart?

Well, the simple answer is that street photos should be natural and not staged. So what does it take to get a good street photo?

You can even use a fisheye lens for street photography. People’s faces aren’t really shown here, yet there is a story.

The equipment

The camera body you use is most important for street photography in low-light situations, where you’ll want to use a higher ISO.

The choice between DSLRs or mirrorless cameras is a personal one. However, the smaller size of mirrorless cameras is an advantage.

You really want to keep to one lens, so you can keep things lightweight while you’re on the move. There is an understandable desire to use different focal lengths, though, so consider returning to the same location twice, and with different lenses.

So which is the ideal lens for street photography?

A lens of 135mm means you need to stand quite far away to include the context in your scene.

  • 50mm – This is many street photographers’ lens of choice. That’s because it has a similar field of view to a person’s eye. That field of view is also wide enough to give your scene context, and you have a large enough aperture with a prime lens to photograph in low light. Keep in mind the crop factor for DSLR cameras that have a crop sensor, as it will change the effective focal length of your 50mm lens.
  • Wide-angle – Then there are those photographers who like to have even more stories in their scene and will look to use even wider lenses. That might even mean a wide-angle zoom lens. You’ll now be getting very close to the people you photograph, making it harder to avoid them noticing you.
  • Telephoto – On the other end of the spectrum are those who prefer to photograph from a distance. This allows you to photograph the scene without the chance of people posing, as they’re much less likely to see you. On the other hand, you’ll compress the scene. If you don’t stand even further back, you won’t show very much context in your photo.
Markets make great locations for street photography. They are even better at night, when there is more atmosphere.

The location

Street photography is the exploration of your urban environment, so it needs to happen in this setting. The photo might happen away from the street itself, for instance, in an indoor market, but this would still be considered street photography.

The best place to practice this will be a place that allows for plenty of moments of capture. With that in mind, locations like markets, train stations, or high streets would work.

The subject

Now you know the location for street photography, the next thing to think about is the subject. There are plenty of photos you can take in the locations suggested above that aren’t street photos.

A photo that shows only fruit is more of a food detail photo than a street photo. That said, does every street photo need to include a person’s face? The answer to that is, no, it doesn’t. But there does need to be a narrative element to it.

A photo that just shows people’s feet can certainly still contain a story. However, in most cases, you’ll want to see a person going about their daily life, and that means including their face.

A street portrait will see your subject fully engaging with the camera.

What is a street portrait?

A street portrait is one that shows the person’s face. It’s almost certainly posed, and it will be taken on the street. There is an authentic element to it. You’re not taking a model out with you, and you never know if the person you ask will give you permission to take their photo.

Once granted permission, you’ll be able to control many elements of the photo. You might be able to ask your subject to stand in front of an interesting background, turn their face towards a light source, or control their facial expression.

The equipment

This type of photo, once again, will be taken with a good quality mirrorless or DSLR camera. The lens should be a prime lens with a large aperture to give you the choice of blurring the background. However, you don’t have to use bokeh when you can control where your subject stands.

The type of lens you could use would be the same ones portrait photographers use with a model. So a 50mm, 85mm, or 135mm prime lens is ideal.

You might even consider using off-camera flash to have further control over your photo – this is, after all, a posed photo now.

The location

This will be a location where people congregate and go about their daily lives. It’s likely you’ll take a mixture of street portraits and street photos in the same location. With that in mind, refer to the advice given above for locations for street photography, since this is broadly the same for street portraits.

In this photo I asked the man to move to a better position for the light.

The subject

Now you’re looking for people who have personality in their appearance. Look for people who really tell the story of the place they are in. Do this through the clothes they’re wearing, the imperfections on their face, and the backgrounds you can find to place behind them.

One crucial aspect of this type of photo is gaining permission.

You’ll need to decide which types of personality are most likely to give you a positive response. You’ll also need to adapt the way you approach people, as different people may respond differently to varying ways you could break the ice. However you do this, always remain professional, and courteous. Perhaps bring a portfolio of your work and a business card with you to give yourself added weight.

Model releases

It’s worth mentioning model releases when it comes to photographing people. While it’s true that in many countries you’re allowed to photograph people in public places, you can then only use those photos for editorial and personal purposes. There may come a time when you wish to use your photos more commercially.

If that’s the case, then you’ll need a model release. Even if you don’t use the photos for commercial reasons, getting a model release is always good practice.

In the case of street portraits, this should be easier to do since you’ll already be in conversation with the person in question.


You can use the background for a street portrait, so it adds context to the rest of the photo.

Street portraits vs street photography: time to decide

Now you know the difference between street portraits and street photography.

Which form of photography do you prefer, both as a photographer and a viewer? How often do you ask people on the street for their permission before taking the photo?

Do you have a favorite set of equipment for either of these photography genres?

Here at Digital Photography School, we like hearing your opinions, so please share them in the comments.

Likewise, please share your photos that show street photography or street portraits in the comments section.

 

 

The post Street Portraits vs Street Photography: What is the Difference? appeared first on Digital Photography School. It was authored by Simon Bond.



RAW vs DNG: What’s the Difference and Why Does it Matter?

08 Feb

The post RAW vs DNG: What’s the Difference and Why Does it Matter? appeared first on Digital Photography School. It was authored by Simon Ringsmuth.


As a photographer, you have no doubt heard people talk about file formats, specifically RAW and JPG. Some people shoot only in RAW, others like JPG, and many photographers use both. Each format has benefits and drawbacks, but if you want the most control over your pictures, you probably shoot in RAW. However, there is a third option you might not even know about: Digital Negative, or DNG. With this other format in the mix, the issue isn’t so much RAW vs JPG, but RAW vs DNG.


DNGs can speed up your Lightroom workflow, but there are some tradeoffs to be aware of.

Understanding RAW

RAW files, unlike JPG files, store all of the light and color data used to capture an image. That means you can recover blown-out highlights, make better white balance corrections, and have a great deal of editing freedom you don’t get with JPG.
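
If you want to see just how much data is sitting in those files, here’s a minimal sketch using the third-party rawpy library (the file name is hypothetical); it renders a RAW file at 16 bits per channel, far beyond the 8 bits a JPG locks in:

```python
import rawpy  # third-party RAW decoder: pip install rawpy

# Hypothetical file name; substitute one of your own NEF/CR2/ARW files.
with rawpy.imread("DSC_0123.NEF") as raw:
    # Demosaic using the camera's white balance, without auto-brightening,
    # and keep 16 bits per channel so no tonal data is thrown away yet.
    rgb = raw.postprocess(use_camera_wb=True, no_auto_bright=True, output_bps=16)

print(rgb.dtype, rgb.shape)  # uint16 array: plenty of headroom for later edits
```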

Nikon, Canon, Sony, and others all let photographers shoot in RAW, but each of their RAW files is different. For example, the file extension for a Nikon RAW file is NEF, Canon uses CRW (or CR2/CR3 on newer cameras), and Sony uses ARW.

As a result, cameras from these manufacturers process and store RAW data a little differently, and third-party editing software has to interpret, and sometimes reverse-engineer, the method used to create the RAW files.

This is great for camera makers because they can tweak their hardware and software to work really well with their own RAW formats. However, it’s not always the best for photographers and editors.


RAW and DNG files give you plenty of editing room that JPG does not offer. Nikon D500, 85mm, f/1.8, 1/4000 second, ISO 100

Digital Negative

Adobe developed the Digital Negative (DNG) format in 2004 as an open alternative to the proprietary RAW formats that most camera manufacturers used.

What Adobe did was essentially level the playing field by giving everyone access to the same format for working with RAW files.

DNG is an openly documented, royalty-free format, which means anyone can use it without paying licensing fees. A few manufacturers like Pentax and Leica support DNG natively. However, for everyone else, there are easy ways to convert RAW files to DNG and get all the benefits of the latter without the hassles of the former.

DNG is particularly useful if you use Adobe products like Lightroom and Photoshop, but other editing software supports it too.

RAW vs DNG

The photo information in each file is identical, but there might be some reasons to choose one over the other.

When looking at the RAW vs DNG issue, there are some important benefits as well as drawbacks that you might want to consider before you switch.

However, please don’t look at this as a matter of which format is better.

Neither RAW nor DNG is objectively superior; both have advantages and disadvantages. The point is to give you enough information to make an informed choice about which format works for you.

DNG benefits

1. Faster workflow

The main reason many people use DNG files is related to editing efficiency when using Lightroom. Since DNG and Lightroom are both made by Adobe, it stands to reason that they would work well together.

If you have ever been frustrated by simple operations with RAW files in Lightroom, like switching between photos or zooming in to check focus, you will be shocked at how much faster they are with DNG files.

Switching from RAW to DNG has made a huge difference for me in speeding up my Lightroom workflow.


Nikon D750, 40mm, f/1.4, ISO 360, 1/180 second.

2. Smaller file sizes

File size is another area where DNG has an edge in the RAW vs DNG debate, although it might not be quite as important now, with storage so much cheaper than it was ten or twenty years ago.

DNG files are typically about 20% smaller than the equivalent RAW files, which means you can store more of them on your computer. If you are limited in storage space, DNG just might be a good option for you.


I converted a folder of RAW files to DNG. Both contain the exact same data for each photo, but the DNGs are much smaller. The entire folder of RAW files is 1.75GB, whereas the folder of DNG files is 1.5GB.
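
If you’re curious what the savings look like on your own library, a quick script can total up the two folders. This is just a sketch; the folder names and file extension are placeholders for your own:

```python
from pathlib import Path

def folder_size_gb(folder, pattern="*"):
    """Total size, in gigabytes, of every file in a folder matching a pattern."""
    return sum(f.stat().st_size for f in Path(folder).rglob(pattern) if f.is_file()) / 1e9

# Placeholder folder names and extension; adjust for your own shoot.
raw_gb = folder_size_gb("2020-02-shoot-raw", "*.NEF")
dng_gb = folder_size_gb("2020-02-shoot-dng", "*.dng")
print(f"RAW: {raw_gb:.2f} GB, DNG: {dng_gb:.2f} GB, saving {100 * (1 - dng_gb / raw_gb):.0f}%")
```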

3. Wide support

Because DNG doesn’t require a proprietary decoding algorithm, like RAW files from major manufacturers do, there is wider support from a variety of editing software. Various archival organizations, such as the Library of Congress, even use this format. That means it should work just fine for most photographers too. Personally, knowing this helped settle the RAW vs DNG debate for me, but you might prefer another solution.

4. No sidecar files

One additional benefit of DNG has to do with how editing metadata is stored. Lightroom is non-destructive, meaning any changes you make to an image can be altered at any point in the future. The original file remains untouched, and a record of your edits is stored separately.

When working with RAW files, these edits are written to a very small file called a sidecar. However, if you use DNG, all your edits are stored in the DNG file itself. Most people consider this an advantage since it requires fewer files to store and manage, but it can be a drawback which I explore later in this article.


Nikon D750, 40mm, f/1.4, ISO 1000, 1/3000 second

DNG Drawbacks

1. File conversion

Since most cameras don’t natively shoot in DNG format, you need to convert your RAW files if you want to use it.

Lightroom can do this automatically for you when importing, but it does come with a drawback that may be significant. Depending on the speed of your computer and the number of RAW files you import, the conversion to DNG can take anywhere from a few minutes to a few hours.

This could be problematic for some people in high-speed workflows such as sports and other action photography. Personally, I don’t mind. I just do the import/convert operation before dinner or at another time when I don’t need to start editing immediately.

I like to think of this initial conversion time as the culmination of all the seconds I used to spend waiting for RAW files to render, but all rolled into one lump sum. It’s a tradeoff I’m happy to make, but some people might find this a dealbreaker and stick with traditional RAW formats.


Converting lots of RAW files to DNG can take a great deal of time. And this is time that some photographers don’t have.

2. RAW metadata loss

Another drawback to the DNG format is that some of the RAW metadata gets lost during conversion. All the usual metadata you would expect is intact, such as exposure, camera information, focal length, and more. But some information, like GPS data, copyright information, and the exact focus point, doesn’t always transfer over.

Additionally, the built-in JPG preview gets discarded in favor of a smaller preview, which is another trick Adobe uses to bring down the size of DNG files.

Whether this information matters is up to you. Personally, I find none of the lost metadata a dealbreaker.
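
If you want to check exactly what your own camera’s files lose in the conversion, the exiftool utility (installed separately) can list every tag in a file. This sketch, with hypothetical file names, simply diffs the tag names between a RAW file and its DNG:

```python
import subprocess

def tag_names(path):
    """Run exiftool and return the set of metadata tag names it reports for a file."""
    result = subprocess.run(["exiftool", "-s", path], capture_output=True, text=True, check=True)
    return {line.split(":", 1)[0].strip() for line in result.stdout.splitlines() if ":" in line}

# Hypothetical file names; requires exiftool on your PATH.
raw_tags = tag_names("DSC_0123.NEF")
dng_tags = tag_names("DSC_0123.dng")

print("Tags in the RAW file that are missing from the DNG:")
for tag in sorted(raw_tags - dng_tags):
    print(" ", tag)
```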

3. Multiple editors

One other issue you might want to consider is whether your workflow involves having multiple editors work on the same RAW file.

If that’s the case, then the lack of a sidecar file could be problematic. Essentially, the sidecar acts as a storage locker for all your edits. The RAW file is untouched, but the sidecar stores a record of your edits. This means that if you have two people working on the same RAW file, you can share your edits just by copying the sidecar files.


Edits to RAW images get stored as sidecar files. You can send these sidecar files to other editors to share your RAW edits (as long as they have the original RAW files).

If you use DNG, you have to share the entire DNG files themselves, which is cumbersome compared to copying a tiny sidecar file.

For most people, this probably won’t matter, but for those who work in editing rooms or production houses that rely on sidecar files to store edits, DNG might not be the best option.
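
If you do rely on that sidecar-based workflow, gathering the XMP files for a shoot to send to another editor is trivial. A minimal sketch, assuming hypothetical folder names and that Lightroom has already written your edits out to .xmp sidecars:

```python
import shutil
from pathlib import Path

# Hypothetical folder names; assumes Lightroom has already saved your
# edits to .xmp sidecar files sitting next to the RAW images.
shoot = Path("2020-02-shoot-raw")
outbox = Path("sidecars-to-share")
outbox.mkdir(exist_ok=True)

for sidecar in sorted(shoot.glob("*.xmp")):
    shutil.copy2(sidecar, outbox / sidecar.name)  # copy only the tiny edit records
    print("Copied", sidecar.name)
```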

Finally, if you research this issue long enough, you will hear some trepidation about the longevity of DNG since the biggest camera makers, like Canon, Nikon, and Sony, do not officially support it. Personally, I’m not too worried about this since DNG is a widely-adopted industry standard, and if it’s good enough for the Library of Congress, then it’s good enough for me.

How to use DNG

If you want to give DNG a try, you can start by converting some of your existing RAW files. In your Lightroom Library module, select the RAW files you want to convert and then choose Photo->Convert Photo to DNG.


I recommend checking the values in the conversion dialog, though if you are ready to go all-in, you can also select the option to delete your originals. The “Embed Fast Load Data” option is what really speeds things up in Lightroom.

Un-check the option to use lossy compression if you want to retain all the data from the RAW file, instead of having Lightroom toss some out in favor of a smaller file size. Also, you don’t need to embed the RAW file, since doing so will more than double the size of your DNG.

Another option is to use the Copy as DNG setting when importing photos from your memory card. This will add a great deal of time to the import process, since Lightroom converts every one of your RAW files to DNG.

However, for me, the tradeoff is worth it since DNGs are so much faster to work with in Lightroom compared to traditional RAW files.


Conclusion

As with many aspects of photography, the answer here isn’t black and white, and there is not a one-size-fits-all solution. The question of RAW vs DNG isn’t about which format is better, but which format suits your needs.

There is no loss of image data when working with DNGs, but there are some tradeoffs compared to RAW files, and it’s important that you make an informed choice.

If you have experience working with DNG files and would like to share your thoughts, I would love to have them in the comments below!

The post RAW vs DNG: What’s the Difference and Why Does it Matter? appeared first on Digital Photography School. It was authored by Simon Ringsmuth.



iPhone 11 vs. iPhone XR: What’s the difference?

20 Sep

iPhone XR vs. iPhone 11

Let’s start with the obvious difference between the latest iPhone and the last-generation XR: the XR has a single, standard wide-angle camera. The new iPhone 11, on the other hand, has a dual camera system – one standard wide and one ultra-wide. The 11 gets an updated front-facing camera too: a 12MP sensor compared to the XR’s 7MP, and 4K/60p video versus HD video. And of course, it’s capable of the infamous ‘slofie.’

How much of a difference that extra camera makes depends on what you like to take pictures of. In our experience, having that ultra-wide lens as an option is very handy.

All images are courtesy Apple

Portrait Mode

The iPhone 11’s additional rear-facing camera also provides an advantage when shooting in Portrait Mode. It uses the slightly different perspectives of the ultra-wide and wide lenses to help create a more accurate depth map than the XR is capable of with its single camera, which only uses depth data generated from its dual pixel sensor combined with machine-learning assisted image segmentation. This should translate to better Portrait Mode images, with improved separation between subjects and their backgrounds.

Plus, the iPhone 11 is better suited for pet Portrait Mode photos like the one above, and who can resist those eyes?

Other camera features

There’s a lot more to a smartphone camera than just hardware these days, and that’s especially true of the camera in the iPhone 11. Apple has included a new Night Mode which is automatically enabled in low light levels, combining data from multiple image captures to produce a brighter, more detailed image – very similar to Google’s Night Sight. The 11’s Smart HDR mode has also been improved – it’s able to identify human and pet subjects, and render them appropriately while applying different processing to the rest of the image.

And later this fall, Apple will add a Deep Fusion mode via software update. While it also uses data from multiple frames, the end result is a larger 24MP file. That’s quite useful if you’d like to make larger prints from phone images. We’ll reserve judgement until we’re able to test this feature, of course, but it’s potentially a big step forward for Apple’s camera system and we’re glad to see it in this sub-$1000 device in addition to the flagship Pro models.

These added features are powered by a new A13 Bionic processor, one of the key hardware advantages that the 11 offers over the A12-powered XR.

Display

The XR and 11 are identical in size and both offer a 6.1″ ‘Liquid Retina HD’ display, which is Apple-speak for ‘LCD.’ Stepping up to the 11 Pro will of course get you a nicer OLED display with better contrast and brightness, but that’s not a differentiating factor between the XR and iPhone 11. Interestingly, you’ll need to step up to the 5.8″ 11 Pro if you want a smaller phone.

Weatherproofing

The XR is rated IP67 and the 11 is IP68, meaning both are fully protected against dust, but the iPhone 11 offers better protection against moisture. Apple states that the phone can withstand up to 30 minutes in depths of up to 2 meters; the XR can safely be submerged for the same amount of time in depths of up to 1 meter.

If you plan on taking your phone into the pool that extra waterproofing could make a difference depending on how deep you swim. But if you’re more worried about everyday scenarios like, say, a tumble to the bottom of the toilet, then it’s safe to say both phones would survive just fine.

Battery life

The iPhone 11 offers slightly better battery life. According to Apple, it will deliver one hour of extra performance compared to the XR – up to 17 hours of non-streaming video playback vs. 16 hours, for example. If you’re a power user who watches a lot of video on your phone that hour might make a difference, but if you’re just looking for a phone that will get you through a typical day then either will likely suffice.

Wrap-up

So who should buy the iPhone 11, and who should save the extra cash and get the XR? If photo-taking is any kind of priority, then we think the 11 is worth the extra money. Its use of more sophisticated photo processing will make a noticeable difference to photo quality, especially in low light, and an additional ultra-wide angle lens could prove a huge benefit when shooting landscapes or group photos, or in tight quarters.

The iPhone XR is still a perfectly capable camera though, with color rendering that we prefer over the Google Pixel 3. If you aren’t one to push the limits with its capabilities in low light, and you don’t need the ultra-wide lens of the 11, the XR will serve you quite well.

Articles: Digital Photography Review (dpreview.com)

 

Sony a6000, a6100, a6300, a6400, a6500, a6600: what’s the difference and which should I buy?

11 Sep

Introduction

The a6000-series cameras all look very similar, which can give a confused picture of a lineup that is designed to appeal to photographers from beginners up to enthusiast users

The a6000 has been one of the world’s best selling cameras, but it’s only the first rung on a ladder of cameras trying to cater to a range of photographers. If you go online you’ll probably end up being confronted with the a6000, a6100, a6300, a6400, a6500 and a6600. We’re going to try to make sense of the lineup: which ones are current, how they all compare and which ones are worth a look.

We’ve used all of the a6x00-series cameras we’re going to talk about here, listened to Sony’s explanation of its intent, and been around the block enough times to be able to cut through the, er, let’s call it ‘marketing speak.’

The lineup as it stands

For everything from snapshots and upwards, Sony’s latest JPEG color is an appreciable upgrade, but that’s only the start of it

At its simplest, the current lineup is the a6100 as the entry-level model, the a6400 as the slightly more enthusiast-friendly one and the a6600 as the range-topping, image stabilized version. All three cameras are based around the same sensor, so the image and video quality ends up being identical but the spec differences between the cameras may make a difference to how well they suit your needs.

Interestingly, Sony insists that the a6000 remains in the lineup. This may be the case, or it could simply be that there’s inventory still floating around the market that Sony doesn’t want to devalue by declaring the camera ‘dead.’ Whichever it turns out to be, we wouldn’t recommend buying one, no matter how inexpensive, for reasons we’ll come to.

Real-time Tracking AF

Even the entry-level a6100 has an AF system that confidently maintains focus on your chosen subject (particularly human subjects), whatever’s going on in front of it

The biggest change in the refreshed Sony lineup is ‘Real-time Tracking’, an autofocus system that’s been trained to recognize people and pets so that it tracks them doggedly (or, by logical extension, ‘cattedly’). It’s present in the a6100, a6400 and a6600.

Oddly, Real-time Tracking isn’t switched on by default (or ever referred to as such on the cameras). But, once the cameras have been switched across to AF-C mode and one of the ‘tracking’ AF area modes has been chosen, the system is really impressive. Point the camera at your subject, half press the shutter and it’ll dependably follow it, wherever it moves in the scene. This makes it one of the simplest and most effective AF systems we’ve ever used.

‘Real-time Tracking’ is present in the a6100, a6400 and a6600

We don’t say this lightly (and we’re not easily impressed) but, having experienced it, we think it helps the latest models stand out, even though some other aspects of their spec aren’t particularly exciting.

However, while the system is really impressive in the daytime, we found it’s less effective in low light. We’ll be testing this in more detail as part of our a6600 and a6100 reviews, so don’t take this as an unalloyed recommendation until we’ve completed that testing.

In with the new

The a6100, a6400 and a6600 all now offer touchscreens that tilt all the way up, to facilitate selfies and vlogging

This ‘Real Time Tracking’ AF system is good enough to make it awkward to go back and use the earlier models and Sony appears to recognize this. The a6300 and a6500 are, we understand, discontinued and replaced by the a6400 and a6600 respectively.

There’ll no doubt be some last-minute sell-offs of any remaining stock, so we’d suggest thinking how dependent your photography (or videography) is on autofocus, and whether you need any of the other improvements, before deciding whether to try to grab a bargain.

For instance, the new generation of cameras all gain touchscreens, which only the a6500 previously had. They also promise improved color rendering in their JPEGs. These improvements add up.

They add up most noticeably when you compare the a6100 to the generations-old a6000, which is why we’d suggest side-stepping the older model at this point.

Sony a6100

The a6100 looks a lot like the a6000 but gains improved AF and more attractive JPEGs, as well as features such as a mic socket

The a6100 is the most basic of the models. It is built from an engineering plastic and has a lower-resolution viewfinder (800 x 600 pixels) than the rest of the models.

The most recent JPEG engine gives it much more attractive color than the a6000

The most obvious change over the (we suspect) outgoing a6000 is the vastly improved AF system. This in itself makes it a much more capable camera. In addition it gains the ability to shoot 4K video (albeit with very noticeable rolling shutter) and, also pretty significantly, it has the most recent version of Sony’s JPEG engine, which gives it much more attractive JPEG color than the a6000 produced.

Capable but entry-level

The a6100’s screen is touch-sensitive and flips all the way up, neither of which was true of the a6000

Unlike the a6000, and in common with the other new a6x00 cameras, it’s got a touchscreen that flips up by 180 degrees for vlogging or selfie shooting, as well as a mic socket.

The a6100 doesn’t have the full capabilities of its more expensive siblings, though. It can’t shoot Log video, and it loses some subtle features, such as the ability to specify the shutter speed at which Auto ISO mode raises the ISO, and the option to set up different AF points and modes for portrait- and landscape-orientation shooting.

Also, while you can customize the camera’s ‘Fn’ menu, you can’t define separate versions for stills and video shooting: something the a6400 and a6600 let you do. It’s really useful if you switch back and forth between the two types of shooting.

These are small changes but they add up. For example, we regularly assign a button to access ‘Auto ISO Min Shutter Speed’ so that we can switch the camera between 1/focal length and something faster, depending on whether we’re more concerned about camera shake or subject movement.

Sony a6400

From the outside, it’s only really the switch around the AEL button that distinguishes the a6400 from its more basic sibling

The next model up from the a6100 is the a6400. You get a higher-res viewfinder, giving 1024 x 768 pixels from its 2.36m dots. You also get ‘moisture and dust resistant’ magnesium alloy construction (though, as is all too common, this resistance comes with no guarantee or substantive claims of effectiveness).

The a6400 offers a customizable AF/MF switch on the back of the body, which the a6100 lacks, but that’s about the extent of the physical handling differences. On the software side you gain a handful of menu options, including the ability to set the Auto ISO shutter threshold, define different AF areas and area modes by camera orientation and set up custom features such as ‘My Dial.’ These all make a difference if you like to define the fine detail of the camera’s handling.

Mid-level option

The a6400 lets you take more fine control of its operation than with the a6100

Video shooters gain the ability to shoot S-Log and HLG footage over the a6100, which opens up opportunities for color grading or output to high dynamic range televisions. However, this is only in 8-bit and is still subject to significant rolling shutter in the cropped 30p mode, which gets even worse in the full-width 24p mode. We were very impressed with this highly detailed footage when the a6300 was launched back in 2016, but the likes of Fujifilm’s X-T30 now offer better results.

The a6400 offers some benefits over the a6100 but you may find better options from other camera makers

So, while the a6400 offers some benefits over the a6100, you may not find the difference worth the cost. And, if you’re looking for a more advanced camera, and want to take more control, you may find better options from other camera makers.

Sony a6600

Not only does the a6600 offer image stabilization over its sister models, it also adds a headphone socket and much larger ‘Z-type’ battery. There’s no built-in flash, though

The range-topping model is the a6600. The main thing you gain over the lower models is in-body image stabilization, which is a major benefit for both stills and video shooting. A new feature of the a6600 is the inclusion of a much larger NP-FZ100 battery. This significantly boosts the camera’s endurance and will all but eliminate concerns about keeping the camera charged while you’re shooting.

The space demands of this larger battery have prompted the design of a more substantial, more comfortable hand grip than on other a6x00 models, and these ergonomic improvements are supplemented by the addition of an extra custom button, which leaves the a6600 with one more than its predecessor and two more than its current siblings.

Steady endurance

The a6600 offers two more custom buttons than the others in the lineup: one on the top plate and a second, marked ‘C3’ on the back

However, while these improvements make the a6600 stand out from its own sister models, it looks like a half-generational update of the a6500. Its autofocus is, without question, best-in-class and its battery life is the best of any of its peers. But its video isn’t especially competitive, either in terms of specifications (Fujifilm’s X-T3 can shoot much more gradable 10-bit footage), or in terms of appearance (the rolling shutter is likely to limit the way you shoot, if you don’t want it to be visible in your videos).

The a6600 also gains a headphone socket for monitoring audio, which is a first for the series

And, in sharing the same 2.36m dot EVF panel with the a6400, its viewfinder is noticeably lower resolution than the Fujifilm.

The option to pair the a6600 with a sensibly-sized 16-55mm F2.8 lens makes the camera more attractive for enthusiasts, but puts it worryingly close in price to an a7 III with the more flexible 24-105mm F4, which will offer similar output and access to a potentially larger performance envelope (or even Tamron’s 28-75mm F2.8).

Impressive AF but…

The latest a6x00 cameras may seem like minor upgrades in some respects, but the improved AF system makes them significantly easier to shoot with

Sony’s latest cameras have an AF system that out-performs anything we’ve experienced before and, importantly, makes it relatively easy to exploit this potential (though we’d prefer it to be engaged by default, especially on the a6100). They also produce more attractive JPEGs than the older models, particularly when compared to the elderly a6000.

But, as we’re sure the comments below will remind you, none of these cameras is cutting-edge in terms of hardware such as sensor or viewfinder tech. Sony’s touchscreen implementation is still oddly limited (why isn’t the Fn menu touch-sensitive?), they all shoot only lossy Raws that limit processing flexibility, and they still have one of the most difficult-to-navigate menu systems on the market. This means they’re beginning to slip behind the best of their peers in some respects, particularly in terms of video.

Compared to their peers

Between Canon’s EF-M cameras, Fujifilm’s X-series and various Micro Four Thirds options, the new a6x00 models have a lot to live up to

In the absence of a replacement for the image stabilized Fujifilm X-H1, the a6600 looks pretty strong at the top end, so long as the shortcomings and omissions listed above don’t affect your shooting too much. Its video isn’t as good as the X-T3’s, but the better AF in both stills and video mode, along with built-in stabilization, may be more appealing for some people.

The 16-55mm F2.8 lens makes APS-C E-mount look more photographer-friendly

The a6100’s simple autofocus and relatively low price immediately make it a strong contender against other entry-level APS-C and Micro Four Thirds rivals. It’s the a6400, which has to square up against the Fujifilm X-T30 and Canon’s new EOS M6 Mark II, that’s probably the least compelling of this trio.

That said, it should also be recognized that Sony has made some effort to address previous criticisms and that its latest models will produce nicer images than its older APS-C cameras and will do so more easily than ever before. And, perhaps more than this, the arrival of the 16-55mm F2.8 lens makes APS-C E-mount look more photographer-friendly than it’s previously appeared.

Articles: Digital Photography Review (dpreview.com)

 

Texture and Clarity Sliders in Lightroom Classic CC: What’s the difference?

10 Aug

The post Texture and Clarity Sliders in Lightroom Classic CC: What’s the difference? appeared first on Digital Photography School. It was authored by Adam Welch.

Throughout the last couple of years, Adobe has released an absolute tsunami of updates for their photo editing platforms. Adobe Lightroom Classic went through a plethora of upgrades and changes, with new (and sometimes major) add-ons seemingly incorporated with each new build. One of these sizable fresh additions to the Lightroom Classic toolkit came in May of 2019 with the release of v8.3. It’s called the Texture slider.

Texture and Clarity Sliders in Lightroom Classic CC: What's the difference?

Yep, that little guy right there.

You’ll find the Texture slider nestled comfortably in the Presence section of the Basic panel alongside the now veteran Clarity and Dehaze adjustments. These Presence sliders are extremely interesting in their effects and how they each accomplish their separate actions. Clarity, Dehaze, and now Texture all perform similar adjustments. They each tweak contrast within our photos to varying degrees with wholly different results.

Texture and Clarity are particularly interesting. Both perform quite similarly, while at the same time remaining their own animals…if that makes any sense? In this article, we’re going to have a closer look at the Clarity and Texture sliders.

I’ll explain how they work and show the different effects each of these powerful sliders can have on your photos.

Texture vs Clarity

All right, so what’s the difference between Clarity and Texture?

We’ve already surmised they are similar in that they function to bring out detail within a photo. However, you’ll notice some very obvious differences as soon as you view the effects of each slider side by side. Have a look at this. Here’s the original photo:

Texture and Clarity Sliders in Lightroom Classic CC: What's the difference?

And now a side-by-side comparison of some Clarity and Texture Slider adjustments.

Texture and Clarity Sliders in Lightroom Classic CC: What's the difference?

In the photo on the left, I’ve increased the Clarity slider to +100. I’ve applied +100 Texture to the photo on the right. The difference is apparent, but what exactly is happening here? First, let me remind you what our beloved Clarity slider actually does.

A refresher on Clarity

In short, Clarity interacts with our photos by increasing or decreasing the contrast between midtone luminance values. This essentially gives the illusion of our image becoming clearer. However, in reality, all that is happening is the application of more or less contrast to the light and dark areas which fall as midtones (between highlights and shadows).

You’ll also notice that the photo is perceptibly brighter and that the color saturation diminishes slightly when increasing Clarity. On the other end of the spectrum, decreasing Clarity adds a soft-focus effect. This can sometimes work extremely well, depending on your subject. For a little more of a breakdown on Clarity, check out my other article, How to Make Your Photos Shine Using Clarity, Sharpening, and Dehaze in Lightroom. You’ll also learn some great tips on using Clarity along with the Sharpening and Dehaze sliders.
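
Adobe hasn’t published its exact math, but you can get a feel for the idea of midtone-weighted local contrast with a rough sketch like this one (using OpenCV and NumPy, with a hypothetical file name). It is only a conceptual stand-in for Clarity, not Adobe’s algorithm:

```python
import cv2
import numpy as np

# Hypothetical file name; a conceptual stand-in for Clarity, not Adobe's algorithm.
img = cv2.imread("original.jpg").astype(np.float32) / 255.0

base = cv2.GaussianBlur(img, (0, 0), sigmaX=30)   # large-radius blur = low-frequency base
midtone_weight = 4.0 * img * (1.0 - img)          # peaks at mid-gray, fades to 0 at black and white
clarity = 0.5                                     # roughly a "+50" adjustment

out = np.clip(img + clarity * midtone_weight * (img - base), 0, 1)
cv2.imwrite("clarity_plus50.jpg", (out * 255).astype(np.uint8))
```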

What is Texture?

Now let’s talk about the new kid on the block, the Texture slider.

Ironically enough, the idea for the Texture slider was born not from the goal of increasing the textures (positive) within an image but rather decreasing them (negative), thereby essentially smoothing out a photo. The Texture slider was initially named the “Smoothing slider” in the early stages of its development.

The team at Adobe were aiming to migrate into Lightroom (at least to some extent) the skin retouching capabilities of Photoshop. Their goal was to offer a feature that packed a less drastic punch than the Clarity slider, while still being able to increase (or decrease) the apparent contrast in the photo to give the illusion of enhanced texture within the image.*


+69 texture added globally

The Texture slider lands somewhere between Clarity and Sharpening in Lightroom. A good way to think about Texture is that it is much less harsh than Clarity and offers more subtle results without affecting absolute brightness or color saturation.

Texture focuses its smoothing or clearing effects on areas of a photo which possess “mid-frequency” features. You can think of these as medium detail areas. For reference, a cloudless sky would be considered a low-frequency feature, while a cluster of trees would be considered a high-frequency feature.
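
To make “mid-frequency” concrete, here’s a rough sketch of the same idea (again OpenCV/NumPy, a hypothetical file name, and emphatically not Adobe’s actual implementation): isolate a band of medium detail with two blurs of different radii, then boost or suppress just that band:

```python
import cv2
import numpy as np

# Hypothetical file name; a conceptual stand-in for Texture, not Adobe's algorithm.
img = cv2.imread("portrait.jpg").astype(np.float32) / 255.0

fine = cv2.GaussianBlur(img, (0, 0), sigmaX=2)     # removes only the finest detail
coarse = cv2.GaussianBlur(img, (0, 0), sigmaX=12)  # removes medium detail as well
mid_band = fine - coarse                           # the "mid-frequency" features

texture = -0.6   # negative smooths (skin), positive adds bite (bark, fabric)
out = np.clip(img + texture * mid_band, 0, 1)
cv2.imwrite("texture_adjusted.jpg", (out * 255).astype(np.uint8))
```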

It is also worth mentioning that like many of the tools found in Lightroom Classic, you can apply the texture effect both globally (the entire photo) and locally to specific areas. Local negative texture adjustments work wonders for smoothing out skin wrinkles and blemishes in your portraits.


Before localized skin smoothing


After some retouching using a negative texture with Lightroom’s adjustment brush. Now I only look nominally haggard…

*Note: This is an extremely basic explanation of the Texture slider. If you’re feeling truly adventurous and want to learn more about the technical makeup of the Texture slider, I highly recommend this post over on the Adobe Blog.

Should I use Clarity or Texture Slider?

The looming question is, “When should I use Texture, and when should I use Clarity?” Unlike most commentary I offer on the absolutes of post-processing, which often borders on a Zen-like existentialist approach of “it all depends on the image,” there are some relatively straightforward things to look for when deciding which adjustment will work best for your particular photo.

Try the Clarity slider if:

  • Your image consists of high-frequency features
  • The effect is needed on a more global scale
  • Your image is a landscape
  • The image is black and white

Try the Texture slider if:

  • Your image has large areas of mid to low-frequency features
  • A more subtle enhancement is needed
  • The image is a portrait
  • Your image has extreme color contrasts/saturation

Of course, these are just guidelines, and I hope you experiment with both the Clarity and Texture sliders.

Also, nothing is stopping you from using a combination of the two – especially when you are applying them using local adjustment tools.

Closing thoughts on Texture and Clarity Sliders

You’ve heard me say time and time again that less is generally more when it comes to applying adjustments in post-processing. Just because a tool is available doesn’t always mean you have to use it to its full strength.

Perhaps this is never more true than when it comes to using the tools found in the Presence section of Lightroom, in this case the Texture and Clarity sliders. These nifty little adjustments can yield amazing results for your photos.

In fact, I use both local and global Clarity and Texture slider adjustments in virtually all of my photos to one extent or another.

With that said, it’s a good practice not to over-process your images. Some judicious use of negative Texture can shave years off your client’s face. However, go too far, and they might end up looking like a wax doll.

Adding positive Texture can bring out the subtle beauty of tree bark; however, use too much, and you’ll end up with…well, you get the idea.

What are your thoughts on the new Texture slider in Lightroom Classic CC? Is it a feature you will use regularly? Sound off in the comments below!

 


The post Texture and Clarity Sliders in Lightroom Classic CC: What’s the difference? appeared first on Digital Photography School. It was authored by Adam Welch.



Understanding Imaging Techniques: the Difference Between Retouching, Manipulating, and Optimizing Images

09 Mar

The post Understanding Imaging Techniques: the Difference Between Retouching, Manipulating, and Optimizing Images appeared first on Digital Photography School. It was authored by Herb Paynter.

Understanding Imaging Techniques

Three distinct post-production processes alter the appearance of digital photographs: Retouching, Manipulating, and Optimizing. These terms may sound similar enough to be synonymous at first glance, but they are entirely different operations. Once you understand the difference between these three processes, your image editing will take on new meaning, and your images will deliver powerful results.

Image retouching

Photo retouching is image alteration that intends to correct elements of the photograph that the photographer doesn’t want to appear in the final product. This includes removing clutter from the foreground or background and correcting the color of specific areas or items (clothing, skies, etc.). Retouching operations make full use of cloning and “healing” tools in an attempt to idealize real life. Unfortunately, most retouching becomes necessary because we don’t have (or take) the time to plan out our shots.

Our brain tends to dismiss glare from our eyes, but the camera sees it all. A slight change of elevation and a little forethought can save a lot of editing time.

Planning a shot in advance will alleviate many of these damage-control measures, but it involves a certain amount of pre-viewing: scouting out the area and cleaning up items before the camera captures them. This includes “policing” of the area… cleaning mirrors and windows of fingerprints, dusting off surfaces, and general housekeeping chores. It also includes putting things away (or in place), previewing and arranging the available lighting and supplementing it with flash units and reflectors where required, checking for reflections, etc.

Benjamin Franklin coined the phrase “an ounce of prevention is worth a pound of cure,” which pretty much sums up the cleanup chores. We also use the phrase “preventative maintenance”: fixing things before they break and need repair.

Admittedly, we don’t often have the luxury of time required to primp and polish a scene before we capture it, and retouching is our only option. However, sometimes all we need to do is evaluate the scene, move around and see the scene from another angle, or wait for the distraction to move out of the scene.

Sometimes a small reposition can lessen the amount of touchup and repair needed.

We can’t always avoid chaos, but we could limit the retouching chore with a little forethought. It takes just a fraction of a second to capture an image, but it can take minutes-to-hours to correct problems captured.

Image manipulation

Manipulation is a bit different, though it is occasionally compounded with retouching. When we manipulate a photo, we truly step out of reality and into fantasyland. When we manipulate an image, we override reality and get creative: moving or adding elements to a scene, or changing their size and dimensions. When we manipulate an image, we become a “creator” rather than simply an observer of a scene. This is quite appropriate when creating “art” from a captured image, and is ideal for illustrations, but perhaps shouldn’t be used as a regular post-capture routine.

Photo-illustration is an excellent use of serious manipulation, and can be quite effective for conveying abstract concepts and illustrations.

Earlier in my career, I worked as a photoengraver in a large trade shop in Nashville, Tennessee, during the early days of digital image manipulation. The shop handled the pre-press chores for many national accounts and international publications. On one occasion in 1979, we were producing a cover for one of these magazines. On the cover was a picture of Egypt’s President Anwar Sadat set against one of the great pyramids. Unfortunately, the pyramid was in a position that interfered with the titles on the magazine’s cover.

While this is not the exact picture used in the magazine, you see the challenge.

The Art Director for the magazine sent instructions for us to shift the pyramid in the picture so that the titles would not interfere with it. Moving that thing was an amazing feat back then. Normal airbrushing would have left obvious evidence of visual trickery, but digital manipulation opened a whole new potential for near-perfect deception. We were amazed at the potential but a bit nervous about the moral implications of using this power.

This venture was accomplished (over a decade before Photoshop) on an editing machine called a SciTex Response, a workstation supported by a very powerful minicomputer. Nobody outside that small building knew that, from Nashville, we had pushed an Egyptian pyramid across the desert floor, until it was revealed years later. Shortly thereafter, digitally altered images were prohibited from use as evidence in a court of law by the Supreme Court of the United States. Today, this level of manipulation lets you routinely alter reality and play god on a laptop, sitting on a park bench.

Manipulation is powerful stuff and should be used with serious restraint; not so much for legal reasons, but because of diminishing regard for nature and reality. Fantasyland is fun, but reality is where we live. We quite regularly mask skies and replace boring clouds with blue skies and dramatic clouds, and even sunsets – all without hesitation. We can move people around a scene and clone them with ease using popular photo editing software. Reality has become anything but reality. Photo contests prohibit photo manipulation in certain categories, though a skillful operator can cover their digital tracks and fool the general public. However, savvy judges can always tell the difference.

Typical manipulation, consisting of a clouded sky added to replace lost detail.

Personal recommendation: keep the tricks and photo optics to a minimum. Incorporating someone else’s pre-set formulas and interpretation into your photos usually compromises your personal artistic abilities. Don’t define your style by filtering your image through someone else’s interpretation. Be the artist, not the template. Take your images off the assembly line and deal with them individually.

Image optimization

Photo optimization is an entirely different kind of editing altogether and the one that I use in my professional career. I optimize photos for several City Magazines in South Florida. Preparing images for the printed page isn’t the same as preparing them for inkjet printing. Printing technology uses totally different inks, transfer systems, papers, and production speeds than inkjet printers. Each process requires a different distribution of tones and colors.

Since my early days in photoengraving, I’ve sought to squeeze every pixel for all the clarity and definition it can deliver. The first rule (of my personal discipline) is to perform only global tonal and color adjustments. Rarely should you have to rely on pixel editing to reveal the beauty and dynamic of a scene. Digital photography is all about light. Think of light as your paintbrush and the camera as nothing more than the canvas that your image is painted on. Learn to control light during the capture and your post-production chores will diminish significantly. Dodging, burning and other local editing should be required rarely, if at all.

Both internal contrast and color intensity (saturation) were adjusted to uncover lost detail.

Even the very best digital camera image sensors cannot discern what is “important” information within each image’s tonal range. The camera’s sensors capture an amazing range of light from the lightest and the darkest areas of an image, but all cameras lack the critical element of artistic judgment concerning the internal contrast of that light range.

If you capture your images in RAW format, all that amazing range packed into each 12-bit image (4,096 tonal levels per channel, or roughly 68,000,000,000 possible color values between the darkest pixel and the lightest) can be interpreted, articulated, and distributed to unveil the critical detail hiding between the shadows and the highlights. I’ve edited tens of thousands of images over my career, and very few cannot reveal additional detail with just a little investigation. There are five distinct tonal zones (highlights, quarter-tones, middle-tones, three-quarter-tones, and shadows) in every image, and each can be individually pushed, pulled, and contorted to reveal the detail contained therein. While a printed image is always distilled down to 256 tones per color, this editing process lets you, the artist, decide how the image is interpreted.

Shadow (dark) tones quite easily lose their detail and print too dark if not lightened selectively by internal contrast adjustment. The Shadows slider (Camera Raw and Lightroom) was lightened.
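
As a rough illustration of what a shadows-only lift is doing, here’s a minimal sketch (OpenCV/NumPy, hypothetical file name; real raw converters work on the linear RAW data and are far more sophisticated): each pixel is nudged upward in proportion to how dark it is, so deep shadows open up while midtones and highlights are left largely alone:

```python
import cv2
import numpy as np

# Hypothetical file name; a crude, global "Shadows" lift for illustration only.
img = cv2.imread("landscape.jpg").astype(np.float32) / 255.0

amount = 0.35                       # strength of the lift
shadow_weight = (1.0 - img) ** 4    # near 1 in deep shadows, near 0 in highlights
out = np.clip(img + amount * shadow_weight * img * (1.0 - img), 0, 1)

cv2.imwrite("landscape_shadows_lifted.jpg", (out * 255).astype(np.uint8))
```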

The real artistry of editing images is not accomplished by the imagination, but rather by investigation and discernment. No amount of image embellishment can come close to the beauty that is revealed by merely uncovering reality. The reason most photos don’t show the full dynamic of natural light is that the human eye can interpret detail in a scene while the camera can only record the overall dynamic range. Only when we (photographers/editors/image-optimizers) take the time to uncover the power and beauty woven into each image can we come close to producing what our eyes and our brain’s visual cortex experience all day, every day.

Personal Challenge

Strive to extract the existing detail in your images more than you paint over and repair their initial appearance. There is usually amazing detail hiding just below the surface. After you capture all the potential range with your camera (balancing your exposure between the navigational beacons of the camera’s histogram), you must then go on an expedition to explore everything the camera has captured. Your job is to discover the detail, distribute the detail, and display that detail to the rest of us.

Happy hunting.

The post Understanding Imaging Techniques: the Difference Between Retouching, Manipulating, and Optimizing Images appeared first on Digital Photography School. It was authored by Herb Paynter.

