
Posts Tagged ‘Between’

Hubble captures stunning gravitational interaction between a trio of galaxies

05 Aug

NASA has published a stunning image captured by the Hubble Space Telescope that shows a ‘three-way gravitational tug-of-war between interacting galaxies.’ The galaxies in the image belong to Arp 195, a system featured in the Atlas of Peculiar Galaxies, a list of the ‘weirder and more wonderful galaxies in the universe.’

Arp 195, otherwise known as UGC 4653, is a galaxy with material ejected from nuclei. It’s one of 15 Arp-numbered galaxies with this characteristic. All but one of these galaxies are interacting or have recently interacted with other celestial objects. The trademark tidal features of the galaxies, including Arp 195, appear to be the result of gravitational interactions.

Credit: ESA/Hubble & NASA, J. Dalcanton. Click to enlarge.

Hubble’s new sighting is in the Lynx constellation, about 747 million light-years from Earth. It’s fantastic to see new images from Hubble, as the venerable space telescope suffered significant downtime following a computer glitch earlier this summer. Hubble recently returned to service and celebrated by publishing a pair of stunning monochromatic images last month.

‘These images, from a program led by Julianne Dalcanton of the University of Washington in Seattle, demonstrate Hubble’s return to full science operations. [Left] ARP-MADORE2115-273 is a rarely observed example of a pair of interacting galaxies in the southern hemisphere. [Right] ARP-MADORE0002-503 is a large spiral galaxy with unusual, extended spiral arms. While most disk galaxies have an even number of spiral arms, this one has three.’ Text and image credit: Science: NASA, ESA, STScI, Julianne Dalcanton (UW) Image processing: Alyssa Pagan (STScI). Click to enlarge.

The hiatus aside, Hubble’s observational time is valuable. NASA writes, ‘Observing time with Hubble is extremely valuable, so astronomers don’t want to waste a second. The schedule for Hubble observations is calculated using a computer algorithm which allows the spacecraft to occasionally gather bonus snapshots of data between longer observations. This image of the clashing triplet of galaxies in Arp 195 is one such snapshot. Extra observations such as these do more than provide spectacular images – they also help to identify promising targets to follow up with using telescopes such as the upcoming NASA/ESA/CSA James Webb Space Telescope.’

Hubble offers a unique look into distant space, and it’s great to see that the telescope is working well following its concerning issue. If you’d like to see more of what Hubble is up to, you can check out an image gallery here.

Articles: Digital Photography Review (dpreview.com)

 

Posted in Uncategorized

 

What are the Differences Between Canon EF, EF-S, EF-M, RF lenses

20 Apr

When you’re beginning to learn about photography, there is a lot to understand. Apart from the basics of cameras and photography, if there is one thing that trips up most beginner photographers, it’s the different kinds of lenses available for a single brand of camera body. I’ve been there myself, so I thought I’d clear up some basics. Let’s talk about… Continue Reading
Photodoto

 

Posted in Photography

 

Roger Cicala: the difference between sample variation and a ‘bad copy’ (Part 2)

13 Nov
I compare a lot of lenses. They aren’t all exactly the same.

In today’s article we’ll look at variation versus bad copies a bit differently than last time. Plus, I’ll explain how people end up with three ‘bad copies’ of a lens in a row.

Variation versus bad copy frequency

Imatest type graphs are easier to visualize so I’m going to use those today. These graphs allow us to visualize center resolution (toward the top on the y-axis of the graph) and overall resolution (toward the right on the x-axis), with individual lenses plotted as dots. Don’t worry about the numbers on the X and Y axes, all you need to know is that the sharpest lenses are plotted up and to the right, and the softest are lower and to the left.

The graph below shows plots from multiple copies of two prime lenses. Let’s call them ‘Red’ and ‘Green’. The Green lens is a fairly expensive, pro-grade optic. The Red lens is a cheaper, consumer-level prime. You’ll see that there’s one copy of each in roughly the middle of this graph, away from the main cluster at upper-right. I’d return both of these samples to the manufacturer. So would you – they’re awful.

Multiple copies of two lenses, the ‘Red’ lens and the ‘Green’ lens, plotted by center and overall sharpness. Two bad copies of each are obvious at the lower left.

But could you tell the difference between the best and the worst of the other copies, in that big cluster at upper-right? That would depend on the resolution of your camera, how carefully you pixel-peeped, which lens we are talking about, and honestly, how much you cared.

The Green lens shows less variation, which is about what we expect (but don’t always get) from a fairly expensive, high-quality lens. A perfectionist with a high resolution camera, some testing skill and enough time could tell the top third from the bottom third, but it would take effort.

The Red lens has more variation, which is typical for a consumer-grade lens. A reasonably picky photographer could tell the difference between the top third and the bottom third. None of the bottom third are awful; they’re a little fuzzier, a little more tilted, not quite as good when viewed at 100% magnification, and you might see issues if you made a large print.

With more variation, you get more ‘not as good’ lenses, but they’re still not ‘bad copies’

If you look carefully, though, the top third of the Green and Red samples are about the same. With more variation, you get more ‘not as good’ lenses, but they’re still clearly not ‘bad copies’; they’re just ‘not quite as good’ copies.

So why would we argue about these two lenses on the Internet? Because based on a graph like this, a lot of testing sites might say “Red is as good as Green and costs a lot less.” The truth is simply that the Red lens has more variation. Sure – a good copy of the Red lens might match a good copy of the Green lens. But you’re not guaranteed to get one.

A word about that yellow line and worse variation

There’s obviously a point when large variation means the lower end of the ‘acceptable group’ is unacceptable. Where that line lies is of course arbitrary, so I put an arbitrary yellow line in the graph above, to illustrate the point. Where the yellow line is for you depends on your expectations and your requirements.

The Subjective Quality Factor can theoretically decide when the low end of variation is not OK, and it can be used as a guide to where to place the yellow line. The key words, though, are ‘subjective quality’. Things like print size, camera resolution, even subject matter are variables when it comes to deciding when SQF is not OK. For example, the SQF needed for online display or 4K video is a lot lower than for a 24″ print of a detailed landscape taken with a 40 megapixel camera.

Every one of us has our own SQF; call it your PQF (Personal Quality Factor) and your yellow line might be higher or lower than the one in the graph above. Manufacturers have a Manufacturer’s Quality Factor (MQF) for each of their lenses, which is the famous ‘in spec’.

When your PQF is higher than the MQF, those lower lenses are not OK for you. They might be fine for someone else. Wherever a person’s yellow line is, that’s their demarcation line. These days, if they get a lens below the line, they go on an Internet rant. So now, as promised, I have explained the cause of 8.2% of Ranting On Online Forums (ROOFing). It’s the difference between MQF and PQF.

Put another way, it’s the difference between expectations and reality.

If you test a set of $5,000 lenses carefully enough, you may find some differences in image quality. The technical term for this phenomenon is ‘reality’.

It should be pretty obvious that people could screen three or four copies of the Red lens and end up with a copy that’s as good as any Green lens. I don’t find it worth my time, but I’m not judging; testing lenses is what I do.

Unfortunately, though, people don’t post online “I was willing to spend a lot of time to save some money, so I spent 20 hours comparing three copies and got a really good Red lens.” They say “I went through three bad copies before I got a good one.”

The frequency of bad copies and variation

Just so we get it out of the way, the actual, genuine ‘bad copy’ rate is way lower than I showed in the graph above. For high-quality lenses it’s about 1% out-of-the-box. This explains why I roll my eyes every time I hear “I’ve owned 14 Wonderbar lenses and they’re all perfect.” Statistically, you’d need to buy well over 50 lenses before you’d expect a single bad one. The worst lenses we’ve ever seen have a bad copy rate of maybe 3%, so even then, the chances are good you wouldn’t get a bad one out of 14.
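Those odds are easy to check with a one-line binomial calculation. This is a sketch using the 1% and 3% bad-copy rates quoted above; the purchase counts are illustrative:

```python
def p_at_least_one_bad(bad_rate, n_lenses):
    """Probability of seeing at least one bad copy in n independent purchases."""
    return 1 - (1 - bad_rate) ** n_lenses

# At a 1% bad-copy rate, 14 lenses will most likely all be fine:
print(round(p_at_least_one_bad(0.01, 14), 3))  # ~0.131
# Even at a 3% rate, you'd probably still get 14 good ones in a row:
print(round(p_at_least_one_bad(0.03, 14), 3))  # ~0.347
# It takes roughly 70 purchases for even odds of hitting one bad copy at 1%:
print(round(p_at_least_one_bad(0.01, 70), 3))  # ~0.505
```

So “I’ve owned 14 and they’re all perfect” is exactly what the statistics predict, whatever the lens.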

Most of these ‘those lenses suck / I’ve never had a bad copy’ arguments are just a different way of saying ‘I have different standards than you’

What about the forum warrior ROOFing about getting several bad copies in a row? He’s probably screening his way through sample variation looking for a better than average copy. If he exchanges it, there’s a good chance he won’t get a better one, but after two or three, he’ll get a good one. So he’s really saying “I had to try three copies to find one that was better than average.” Or close to average. Something like that.

Semantics are important. Most of these “those lenses suck / I’ve never had a bad copy” arguments are just a different way of saying “I have different standards than you”. I get asked all the time what happens to the two lenses John Doe returned when he kept the third? Well, they got re-sold, and the new owners are probably happy with them.

Why are there actual bad copies?

In short – inadequate testing. Most photographers greatly overestimate the amount and quality of testing that’s actually done at the factory, particularly at the end of the assembly line.

Many companies use a test target of thick bars to set AF and give a cursory pass-fail evaluation. A target of thick bars is low-resolution; equivalent to the 10 lp/mm on an MTF bench. Some use a 20 lp/mm target to test, and 20 is higher than 10, so that’s good. The trouble is that most modern sensors with a good lens can resolve 50 lp/mm easily. This is what I mean when I say (as I do often) that you and your camera are testing to a higher standard than most manufacturers.
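As a rough sanity check on those lp/mm figures: a sensor’s Nyquist limit in line pairs per millimeter is 1/(2 × pixel pitch). The pixel pitch below is a typical value I’ve assumed for a modern full-frame body, not a number from the article:

```python
def nyquist_lp_mm(pixel_pitch_um):
    """Nyquist limit in line pairs per mm for a given pixel pitch in microns."""
    return 1000.0 / (2.0 * pixel_pitch_um)

# A 24MP full-frame sensor has roughly 5.9 micron pixels:
print(round(nyquist_lp_mm(5.9)))  # ~85 lp/mm
# A 10 or 20 lp/mm factory target sits far below what the sensor can resolve.
```

Which is the point: your camera out-tests the factory's target almost by default.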

Why is there high variation?

Usually, it’s the manufacturer’s choice, and usually for cost reasons. Occasionally it’s because the manufacturer is living on the cutting edge of technology. I know of a couple of cases where a lens had high variation because the manufacturer wanted it to be spectacularly good. They designed in tolerances that turned out to be too tight to produce in practice, though they had convinced themselves they could. Lenses like this tend to deliver amazing test results, but then attract a whole lot of complaints from some owners and a whole lot of love from others.

What’s that? You want some examples?

This is not the bookcase mentioned below; that one is under nondisclosure. This is my bookcase. My bookcase has better optical books.

Service center testing

Years ago, we had in our possession a $4,000 lens that was simply optically bad. It went to the service center twice with no improvement. Finally, the manufacturer insisted I send ‘my’ camera overseas with it for adjustment. The lens and camera came back six weeks later. The lens was no better, but the camera contained a memory card with 27 pictures on it. Those pictures were of a bookshelf full of books, and each image was slightly different as the technician took test shots while they optically adjusted the lens.

This, my friends, is why we decided to start adjusting lenses ourselves. And yes – after offering to share those bookshelf images – I was eventually sent a replacement lens.

Non-adjustable lenses

Many lenses have no optical adjustments. They’re assembled, and then what you get is what you get. If in-factory QC detects a really bad one, it might be disassembled and the parts reused, in the hope that random reassortment gives a better result next time. Or it may just get thrown away; the cost of disassembling and reassembling may be greater than the saved parts.

A common type of non-adjustable lens is the stacked lens: ‘element – spacer – element – spacer’, etc., with front and rear retaining rings holding everything together. The usual method of correcting one is to loosen the retaining rings, bang the lens on a table a few times, and tighten it back up. That probably sounds ridiculously crude, but it sometimes works.

Many fully manual lenses (not those made by Zeiss or Leica) are non-adjustable, as are some less expensive manufacturer and third-party lenses.

Minimally-adjustable lenses

A number of prime lenses have only one or two adjustable elements. This is not necessarily a bad thing; adjusting one or two elements is a lot easier than adjusting six, so the technician is more likely to get things right.

One of my favorite lenses, both to shoot with and to adjust, is the venerable Zeiss 21mm F2.8 Distagon / Milvus. The front element of this lens is adjustable for centering and we’ve done hundreds of these adjustments over the years. The fun part is doing this adjustment lets you choose what type of lens you want. You can have razor sharp in the center with soft corners or you can let the center be a little softer and the corners much sharper. It’s a great example of adjustment being a trade-off, even for relatively simple adjustments.

MTF graphs of a Zeiss 21mm F2.8 Distagon, adjusted for best center sharpness (above), and optimal edge sharpness (below).

Consumer-grade zoom lenses (manufacturer or third-party) and prime lenses with apertures smaller than F1.4 tend to be minimally or non-adjustable. A fair number of better zooms and primes are minimally adjustable, too.

Lenses with many adjustable elements

More adjustments means less variation, at least in theory. It also, however, means when something is wrong it’s far more complex and time consuming to get the adjustments right. Time, as they say, is money and complex lenses can be rather hard to adjust.

I think the most we’ve seen is nine adjustable elements. These are usually top-of-the-line zooms, but we’ve seen six adjustable elements in some top-end primes. That’s something we never saw even five or six years ago.

So, what’s the key takeaway?

Let’s start with my definitions. A bad copy of a lens has one or more elements so out of adjustment that its images are obviously bad at a glance. Such a lens (assuming it is optically adjustable) can usually be made as good as the rest.

Variance, on the other hand, means some lenses aren’t as good as others, usually as a result of a number of small imperfections. A simple optical adjustment isn’t likely to make them as good as average. All lenses have a little variance. Some have more. A few have a lot. How much is too much depends on the photographer who’s shooting with them.

The Canon 70-200mm F2.8 RF has (give or take one, I’m not certain I recall all of them) 8 or 9 different adjustable elements.

Reducing variation costs money. The reality is the manufacturers are doing what works for them (or at least they think they are). There is a place for $500 lenses with higher variation and good image quality, just like there’s a market for $2,000 lenses with better image quality and less variation.

Roger


Roger Cicala is the founder of Lensrentals.com. He started by writing about the history of photography a decade ago, but now mostly writes about the testing, construction and repair of lenses and cameras. He follows Josh Billings’ philosophy: “It’s better to know nothing than to know what ain’t so.”

Articles: Digital Photography Review (dpreview.com)

 

Posted in Uncategorized

 

Roger Cicala: the difference between sample variation and a ‘bad copy’ (Part 1)

03 Nov
We fix a lot of lenses, but not all lenses can be fixed.

With the next two posts, I hope to end the seventh most common forum war; the ‘lens variation is a big problem!’ vs ‘I don’t believe it exists!’ argument. Like a lot of forum wars, it comes down to semantics: Variation and bad copies aren’t the same thing (actually they’re not really related at all), but people tend to use the terms interchangeably.

Even $2,000 lenses must have variation

Note that I said ‘must’. I didn’t say ‘might’ or ‘could’. I certainly didn’t say ‘shouldn’t at this price’. If you expect every copy of a lens to be perfect, then a dose of reality is in order – unreasonable expectations are a down payment on disappointment.

The key point is what amount of variation is acceptable.

Of course, I define ‘unacceptable’ by my standards. My standards are probably similar to 90% of your standards (and they’re higher than most manufacturers’ standards). A few of you will consider my standards either too low or too high. That’s reasonable. You and I might be looking at the same lens, but we’re doing different things with it, and probably doing them on different cameras. Later on, we’ll talk about the difference between ‘acceptable variation’ and a genuinely bad copy that I would consider unacceptable.

Why lenses must vary

Any manufactured part, from a washer on your kitchen faucet to a component in the Hubble telescope, has some variation. Generally (up to a point limited by the state of the technology) you can lower the variation of a part if you are willing to pay more. Why? Because entirely new machines or manufacturing processes may be required, and all of that costs money.

But just ordering more units means you can save money, right? Well, yes – in very general terms, ordering larger quantities lowers per-unit costs, but not in a linear fashion. Doubling your order of something usually reduces the per-unit cost by some percentage, but certainly not by half. There is never a point where, if you order a large enough quantity of an item, you get it for free.

This is a 15 cm diameter, 1/10 wavelength optical flat, eyeglasses for scale.

As an example, we use optical flats to calibrate our test benches. The flats come in different accuracies: 1/4, 1/10, or 1/20 wavelength of flatness. All of those are very flat indeed, and those accuracies cost $800, $2,200, and $3,800 respectively. There is no quantity I could buy that would let me get the 1/20 wavelength plates for the 1/4 wavelength price. And I can’t get 1/40 wavelength of flatness at any price. The technology simply isn’t available.

What varies in a lens? Everything. The screws, helicoids, plates, and spacers vary. Every glass melt is very slightly different, giving elements a very slightly different refractive index. Lens grinding introduces variation, as does the coating process. Even the shims we use to adjust for variance vary themselves. And shims don’t come in infinite thicknesses, so if your thinnest shim is 0.01mm, then +/- 0.01mm is your maximum attainable accuracy.

What can manufacturers do about this?

The first thing is tolerancing the design. Optical programs let the designers punch in various tolerances for parts, showing how a given variation will affect the overall performance of the lens. For the sake of argument, let’s say that one particular glass element is very critical and even a slight variation makes a big difference in how the lens resolves, while variation among other elements matters less. The manufacturer can pay to have that critical element made more accurately. They can also change the design to make the part less critical, but often only by sacrificing performance.

In addition, manufacturers can (notice I said ‘can’, not ‘always do’) place compensating elements in the lens, allowing for slight adjustments in tilt, spacing, and centering. Emphasis is on ‘compensating’, though: These adjustments compensate for the inevitable errors that accumulate in any manufactured device. They are not called ‘adjusted for absolute perfection’ elements.

The two most common types of lens adjustments: shims and eccentric collars.

Not all lenses are equally adjustable. Some modern lenses may have five to eight different adjustable elements. Many have two or three. A fair number have none at all; what you get is what you get. Here’s a thought experiment for you: imagine you’re an optical engineer and you’ve been tasked with making an inexpensive lens. Knowing that adjustable elements are an expensive thing to put in a lens, what would you do?

I want to emphasize that optical adjustments in a modern lens are not there so that the lens can be tweaked to perfection; the adjustments are compensatory. There are trade-offs. Imagine you’re a technician working on a lens. You can correct the tilt on this element, but maybe that messes up the spacing here. Correcting the spacing issue changes centering there. Correcting the centering messes up tilt again. Eventually, in this hypothetical case, after a lot of back-and-forth you would arrive at a combination of trade-offs; you made the tilt a lot better, but not perfect. That’s the best compromise you can get.

Because many people think of distributions as the classic ‘bell curve’ or ‘normal distribution’, let’s get that particular wrongness out of the way. If you evaluate a group of lenses for resolution and graph the results, it does NOT come out to be a normal distribution with a nice bell curve.

Frequency graph of two lenses. For those of you tired of reading already, this graph sums up the rest of the article. The black lens is going to have more variation than the green one. Neither the black nor the green graph is at zero over on the softest end; bad copies happen to both, but not frequently.

As common sense tells you it should be, the distribution of lenses is very skewed. No lens is manufactured better than the perfection of the theoretical design. Most come out fairly close to this theoretical perfection, and some a little less close. Some lenses are fairly tightly grouped around the sharpest area, like the green curve in the graph above; others are more spread out, like the black one. The big takeaway is that you can’t say things like ‘95% of copies will be within 2 standard deviations of the mean.’
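That skew falls straight out of the arithmetic: no copy can beat the theoretical design, and softness accumulates from many small errors. Here’s a toy Monte Carlo sketch of the idea, with entirely made-up numbers, just to show the shape of the resulting distribution:

```python
import math
import random

random.seed(42)

def copy_sharpness(n_elements=10, error_sigma=2.0, design_limit=100.0):
    """Sharpness of one simulated copy: the theoretical design limit
    minus the root-sum-square of random per-element manufacturing errors."""
    errors = [random.gauss(0.0, error_sigma) for _ in range(n_elements)]
    return design_limit - math.sqrt(sum(e * e for e in errors))

samples = [copy_sharpness() for _ in range(10_000)]
# No copy exceeds the design limit, most cluster just below it,
# and a small tail of softer copies stretches off to the left.
print(max(samples) < 100.0)  # True
```

The distribution is bounded on the sharp side and open-ended on the soft side, which is exactly why the bell-curve intuition fails.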

The Math of Variation

Don’t freak out, it’s not hard math and there’s no test. Plus, it has real world implications; it will explain why there’s a difference between ‘expected variation – up to spec’ and ‘unacceptable copy – out of spec’.

There are several ways to look at the math but the Root Sum Square method is the one I find easiest to understand: you square all the errors of whatever type you’re considering, add all the squares together, then take the square root of the total.

The total gives you an idea of how far off from the perfect, theoretical design a given lens is. Let’s use a simple example: a hypothetical lens with ten elements, where we’ll just look at the spacing of each element in microns. (If you want to skip the math, the summary is a couple of paragraphs down.)

If we say each element has a 2 micron variation, then the formula is √(10 × 2²) = 6.32. If I make a sloppier lens, say each element varies by 3 microns, then √(10 × 3²) = 9.49. Nothing dramatic here; looser control of variation makes a higher root sum square.

The important thing happens if everything isn’t smooth and even. Instead of all 10 elements being off by 2 microns, let’s keep nine of them at 2 microns and make one element off by 10 microns. I’ll do the math in two steps:

√((9 × 2²) + (1 × 10²)) = √(36 + 100) = √136 = 11.66

The summary is this: If you vary one element a lot you get a huge increase in root sum square. If you spread that same total variation over several elements, you get only a moderate increase in root sum square. That is basically the difference between a bad copy and higher variation.
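The Root Sum Square arithmetic above is a one-liner in code. This sketch just replays the hypothetical ten-element example from the last few paragraphs:

```python
import math

def root_sum_square(errors):
    """Root Sum Square of a list of per-element errors (in microns)."""
    return math.sqrt(sum(e * e for e in errors))

print(round(root_sum_square([2] * 10), 2))        # 6.32  (all ten elements off by 2)
print(round(root_sum_square([3] * 10), 2))        # 9.49  (all ten off by 3)
print(round(root_sum_square([2] * 9 + [10]), 2))  # 11.66 (one element way off)
```

Same total error budget, very different results depending on whether it’s spread thin or concentrated in one element.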

If you have just one really bad element the performance of the lens goes all to hell

The math reflects what we see in the real world. If you let all the elements in a lens vary a little bit, some copies are a little softer than others. Pixel peepers might tell, but most people won’t care. But if you have one really bad element (it can be more than one, but one is enough) the performance of the lens goes all to hell and you’re looking at a bad copy that nobody wants.

More real world: if one element is way out of whack, we can usually find it and fix it. If ten elements are a little bit out, not so much. In fact, trying to make it better usually makes it worse. (I know this from a lot of painful experience.)

What does this look like in the lab?

If you want to look at what I do when I set standards, here are the MTF graphs of multiple copies of two different 35mm F1.4 lenses. The dotted lines show the mean of all the samples; these are the numbers I give you when I publish the MTF of a lens. The colored area shows the range of acceptability. If the actual MTF of a lens falls within that range, it meets my standards.

Mean (lines) and range (area) for two 35mm lenses. The mean is pretty similar, but the lens on the right has more variation.

For those of you who noted the number of samples, 15 samples means 60 test runs, since each lens is tested at four rotations. The calculations for variation range include things about how much a lens varies itself (how different is the right upper quadrant from the left lower, etc.) as well as how much lenses vary between themselves and some other stuff that’s beyond the scope of this article.

So, in my lab, once we get these numbers we test all lenses over and over. If it falls in the expected range, it meets our standards. The range is variation; it’s what is basically inevitable for multiple copies of that lens. You can tell me I should only keep the ones that are above average if you want. Think about that for a bit, before you say it in the comments, though.

The math suggests a bad copy, one with something really out of whack, doesn’t fall in the range. That’s correct and usually it’s not even close. When a lens doesn’t make it, it REALLY doesn’t make it.

A copy that obviously doesn’t meet standards. The vast majority of the time, one of these can be adjusted to return to expected range.

We took that copy above, optically adjusted it, and afterwards it was right back in the expected range. So an out-of-spec copy can be fixed and brought back into range; we do that several times every day.

But we can’t optically adjust a lens that’s in the lower 1/3 of the range and put it into the upper 1/3, at least not often. Trust me, we’ve tried. That makes sense; if one thing is way out of line we can put it back. If a dozen things are a tiny bit out of line, well, not so much.

I know what you’re thinking

You’re thinking, ‘Roger, you’re obviously geeking out on this stuff, but does it make one damned bit of difference to me, a real photographer who gives zero shirts about your lab stuff? I want to see something real world.’ OK, fine, here you go.

A Nikon 70-200mm F2.8 VR II lens is a really good lens with very low (for a zoom) variation. But if you drop it just right, the 9th element can actually pop out of its molded plastic holder a tiny bit without causing any obvious external damage. It doesn’t happen very often, but when it does, it always pops out about 0.5mm, which, in optical terms, is a huge amount. This is the ‘one bad element’ scenario outlined in our mathematical experiment earlier.

Below are images of the element popped out (left) and popped back in (right) and below each image is the picture taken by the lens in that condition. Any questions?

On top you see the 9th element ‘popped out’ (left) and replaced (right). Below each is the picture of a test chart made with the lens in that condition.

So, what did we learn today?

We learned that variation among lenses is not the same thing as ‘good’ and ‘bad’ copies. Some of you who’ve read my stuff for a long time might remember I used to put out a Variation Number on those graphs. I stopped doing that years ago because people kept assuming that the higher the variation, the higher their chances of getting a bad copy, which isn’t true. You see, bad copies are – well, bad. Variation just causes slight differences.

I’m going to do a part II that will go into detail with examples about how much you should expect lenses to vary, what the difference is between variation and a genuinely bad copy, and why some people act like jerks on forums. Well, maybe just the first two.

As a bonus, I will tell you the horrifying story of how manufacturers optically adjust a lens that’s really not optically adjustable. And for a double bonus I will show how variation means that there are actually two versions of the classic Zeiss 21mm F2.8 Distagon.

In other words, if you struggled through this article, hopefully the next one will be enough fun that you think it’s worth it. Delayed gratification and all that…

Roger


Roger Cicala is the founder of Lensrentals.com. He started by writing about the history of photography a decade ago, but now mostly writes about the testing, construction and repair of lenses and cameras. He follows Josh Billings’ philosophy: “It’s better to know nothing than to know what ain’t so.”

Articles: Digital Photography Review (dpreview.com)

 

Posted in Uncategorized

 

MegaX ultra-slow-motion 4D camera records pulse of light bouncing between mirrors

01 Aug

Using an ultra-slow-motion camera that records at 24,000 frames per second, researchers with the Swiss Federal Institute of Technology in Lausanne (EPFL) were able to capture a pulse of light as it bounced between a series of aligned mirrors.

According to Edoardo Charbon, head of the Advanced Quantum Architecture Laboratory at EPFL’s School of Engineering, the MegaX camera behind this new video is the by-product of around 15 years of research into single-photon avalanche diodes (SPADs).

Ordinarily, light is not visible in flight, but photons do scatter off particles in the air and, with the right hardware and software, that scattered light can be captured, as in the video shown above. The light was recorded using MegaX, a camera that can produce 3D representations and ‘perform in-depth segmentation of those representations,’ EPFL explains.

The camera likewise has a very fast shutter speed (as short as 3.8 nanoseconds) plus a very large dynamic range. As well, at 9 µm, the MegaX’s pixel size is around 10 times larger than a standard digital camera’s, though the team is working to reduce it to 2.2 µm.

When talking about MegaX earlier this year, Charbon explained that the camera works by converting photons into electrical signals. Of note, this camera is able to measure how long it takes a photon to strike its sensor, giving it distance information; this feature is commonly known as time-of-flight.
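The time-of-flight idea reduces to distance = (speed of light × round-trip time) / 2. A quick sketch, reusing the 3.8 ns shutter figure quoted above; the round-trip time is an illustrative value, not a measurement from the study:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance_m(round_trip_seconds):
    """Distance to a target, given the round-trip travel time of a photon."""
    return C * round_trip_seconds / 2.0

# Light travels about 1.14 m during the MegaX's 3.8 ns shutter window:
print(round(C * 3.8e-9, 2))  # 1.14
# A photon returning after ~6.7 ns came from roughly a meter away:
print(round(tof_distance_m(6.7e-9), 2))  # 1.0
```

Nanosecond-scale timing is what turns a 2D sensor into a depth-measuring one.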

By combining the typical three dimensions with time-of-flight, MegaX is something of a 4D camera, giving it capabilities beyond that of the average camera.

A new study published on July 18 builds upon this past research, detailing the first time scientists have captured 4D light-in-flight imagery using the time-gated megapixel SPAD camera technology. This is in contrast to 3D light-in-flight capture, which has been achieved using different varieties of camera hardware.

The study explains that to capture the bouncing pulse of light, a machine learning technique took the place of functions that might otherwise have been used, such as dark noise subtraction and interpolation. The process combined time-of-flight and trajectory data with machine learning algorithms to plot the 3D path of the light.

Charbon recently explained to Digital Trends that this new study details the use of machine learning and the 4D data to reconstruct the position of the light pulses. Though this may be something of a novelty to the average person, the technology could eventually be utilized in everything from robotic vision to physics and virtual reality systems.

Of note, the researcher explained that all of the processes involved in capturing the bouncing light pulse were done on the MegaX camera. An abstract of the study is available here; the public can also access the full PDF of the study here.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on MegaX ultra-slow-motion 4D camera records pulse of light bouncing between mirrors

Posted in Uncategorized

 

Understanding Imaging Techniques: the Difference Between Retouching, Manipulating, and Optimizing Images

09 Mar

The post Understanding Imaging Techniques: the Difference Between Retouching, Manipulating, and Optimizing Images appeared first on Digital Photography School. It was authored by Herb Paynter.

Understanding Imaging Techniques

Three distinct post-production processes alter the appearance of digital photographs: Retouching, Manipulating, and Optimizing. These terms may sound similar enough to be synonymous at first glance, but they are entirely different operations. Once you understand the difference between these three processes, your image editing will take on new meaning, and your images will deliver powerful results.

Image retouching

Photo retouching is image alteration intended to remove or correct elements that the photographer doesn’t want to appear in the final product. This includes removing clutter from the foreground or background and correcting the color of specific areas or items (clothing, skies, etc.). Retouching operations make full use of cloning and “healing” tools in an attempt to idealize real life. Unfortunately, most retouching becomes necessary because we don’t have (or take) the time to plan out our shots.

Our brain tends to dismiss glare from our eyes, but the camera sees it all. A slight change of elevation and a little forethought can save a lot of editing time.

Planning a shot in advance will eliminate much of this damage control, but it involves a certain amount of previewing: scouting out the area and cleaning up items before the camera captures them. This includes “policing” the area… cleaning mirrors and windows of fingerprints, dusting off surfaces, and general housekeeping chores. It also includes putting things away (or in place), previewing and arranging the available lighting and supplementing it with flash units and reflectors where required, checking for reflections, etc.

Benjamin Franklin coined the phrase “an ounce of prevention is worth a pound of cure,” which pretty much sums up these cleanup chores. We also use the phrase “preventative maintenance”: fixing things before they break and need repair.

Admittedly, we don’t often have the luxury of time required to primp and polish a scene before we capture it, and retouching is our only option. However, sometimes all we need to do is evaluate the scene, move around and see the scene from another angle, or wait for the distraction to move out of the scene.

Sometimes a small reposition can lessen the amount of touchup and repair needed.

We can’t always avoid chaos, but we can limit the retouching chore with a little forethought. It takes just a fraction of a second to capture an image, but it can take minutes to hours to correct the problems captured with it.

Image manipulation

Manipulation is a bit different, though it occasionally compounds the retouching chore. When we manipulate a photo, we truly step out of reality and into fantasyland: we override reality and get creative, moving or adding elements to a scene, or changing sizes and dimensions. When we manipulate an image, we become a “creator” rather than simply an observer of a scene. This is quite appropriate when creating “art” from a captured image, and is ideal for illustrations, but perhaps shouldn’t be a regular post-capture routine.

Photo-illustration is an excellent use of serious manipulation, and can be quite effective for conveying abstract concepts and illustrations.

Earlier in my career, I worked as a photoengraver in a large trade shop in Nashville, Tennessee, during the early days of digital image manipulation. The shop handled the pre-press chores for many national accounts and international publications. On one occasion in 1979, we were producing a cover for one of these magazines. On the cover was a picture of Egypt’s President Anwar Sadat set against one of the great pyramids. Unfortunately, the pyramid was in a position that interfered with the titles on the magazine’s cover.

While this is not the exact picture used in the magazine, you see the challenge.

The Art Director for the magazine sent instructions for us to shift the pyramid in the picture so that the titles would not interfere with it. Moving that thing was an amazing feat back then. Normal airbrushing would have left obvious evidence of visual trickery, but digital manipulation opened a whole new potential for near-perfect deception. We were amazed at the potential but a bit nervous about the moral implications of using this power.

This venture was accomplished (over a decade before Photoshop) on an editing machine called a Scitex Response, a workstation supported by a very powerful minicomputer. Nobody outside that small building knew that, from Nashville, we had pushed an Egyptian pyramid across the desert floor; it wasn’t revealed until years later. Shortly thereafter, digitally altered images were prohibited from use as evidence in a court of law by the Supreme Court of the United States. Today, this level of manipulation lets you routinely alter reality and play god on a laptop, sitting on a park bench.

Manipulation is powerful stuff and should be used with serious restraint; not so much for legal reasons, but because of diminishing regard for nature and reality. Fantasyland is fun, but reality is where we live. We quite regularly mask skies and replace boring clouds with blue skies and dramatic clouds, and even sunsets – all without hesitation. We can move people around a scene and clone them with ease using popular photo editing software. Reality has become anything but reality. Photo contests prohibit photo manipulation in certain categories, though a skillful operator can cover their digital tracks and fool the general public. However, savvy judges can always tell the difference.

Typical manipulation: a clouded sky added to replace lost detail.

Personal recommendation: keep the tricks and photo optics to a minimum. Incorporating someone else’s pre-set formulas and interpretation into your photos usually compromises your personal artistic abilities. Don’t define your style by filtering your image through someone else’s interpretation. Be the artist, not the template. Take your images off the assembly line and deal with them individually.

Image optimization

Photo optimization is an entirely different kind of editing altogether, and the one I practice in my professional career. I optimize photos for several city magazines in South Florida. Preparing images for the printed page isn’t the same as preparing them for inkjet printing: commercial presses use totally different inks, transfer systems, papers, and production speeds than inkjet printers do. Each process requires a different distribution of tones and colors.

Since my early days in photoengraving, I’ve sought to squeeze every pixel for all the clarity and definition it can deliver. The first rule (of my personal discipline) is to perform only global tonal and color adjustments. Rarely should you have to rely on pixel editing to reveal the beauty and dynamic of a scene. Digital photography is all about light. Think of light as your paintbrush and the camera as nothing more than the canvas that your image is painted on. Learn to control light during the capture and your post-production chores will diminish significantly. Dodging, burning and other local editing should be required rarely, if at all.

Both internal contrast and color intensity (saturation) were adjusted to uncover lost detail.

Even the very best digital camera sensors cannot discern what is “important” information within each image’s tonal range. A sensor captures an amazing range of light, from the lightest to the darkest areas of an image, but all cameras lack the critical element of artistic judgment concerning the internal contrast of that range.

If you capture your images in RAW format, all that range packed into each 12-bit image (4,096 shade values per channel, nearly 68.7 billion possible color combinations between the darkest pixel and the lightest) can be interpreted, articulated, and distributed to unveil the critical detail hiding between the shadows and the highlights. I’ve edited tens of thousands of images over my career, and very few fail to reveal additional detail with just a little investigation. There are five distinct tonal zones (highlights, quarter-tones, middle-tones, three-quarter-tones, and shadows) in every image, and each can be individually pushed, pulled, and contorted to reveal the detail contained therein. While a printed image is always distilled down to 256 tones per color, this editing process lets you, the artist, decide how the image is interpreted.
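The bit-depth arithmetic here is easy to verify: a 12-bit capture records 2^12 levels per channel, while the 256 tones per color of a print file are 2^8.

```python
# Tonal resolution at different bit depths.
raw_levels = 2 ** 12      # 4,096 shade values per channel in a 12-bit RAW
print_levels = 2 ** 8     # 256 tones per color in the final printed image

# Across three RGB channels, the per-channel levels multiply out to
# the total number of distinguishable colors.
raw_colors = raw_levels ** 3

print(raw_levels)    # 4096
print(raw_colors)    # 68719476736 (about 68.7 billion)
```

That surplus of levels over the final 256 is the headroom the tonal zones are carved from.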

Shadow (dark) tones quite easily lose their detail and print too dark if not lightened selectively by internal contrast adjustment. The Shadows slider (Camera Raw and Lightroom) was lightened.

The real artistry of editing images is not accomplished by the imagination, but rather by investigation and discernment. No amount of image embellishment can come close to the beauty that is revealed by merely uncovering reality. The reason most photos don’t show the full dynamic of natural light is that the human eye can interpret detail in a scene while the camera can only record the overall dynamic range. Only when we (photographers/editors/image-optimizers) take the time to uncover the power and beauty woven into each image can we come close to producing what our eyes and our brain’s visual cortex experience all day, every day.

Personal Challenge

Strive to extract the existing detail in your images more than you paint over and repair the initial appearance. There is usually amazing detail hiding there just below the surface. After you capture all the potential range with your camera capture (balancing your camera’s exposure between the navigational beacons of your camera’s histogram), you must then go on an expedition to explore everything that your camera has captured. Your job is to discover the detail, distribute the detail, and display that detail to the rest of us.

Happy hunting.



Digital Photography School

 

 

Side-by-side comparison between reflectors and diffusers for portraits

07 Oct

Comparison between reflectors and diffusers for portraits

If you’ve been looking into portrait photography, whether casually or in depth, as a hobby, to improve your portraits, or as a pro, you will have come across the use of reflectors. Reflectors come in various sizes, shapes and colours. My favourites are the 5-in-1 circular foldaway reflectors and the rectangular panels you can lean against a surface or clip to a stand.

Have you ever wondered what a side-by-side comparison using different reflector colors would look like? Wonder no more. Below, you can see the different types I used. These photos all share the same white balance and editing, as I wanted the colors to reflect the true effects of the reflectors as closely as possible. I’ve also kept the edits as clean as possible.

#1 Three reflectors, light shirt


Top left is a natural light portrait lit only by window light, half-clear and half-frosted (diffused), with no reflectors used. The window is large enough for a big spread illuminating both face and background. The portrait to its right shows a rather obvious warm glow all over. I used two gold reflectors: one camera right and one in front, underneath the face. This setup warmed everything up: shirt, teeth, face and even the background.

Compare the effect of the gold reflector to the bottom left, where two silver reflectors were positioned in the same places. Notice how cool the color temperature has become. Next to it on the right, I once again used the same setup, but this time with two white reflectors. Notice the color temperature is still cool but softer, less sharp and less edgy than with silver. Look carefully and the difference is most apparent in the teeth and skin tone, which are just a touch warmer.

#2 Two reflectors, dark shirt

I thought I’d do the same comparison, this time with the subject wearing black. The difference is more dramatic. With the gold reflector, the black is richer and darker, whereas with the silver it’s a little more washed out.

For me, these are both a bit extreme, with the gold reflector being too warm and the silver too cold. If I were to edit these photos without considering the true effects of the reflectors, I’d tone down the warmth of the gold shot by half and be good with that. Similarly, I’d warm up the silver shot in post, both using the white balance slider. I’d then get the happy warm tone that I’m after.

#3 Diffuser reflector and flash

If you’re familiar with the 5-in-1 reflector, you’ll know there are four colored sides: white, silver, gold and black. These sides are made of fabric stitched together as one zipped wrap, which wraps around a standalone middle piece that’s translucent. This is the diffuser, a super versatile tool. Strictly speaking, the black side doesn’t reflect light but rather absorbs it, and is good for cutting light out or for use as a flag.

This diffuser is great when shooting in harsh outdoor sunlight and you just want to cut the light down or tone it down by placing the diffuser between the sunlight and the subject. In effect, you are creating a slightly shadowed area for the subject, which makes it ideal for portrait lighting outdoors.

I thought I’d try this same technique for my indoor portraits by using this diffuser to cut down light from a flash, thereby acting like a big softbox but without the bounce.

Here are the results. The left photo was lit with one flash on camera right positioned behind the diffuser, which was pretty big and placed close to the subject for a smooth, soft light. The right photo had two flashes, again diffused, with one light overhead to provide hair light and light the background for more separation.

#4 Diffused natural light vs diffused flash

The final comparison is between diffused window light in the left photo and one diffused flash in the right photo. Window light produced softer shadows here, with less contrast and a bigger spread of light. In contrast, the diffused flash produced more defined shadows; it’s a smaller source than the window, with less spread of light, but it sculpted the face better.

If I were to mimic the higher contrast and shadows produced by the diffused flash, one trick would be to cut out the light by using the black reflector side as a flag. I wasn’t able to do this, however, as I didn’t want my subject to be waiting too long for my experiment. (He only came in for one headshot.)

I hope you found this little comparison exercise fun and enlightening. It’s amazing what the 5-in-1 reflector, a small and inexpensive tool, can do to your portraits.

The post Side-by-side comparison between reflectors and diffusers for portraits appeared first on Digital Photography School.



 

 

Light reportedly has phones with ‘between 5 and 9 lenses’ due out later this year

03 Jul

In 2015, Light burst onto the photography scene with the introduction of the L16, a portable camera that promised ‘DSLR quality’ photos in a pint-sized package thanks to a unique design that featured 16 different lenses and sensors.

It wasn’t until 2017 that we were able to get a peek at the first full-res image samples. Since then, there hasn’t been a lot of positive feedback surrounding the functionality and image quality of the camera, but it seems Light is far from done working on its multi-camera setups.

According to a report from Geoffrey A. Fowler of The Washington Post, Light has shown off concepts and working prototypes of phones that include between five and nine lenses. ‘[Light] says its phone design is capable of capturing 64 megapixel shots, better low-light performance and sophisticated depth effects,’ says Fowler in the article.

We’re not exactly sure what Fowler means when he says ‘its phone design,’ as Light has never stated intentions of creating its own phone. What Light has said in the past is that it’s working with manufacturers to put its cameras and software into future devices.

In speaking to Wired in December 2017, Senior Vice President of Marketing and Product Design at Light, Bradley Lautenback, said ‘one manufacturer is already at work on a Light-enabled phone, and more are in the works.’

According to The Washington Post report, Light says ‘a smartphone featuring its multi-lens array will be announced later this year.’ There’s no word on what manufacturer it’ll be from, but considering Foxconn is an investor in Light, it wouldn’t be a surprise to see it from a company with ties to the Taiwanese manufacturer.

The list of smartphone manufacturers who are customers of Foxconn includes Apple, BlackBerry, Huawei, Microsoft, Motorola, Xiaomi and others.

Articles: Digital Photography Review (dpreview.com)

 

 

Tips for Choosing Between RAW Versus JPEG File Format

28 May

Perhaps the most commonly asked question in digital photography is which file type to use when shooting: JPEG or RAW. Don’t worry if you don’t know much about these two formats or whether your camera supports them. My goal, by the end of this article, is to help you understand what these two types are and help you pick the one that is right for you.


RAW Versus JPEG File Format

At the very basic level, both JPEG and RAW are types of files that the camera produces as its output. Most newer cameras today have both these options, along with a few others like M-RAW, S-RAW, large-format JPEG, and small-format JPEG, all of which determine the size of the final output file.

The easiest way to see which file formats are supported by your camera is to review your camera user manual – look for a section on file formats. Or you can go through the menu options of your camera and select Quality (for Nikon) or Image Quality (Canon) to select the file format.

Each file format has its advantages and disadvantages, so choose the option that works best for you. JPEGs are, in reality, RAW files that have been processed in camera and compressed into that format. Some of the decisions the camera makes in processing the image may be difficult to change later, but JPEG file sizes tend to be much smaller.

Let’s look at the advantages and disadvantages of both these file formats in greater detail.

Advantages of shooting RAW files

  • It is easier to correct exposure mistakes with RAW files than with JPEGs and overexposed highlights can sometimes be rescued. For people like me who tend to always photograph at least 1/2 stop to 1 stop overexposed (based on my style of photography), this is really beneficial in saving many great images in post-production.
  • The higher dynamic range means better ability to preserve both highlights and shadow details in a high contrast scene when the image is being recorded.
  • White Balance corrections are easier to make.
  • Decisions about sharpening, contrast, and saturation can be deferred until the image is processed on the computer.
  • All the original image data is preserved. In fact, when RAW files are opened in post-production software like Lightroom, a virtual copy is made and used. Edits are made in a non-destructive format so the original RAW file is always available for changes at a later stage. This is very useful when you want to edit images in different ways at different times in your photographic career.

Left is the RAW file straight out of the camera. On the right is the finished edited image from the same file.

The image on the left (above) was completely blown out because I was in the car and did not have my settings correct. But because I photographed in RAW, I was able to salvage a lot of detail in the image. This would not have been possible with a JPEG file.
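Why does that recovery work? A toy simulation of the difference (illustrative numbers only, not a real camera pipeline): an 8-bit JPEG clips everything above its white point at capture, while the wider RAW scale still holds distinct values that a one-stop exposure pull can bring back.

```python
import numpy as np

# A toy overexposed scene: true linear brightness values, several of
# them beyond the display white point of 1.0.
scene = np.array([0.2, 0.6, 1.2, 1.6, 1.9])

# 8-bit "JPEG": clipped to 1.0 at capture, then quantized to 256 levels.
jpeg = np.round(np.clip(scene, 0, 1.0) * 255) / 255

# 12-bit "RAW": in this toy model the sensor saturates at 2.0, so the
# overexposed values remain distinct across 4,096 levels.
raw = np.round(np.clip(scene, 0, 2.0) / 2.0 * 4095) / 4095 * 2.0

# Pull the exposure down one stop (divide by 2) in post:
jpeg_recovered = jpeg / 2   # the three clipped pixels all collapse to 0.5
raw_recovered = raw / 2     # the three bright pixels stay distinct

print(np.unique(jpeg_recovered[2:]).size)  # 1: highlight detail gone
print(np.unique(raw_recovered[2:]).size)   # 3: highlight detail kept
```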


An image that was not properly exposed but photographed in RAW.


The edited image that was corrected in post-processing for exposure issues.

Disadvantages of RAW files

  • RAW files tend to be much larger in size compared to JPEGs thereby requiring more storage, not just in camera but also on external storage devices or your computer hard drives.
  • RAW images take longer to write to your memory card, which means shorter bursts of continuous shooting. For example, my Canon 5D Mark III can write about 12 RAW files continuously but 30+ JPEG files in continuous (burst) shooting mode. Check your camera manual for specifics about your own camera’s burst mode (a.k.a. continuous shooting mode).
  • Not all programs can read RAW files. This used to be a real issue, but there are now lots of great programs that work directly with RAW files, such as Adobe Lightroom, Adobe Camera Raw, Luminar, ON1 Photo RAW, ACDSee Photo Studio Ultimate, and others.
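A rough back-of-the-envelope comparison for the size difference above, using assumed figures (a 24-megapixel sensor, 14-bit RAW data, roughly 10:1 JPEG compression; real file sizes vary with content and settings):

```python
MEGAPIXELS = 24

# Uncompressed RAW: one 14-bit sample per photosite.
raw_mb = MEGAPIXELS * 1e6 * 14 / 8 / 1e6      # bits -> bytes -> megabytes

# JPEG: 8-bit RGB (3 bytes per pixel) at an assumed 10:1 compression.
jpeg_mb = MEGAPIXELS * 1e6 * 3 / 10 / 1e6

print(raw_mb)   # 42.0 (MB)
print(jpeg_mb)  # 7.2 (MB)
```

In practice cameras compress RAW files too, but the several-fold gap holds.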

Advantages of shooting JPEGs

  • JPEG files are much smaller in size compared to RAW files and hence need less storage space – both in camera memory and on your computer hard drives.
  • JPEG images write to disk more quickly, which means longer bursts of continuous shooting, especially useful for wildlife photography, fast-action sports, or even little kids who are always on the move.
  • JPEG files can be instantly viewed by many programs, including common web browsers, PowerPoint, and other everyday applications.

Disadvantages of JPEG files

  • It is harder to fix exposure mistakes in post-production with JPEG files.
  • JPEG files store a smaller dynamic range of information, which often means less ability to preserve both highlight and shadow details in the image.
  • White Balance corrections are more difficult with JPEG files.
  • Decisions about sharpness, contrast, and saturation are set in the camera itself and in most cases, these are difficult to change later in post-production without destroying the image quality.
  • Since a JPEG image is essentially a RAW image compressed in-camera, the camera’s computer makes decisions on what data to retain and which to toss out when compressing the file.

The same image, when edited as a JPEG for exposure issues, becomes a lot grainier than an underexposed RAW image; you cannot pull JPEGs as far as RAW files.

Another old-school way to think about these two file types is as slides and negatives. JPEGs are like slides or transparencies and RAW files are like negatives. With JPEGs, most of the decisions about how the image will look are made before the shutter is pressed and there are fewer options for changes later. But RAW files almost always require further processing and adjustments – just like negatives.

Which format to choose?

Now that you understand the difference between RAW and JPEG images, deciding which one to use is dependent on a few different factors.

  • Do you want to spend time in post-processing your images to your taste and photography style?
  • Are there any issues with limited space on your camera’s memory card and/or computer hard drives?
  • Do you have software and/or editing programs that will read RAW files easily?
  • Do you intend to print your images or even share images online in a professional capacity?

Some photographers are intimidated by RAW images. I was as well when I had just gotten started in photography because I did not know the true power of a RAW image. However, once I started photographing in RAW there was no going back.

Even everyday snapshots are shot in RAW now because of the great flexibility I have in correcting any mistakes that I make. One of the most common mistakes many photographers make is incorrect exposure, and that is relatively easy to fix with RAW files.


I accidentally overexposed the setting sun and lost some of that golden warmth hitting the tree.


One of my favorite San Francisco cityscapes at sunset. I accidentally overexposed and lost the sun flare but was able to edit it and bring back that sunset warmth in post-production because it is a RAW file.

It’s getting easier to use RAW files

The two main issues traditionally associated with RAW files are fading by the day:

  1. The cost of memory to store or backup these RAW files is getting cheaper and cheaper by the day.
  2. Software that can read RAW files is more readily available. In fact, there is even inexpensive and free software that can read these RAW files now.

There is still the issue of write speed for your camera. If you focus on fast-moving subjects like wildlife or sports photography then perhaps write speed is a key factor in deciding whether to photograph in RAW versus JPEG. So for fast moving objects and/or wildlife and birding photos, JPEG may be a better choice.

Another thing to note is that most of the newer cameras have the ability to capture both JPEG and RAW images at the same time. But this takes up even more storage space and might not be the best use of memory. You are better off just picking one option and sticking with that.


Waterfall images shot with a slow shutter speed tend to blow out the background, but editing a RAW image in Lightroom helps bring back some of the highlights.

Conclusion

I hope this was helpful in not only understanding the differences between RAW versus JPEG file formats but also in helping you decide which one to choose and why. So tell me, do you belong to the RAW or the JPEG camp?!

The post Tips for Choosing Between RAW Versus JPEG File Format appeared first on Digital Photography School.



 

 

Video: The difference between Saturation and Vibrance explained

29 Sep

You’ve probably heard this question once or twice from a novice, or maybe even asked it yourself: what exactly is the difference between the Vibrance and Saturation sliders? Well, fortunately, Jesus Ramirez of Photoshop Training Channel has put together a quick, simple, and thorough explanation that you can reference from here on out.

At the most basic level, both options increase color intensity—the difference lies in which colors they affect and how.

Saturation impacts all color intensity equally, which is why it’s so easy to go overboard so quickly. Vibrance, on the other hand, increases the intensity of only the less saturated colors in an image, while trying to avoid skin tones and prevent the gaudy posterization that happens when you crank saturation up to the max.
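That behavior can be sketched per pixel (a simplified model of the idea, using HSV saturation as the measure of intensity; this is not Photoshop's actual algorithm): a saturation boost adds the same amount to every pixel, while a vibrance boost is weighted by how unsaturated the pixel already is.

```python
import colorsys

def boost(rgb, amount, vibrance=False):
    """Boost the color intensity of one RGB pixel (channels in 0..1).

    With vibrance=True the boost is scaled by (1 - current saturation),
    so muted colors move a lot while vivid colors barely move.
    Simplified model only, not Photoshop's actual algorithm.
    """
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    weight = (1 - s) if vibrance else 1.0
    s = min(1.0, s + amount * weight)
    return colorsys.hsv_to_rgb(h, s, v)

muted = (0.5, 0.45, 0.4)   # saturation 0.2
vivid = (0.9, 0.2, 0.1)    # saturation ~0.89

# Saturation (+0.3) pushes both pixels equally hard; vibrance (+0.3)
# lifts the muted pixel to ~0.44 saturation while nudging the already
# vivid one only to ~0.92.
sat_muted = colorsys.rgb_to_hsv(*boost(muted, 0.3))[1]
vib_muted = colorsys.rgb_to_hsv(*boost(muted, 0.3, vibrance=True))[1]
vib_vivid = colorsys.rgb_to_hsv(*boost(vivid, 0.3, vibrance=True))[1]
```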

Jesus covers this difference in his video—with appropriate demos of course—but he also goes a bit further by diving into how the Saturation slider differs between the HSL panel and the Vibrance panel, and showing how the two options, Vibrance and Saturation, can be combined to achieve pleasing results that don’t look like you puked a rainbow all over your image.

Check out the full 5-minute video above to see the useful rundown for yourself, and then head over to the Photoshop Training Channel for even more handy tutorials like this one.

Articles: Digital Photography Review (dpreview.com)

 