
Shooting in the Snow

09 Jun

For years now, I’ve had this concept of a shoot at night, under the streetlamps with snow coming down… dramatic light and environment… the whole nine yards

Unfortunately I live in Philly, and the only “snow” we get here is usually slush, rain, and more slush…

’Course this week was different… I practically couldn’t have asked for better conditions. It was coming down pretty hard, and I think I put the weather sealing on my gear to the test (watching piles of snow accumulate on your lights is a little disconcerting…). By the time we finished, my camera bag on the ground was merely a small white lump. Luckily there were no gear casualties, and some great shots came out…

[Gallery: snowshoot, photos 1–7]


Posted in Photography

 

Visionox is ready to mass produce under-display front cameras

08 Jun

In the middle of last year, Chinese smartphone manufacturers Xiaomi and Oppo first showcased new technology that allows the front camera in a smartphone to be installed under, and capture images through, the display. The main benefit of this technology is the eradication of unsightly display bezels, notches and punch holes for housing the front camera.

The technology eventually made it into prototype devices but in January 2020 Xiaomi VP Lu Weibing said it should not be expected to arrive in a production device any time soon, as there were still a number of challenges to overcome.

The principal reason given at the time was the high pixel density of modern smartphone displays, which blocked too much of the incoming light. Combined with the small image sensors of most front cameras, this meant severely limited light-gathering capability and ultimately sub-par image quality, especially in low light. In addition, diffraction from the protective glass could lead to color issues.

However, it appears it’s taken less time than expected to solve this issue. According to reports in Chinese media, Visionox, a major OLED manufacturer, is ready to start mass production of displays with under-screen cameras.

Visionox claims it has been able to increase light transmittance and reduce diffraction by using different organic and non-organic film materials that offer higher transparency. On the software side of things, a new algorithm is capable of correcting brightness and color casts as well as viewing angle issues. It also removes the glare that could be seen in sample images from early prototypes.

The pixel density on the portion of the display covering the camera lens has also been modified to allow for better light transmission. On a Full-HD display, the resolution in the specific area where the camera is located could be reduced to HD or even SD levels. We’ll have to wait for the first production devices to see if the change in resolution will be noticeable on the display and if image quality is comparable to more conventional front camera implementations.

Visionox says hundreds of new technologies have been applied in order to get to the mass production stage. Even with the company in a position to manufacture the new type of displays now it’ll still be a while before we can expect devices equipped with the technology. The first models with under-display cameras are expected to see the light of day in Q1 2021.

Articles: Digital Photography Review (dpreview.com)

 

Posted in Uncategorized

 

Negotiating for photographers

08 Jun

One of the issues that always comes up for emerging photographers is dealing with contracts and negotiation. Let’s face it – most of us are more “artist” than “businessman”. We just want to make pictures and leave the legalese to someone else.

Of course the reality is that to be a successful *artist* you must be a successful *businessman* as well. 

Go to any photography forum on the web and you will invariably find questions such as “how do I make a contract/terms for such and such a job?” or “the client sent me this crazy contract to sign, what do I do?”

In this piece, Bill Cramer of Wonderful Machine, Inc. shows an actual contract negotiation he had with an editorial client, including the exchanges, contracts and revisions. This is a fantastic read for any photographer and a perfect example of how to do it right. In particular, notice how he responds to the disagreement over contract terms – guiding the exchange toward a mutually satisfactory agreement rather than stonewalling and confronting.

This is great stuff folks!

http://www.wonderfulmachine.com/wmideabox/pricing_and_negotiating_01.html


Posted in Photography

 

Apple patent shows how you may one day be able to capture ‘synthetic group selfies’

08 Jun

In the age of physical distancing, taking selfies with friends has become challenging to say the least, due to the worldwide suggestion to keep six feet apart to prevent the spread of the novel coronavirus. A recently-discovered patent from Apple, however, shows how we might one day be able to take a group selfie without needing to be next to each other — or even in the same room for that matter.

First discovered by Patently Apple, the ‘synthetic group selfie’ patent wasn’t created in response to the COVID-19 pandemic, as it was originally filed back in July 2018. However, it feels more relevant than ever, as the desire to stay connected in an age of physical distancing grows.

A pair of illustrations from the patent showing how the layers within the composited scene could be moved around to better frame people within the selfie.

According to the patent, you could create a ‘synthetic group selfie’ by inviting friends and family to a shared photo session. The group selfie mode would then place those invited to the session next to one another in the image to give the appearance that everyone is right there in the frame. The patent also notes this mode could be used for video and livestreaming options with other options for changing the arrangement of people within the frame.

An illustration from the patent showing how the composition process would work.

As with all patents, it’s unclear if we’ll ever see this option in a forthcoming iOS update. But it would be a convenient option now more than ever, making virtual interactions more social while still staying distanced.

Articles: Digital Photography Review (dpreview.com)

 

Posted in Uncategorized

 

Tripod Buying Guide – 6 Vital Features to Look For

08 Jun

The post Tripod Buying Guide – 6 Vital Features to Look For appeared first on Digital Photography School. It was authored by Suzi Pratt.


A tripod is one of the first accessories people like to buy when they get a new camera. But there are hundreds of thousands of tripods out there, all with different features and price points. How do you go about choosing the best tripod for you? This tripod buying guide will highlight 6 features to consider before purchasing a new tripod.

Best tripod for beginners
Canon 5D Mark III with Canon 100mm f/2.8 – 1/160 sec, f/7.1, ISO 400

Why use a tripod?

There are a couple of reasons why you might need a tripod in the first place.

First, you should use a tripod if you plan to shoot at slow shutter speeds or narrow apertures (high f-stops). These conditions are typical of real estate, interior, architectural, and landscape photography, where you need your scene to be as sharp as possible, often in low light.

You should also use a tripod when shooting bracketed photos for compositing or HDR in post-production, or when taking selfies or group photos that you want to be a part of.

There are certainly more reasons to consider using a tripod, but hopefully, these give you good examples to start thinking about.

Waterfall tripod photo
Fujifilm X-T3 with Carl Zeiss Touit 12mm f2.8 WITHOUT Tripod – 1/75 sec, f/2.8, ISO 2500
Waterfall tripod photo
Fujifilm X-T3 with Zeiss 12mm WITH Tripod – 0.8 sec, f/9, ISO 400

1. Payload (or load capacity)

The very first feature to consider when researching a tripod is its payload, or maximum load capacity. In other words, how much weight is it able to support? The payload is typically found in each tripod’s product description. To come up with the number you need, consider the heaviest and largest camera setup that you plan to use on the tripod. Camera and lens weights can easily be found via a Google search or in their respective product descriptions.

For example, my Sony A7rIII camera body alone weighs 23.2 oz (657 grams). My heaviest lens, the Sony 70-200mm f/2.8 weighs 52.16 oz (1480 grams). So together, my heaviest camera setup would be 75.36 ounces (2137 grams). That means I should find a tripod with a payload of at least that amount.

It is also important to look at the payload of the tripod head or the piece that attaches your camera to the tripod legs. Some tripods come with a head included, or you can replace it with a head that you buy separately. Many tripod heads have their own payloads specified, so be sure to consider that number as well.
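The arithmetic above is easy to script as a sanity check. This is just an illustrative sketch using the Sony weights quoted in the text; the 20% headroom is a common rule of thumb I've assumed, not a manufacturer spec:

```python
# Worked payload check using the camera/lens weights quoted above.
# The 20% headroom is an assumed rule of thumb, not a spec.

body_g = 657        # Sony A7R III body, in grams
lens_g = 1480       # Sony 70-200mm f/2.8, in grams

setup_g = body_g + lens_g
required_payload_kg = setup_g * 1.2 / 1000   # add 20% headroom, convert to kg

print(f"Heaviest setup: {setup_g} g")
print(f"Look for a payload rating of at least {required_payload_kg:.2f} kg")
```

Run the same comparison against the head's rated payload too, since legs and head are often rated separately.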

Best tripod for beginners
MeFoto Roadtrip travel tripod. Canon 5D Mark III with Canon 100mm f/2.8 – 1/160 sec, f/7.1, ISO 400

2. Tripod minimum and maximum height

All tripods have a minimum and maximum height expressed in their product descriptions. Some tripods can get ultra-tall, others can get ultra-low to the ground. Think about what kind of subjects you will be photographing, and the optimal height you would want your tripod to be.

If you are tall or plan to shoot tall subjects, aim to get the tallest tripod you can find. However, if you shoot subjects that are lower to the ground, you may want to consider tripods with a low minimum height. There are even new tripods like the upcoming Peak Design Travel Tripod designed to get extremely low, down to 5.5 inches.

Best tripod for beginners
Manfrotto 055 tripod. Canon 5D Mark III with Canon 100mm f/2.8 – 1/160 sec, f/7.1, ISO 400

3. How stable is the tripod?

The next quality to consider is how stable the tripod is. First, consider the payload or weight capacity mentioned above – this will give you a good idea of whether the tripod can support your camera and lens combination. But there are other features that can enhance tripod stability.

Some tripods come with retractable or removable spikes in the tripod feet. These provide extra stability by sticking into the dirt or soft ground if you happen to be shooting outside.

Tripods can also come with a retractable hook in the center column of the tripod, allowing you to hang weight to stabilize the tripod. Attaching a heavy sandbag to the hook is often the optimal option, but you can also get creative by using other items like a heavy water bottle or even your camera bag.

Best tripod for beginners
MeFoto Roadtrip Travel Tripod.
Canon 5D Mark III with Canon 100mm f/2.8 – 1/160 sec, f/7.1, ISO 400
Best tripod for beginners
Legs of the Manfrotto 055 tripod.
Canon 5D Mark III with Canon 100mm f/2.8 – 1/160 sec, f/7.1, ISO 400

4. How easy is it to carry the tripod?

If you plan to travel a lot with your tripod, or use it on the go, it’s important to consider the overall weight and folded length of the tripod. If you opt for a heavy, large tripod, you might get optimal stability, but you will likely struggle to carry that tripod around.

Consider the material the tripod is made from. Most tripods are made of aluminum (cheaper, but heavier) or carbon fiber (lightweight, but more expensive). Many tripod models are available in either construction material, so think about your budget and how important the weight saving is to you.

Also, look at how easily the tripod folds up. Most tripod legs have three sections, meaning they get taller with each section you open, but some have two or even five. More leg sections mean a longer time to set up and put away. Along the same lines, look at the locking mechanism the legs use. Most tripods use a twist lock, which can make it confusing which direction locks or unlocks the legs. Other tripods have a simple clip lock that is much easier to operate.

5. Tripod head quality?

Some tripods come with a tripod head, and others require that you buy one separately. In some cases, you may even want to buy your own tripod head if you have a preference for a particular type.

A ball head is the most common type of tripod head, allowing for 360-degree rotation to position the camera where you want it. However, many ball heads, especially cheap or low-quality ones, will slip over time and be less stable. Thus, it may be worth buying a high-end ball head or looking at another type of head to use on your tripod.

Examples include the Manfrotto 3-Way (my favorite), or a pistol grip tripod head. Pretty much every large tripod allows you to replace the tripod head with one of your choosing.

Best tripod for beginners
Standard Arca Swiss type tripod ball head.
Canon 5D Mark III with Canon 100mm f/2.8 – 1/160 sec, f/7.1, ISO 400

The final piece of the tripod head to consider is the tripod plate, the piece that mounts directly to your camera. Arca-Swiss type plates are among the most common and universal, but they often require an Allen wrench to tighten the plate to your camera.

On the other hand, other tripod plates, such as those made by Manfrotto or Joby, include a twist screw that you can easily secure without an extra tool.

6. Extra features?

The last things to consider are any extra features or bells and whistles that come with the tripod. Here are a few examples to look out for:

Tripod to monopod conversion

Some tripods, such as those made by MeFoto, allow you to easily convert the tripod into a monopod by removing one leg and attaching it to the center column. This is a handy feature if you anticipate needing a monopod.

Tripods with a column that can be positioned at 90 degrees

If you need to shoot with a 90-degree column, look for a tripod that offers this feature. My Manfrotto 055 has it, and it comes in very handy for product or flat-lay photography.

Built-in bubble leveler

While many cameras have a built-in electronic level, it always helps to have a physical bubble level to make sure your camera is straight. Some tripods have bubble levels built into the tripod head or the center column.

Carrying case

Some tripods come with a carrying case to aid in transportation, and others require the carrying case to be purchased separately.

Best tripod for beginners
Manfrotto 3-Way tripod head.
Canon 5D Mark III with Canon 100mm f/2.8 – 1/160 sec, f/2.8, ISO 2000

In conclusion

There are a plethora of tripods out there and it is not an easy task to find the right one for you. Ultimately, this tripod buying guide is intended to help you think of all of the situations in which you plan to use a tripod and encourage you to carefully research all six features above. And while there are plenty of cheap tripods out there, consider investing in a high-quality tripod to begin with. Your camera equipment is expensive, and you don’t want to risk dropping or damaging it due to placing it on a cheap tripod.



Digital Photography School

 

Posted in Photography

 

Pinup with Brittany

08 Jun

Just a fun shoot testing out some concepts for a classic “Vargas style” pinup…  thought they came out nicely…

[Photo: brittany-pinup-005]

Posted in Photography

 

darkly inspiring…

08 Jun

If you’re going to try, go all the way. Otherwise don’t even start.

This could mean losing girlfriends, wives, relatives, jobs, and maybe your mind.

It could mean not eating for three or four days.
It could mean freezing on a park bench.
It could mean jail. It could mean derision.
It could mean mockery, isolation.

Isolation is the gift. All the others are a test of your endurance.
Of how much you really want to do it.

And you’ll do it, despite rejection and the worst odds.
And it will be better than anything else you can imagine.

If you’re going to try, go all the way.

There is no other feeling like that.
You will be alone with the gods. And the nights will flame with fire.

You will ride life straight to perfect laughter.

It’s the only good fight there is.

Henry Charles Bukowski
1920 – 1994


Posted in Photography

 

the solution to the elinchrom quadra umbrella mount

08 Jun

When I was looking at reviews of the Elinchrom Ranger Quadra, one of the most common complaints people seemed to have was the fact that they couldn’t mount standard umbrellas on it (Elinchrom uses a 7mm umbrella shaft, while most others in the US use an 8mm shaft).

Now this seemed kind of silly to me – first off, the Quadra head is extremely small and lightweight – hanging anything larger than a *tiny* umbrella off it just seems like a recipe for disaster.   Secondly I thought to myself “I mount umbrellas on speedlights all the time and they have no umbrella holder whatsoever – why is the quadra any different?”  The solution:

The plain ol’ vanilla umbrella swivel. Beloved of “Strobists” everywhere, it provides a secure slot/mount for the umbrella, placing the weight and torque on itself rather than on the strobe head. Screw a post into the top, plop the Quadra head on that, and you’re good to go. You can even still use the angle of the Quadra head itself to hit the sweet spot of the umbrella.

Personally this is just fine for me. It works great for my umbrellas, Apollo softboxes and Softliters… but let’s say you need the light more “on axis” with the umbrella shaft (maybe to fit the hole in one of the new PLM diffusion screens, for instance…). I found the easiest thing is to simply take a second swivel and use it to “hang” the head off the umbrella shaft itself. The head is light enough that it doesn’t put undue strain on the shaft (or the swivel).

Kinda kludgy but it works.  Personally I don’t bother – mounting it on the swivel itself is quick, easy and gets the job done with hardware that I’m already carrying anyway for my speedlights.

So there you have it:  Mounting umbrellas on the Quadra made easy!


Posted in Photography

 

Video: Netflix makes its Platon documentary episode free on YouTube

08 Jun

Although Netflix uploaded the video back in April, we’ve only just discovered that the Platon episode of its ‘Abstract: The Art of Design’ docuseries is available to view for free on YouTube.

The 2017 docuseries consists of two seasons featuring a total of fourteen 45-minute episodes that cover the work of some of the best artists across the globe in their respective fields, from architecture and automotive design to stage design and typography. For its ‘Photography’ episode, Netflix features Platon Antoniou, more commonly known by his mononym Platon, a renowned portrait photographer whose portfolio includes some of the most prominent and powerful figures the world over.

While the other episodes require a Netflix subscription, the Platon episode is now free to view on YouTube. Throughout the 45-minute episode, we get an inside look at the man behind the camera and follow along as he captures a portrait of Colin Powell, former National Security Advisor and retired four-star general.

It’s a fantastic watch from beginning to end. If you have a Netflix subscription, we also suggest watching some of the other episodes in the series.

Articles: Digital Photography Review (dpreview.com)

 

Posted in Uncategorized

 

Computational photography part II: Computational sensors and optics

08 Jun

Editor’s note: This is the second article in a three-part series by guest contributor Vasily Zubarev. The first and third parts can be found here:

  • Part I: What is computational photography?
  • Part III: Computational lighting, 3D scene and augmented reality (coming soon)

You can visit Vasily’s website where he also demystifies other complex subjects. If you find this article useful we encourage you to give him a small donation so that he can write about other interesting topics.


Computational Sensor: Plenoptic and Light Fields

Well, our sensors are crap. We’ve simply gotten used to them and try to do our best with them. They haven’t changed much in their design from the beginning of time. The manufacturing process was the only thing that improved — we reduced the distance between pixels, fought read noise, increased readout speeds and added dedicated pixels for phase-detection autofocus systems. But even if we take the most expensive camera and try to photograph a running cat in indoor light, the cat will win.

  • Video link: The Science of Camera Sensors

We’ve been trying to invent a better sensor for a long time. You can google a lot of research in this field by “computational sensor” or “non-Bayer sensor” queries. Even the Pixel Shifting example can be referred to as an attempt to improve sensors with calculations.

The most promising stories of the last twenty years, though, come to us from plenoptic cameras.

To calm your sense of impending boring math, I’ll throw in an insider’s note — the latest Google Pixel camera is a little bit plenoptic. With only two pixels per cluster, it’s still enough to calculate a fair optical depth map without having a second camera like everyone else.

Plenoptics is a powerful weapon that hasn’t fired yet.

Plenoptic Camera

Invented in 1994 and first assembled at Stanford in 2004. The first consumer product, the Lytro, was released in 2012. The VR industry is now actively experimenting with similar technologies.

A plenoptic camera differs from a normal one in only one modification: its sensor is covered with a grid of microlenses, each of which covers several real pixels. Something like this:

If we place the grid and sensor at the right distance, we’ll see sharp pixel clusters containing mini-versions of the original image on the final RAW image.

  • Video link: Muted video showing RAW editing process

Obviously, if you take only the central pixel from each cluster and build the image from those alone, it won’t be any different from one taken with a standard camera. Yes, we lose a bit of resolution, but we’ll just ask Sony to stuff more megapixels into the next sensor.

That’s where the fun part begins. If you take another pixel from each cluster and build the image again, you get another standard photo, only as if it were taken with a camera shifted by one pixel in space. Thus, with 10×10 pixel clusters, we get 100 images of the scene from “slightly” different angles.

The larger the cluster, the more images we have, though at a lower resolution. In a world of smartphones with 41-megapixel sensors, everything has a limit, although we can afford to give up a little resolution. We have to keep the balance.

  • Link: plenoptic.info – about plenoptics, with python code samples
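To make the “one pixel per cluster” idea concrete, here is a minimal NumPy sketch of my own. It assumes an idealized sensor where every K×K block of pixels sits under one microlens; real cameras need per-lens calibration and resampling:

```python
import numpy as np

K = 10                              # pixels per cluster side (10x10 clusters)
raw = np.random.rand(400, 600)      # stand-in for a plenoptic RAW frame

def subaperture(raw, u, v, k=K):
    """Take pixel (u, v) from every cluster: one complete 'shifted' view."""
    return raw[u::k, v::k]

center = subaperture(raw, K // 2, K // 2)       # behaves like a standard photo
shifted = subaperture(raw, K // 2 + 1, K // 2)  # viewpoint moved "one pixel"
print(center.shape)                 # (40, 60): resolution drops K-fold per axis
```

Iterating `u` and `v` over the whole cluster yields the full set of 100 slightly-shifted views described above.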

Alright, we’ve got a plenoptic camera. What can we do with it?

Fair refocusing

The feature that everyone was buzzing about in the articles covering Lytro is the possibility to adjust focus after the shot was taken. “Fair” means we don’t use any deblurring algorithms, but rather only available pixels, picking or averaging in the right order.

A RAW photo taken with a plenoptic camera looks weird. To get the usual sharp JPEG out of it, you have to assemble it first. The result will vary depending on how we select the pixels from the RAW.

The farther a cluster is from the point of impact of the original ray, the more defocused that ray is. Because optics. To shift the image’s focus, we only need to choose the pixels at the desired distance from the original — either closer or farther.

The picture should be read from right to left as we are sort of restoring the image, knowing the pixels on the sensor. We get a sharp original image on top, and below we calculate what was behind it. That is, we shift the focus computationally.

The process of shifting the focus forward is a bit more complicated, as we have fewer pixels in those parts of the clusters. In the beginning, Lytro’s developers didn’t even want to let the user focus manually because of that — the camera made the decision itself using software. Users didn’t like that, so the feature was added in later versions as a “creative mode”, but with very limited refocusing for exactly that reason.
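As I read it, “fair” refocusing amounts to shifting each sub-aperture view in proportion to its angular offset and averaging. The sketch below is my illustration, not Lytro’s actual pipeline; the fake views and the `alpha` slope (which selects the synthetic focal plane) are assumptions:

```python
import numpy as np

def refocus(views, alpha):
    """views: {(u, v): 2D array}, offsets (u, v) relative to the cluster center.
    alpha picks the synthetic focal plane; no deblurring, just pick-and-average."""
    acc = None
    for (u, v), img in views.items():
        # shift each view proportionally to its offset, then accumulate
        shifted = np.roll(img, (round(alpha * u), round(alpha * v)), axis=(0, 1))
        acc = shifted if acc is None else acc + shifted
    return acc / len(views)

# Nine fake views of a 40x60 scene (3x3 clusters, offsets -1..1)
views = {(u, v): np.random.rand(40, 60) for u in (-1, 0, 1) for v in (-1, 0, 1)}
near = refocus(views, alpha=2.0)    # focus pulled toward one plane
far = refocus(views, alpha=-2.0)    # ...and toward another
```

Note that `np.roll` wraps pixels around the edges; a real implementation would crop or pad instead.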

Depth Map and 3D using a single lens

One of the simplest operations in plenoptics is getting a depth map. You just need to build two different images and calculate how objects are shifted between them. The greater the shift, the closer the object is to the camera.
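The shift-counting step can be sketched with a toy block matcher. This is only my illustration of the core idea; production depth mappers work sub-pixel, per-patch, and add learned refinement:

```python
import numpy as np

def best_shift(a, b, max_shift=4):
    """Horizontal shift (in pixels) that best aligns view b onto view a."""
    errs = {d: np.mean((a - np.roll(b, d, axis=1)) ** 2)
            for d in range(-max_shift, max_shift + 1)}
    return min(errs, key=errs.get)   # shift with the smallest matching error

rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = np.roll(a, -3, axis=1)          # fake a 3-pixel parallax between two views
print(best_shift(a, b))             # prints 3: the disparity between the views
```

Doing this per image patch rather than globally gives a (very rough) depth map, since disparity falls off with distance.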

Google recently bought and killed Lytro, but used their technology for its VR and… Pixel’s camera. Starting with the Pixel 2, the camera became “a little bit” plenoptic, though with only two pixels per cluster. As a result, Google doesn’t need to install a second camera like all the other cool kids. Instead, they can calculate a depth map from one photo.

Images seen by the top and bottom sub-pixels of the Google Pixel camera; the right one is animated for clarity. Source: Google
The depth map is additionally processed with neural networks to make the background blur more even. Source: Google
  • Link: Portrait mode on the Pixel 2 and Pixel 2 XL smartphones

The depth map is built from two shots shifted by one sub-pixel. This is enough to calculate a rudimentary depth map and separate the foreground from the background, so the latter can be blurred with some fashionable bokeh. The result of this stratification is then smoothed and “improved” by neural networks which are trained to improve depth maps (rather than to observe, as many people think).

The trick is that we got plenoptics in smartphones almost at no charge. We already put lenses on these tiny sensors to increase the luminous flux at least somehow. Some patents from Google suggest that future Pixel phones may go further and cover four photodiodes with a lens.

Slicing layers and objects

You don’t see your nose because your brain combines a final image from both of your eyes. Close one eye, and you will see a huge Egyptian pyramid at the edge.

The same effect can be achieved with a plenoptic camera. By assembling shifted images from pixels of different clusters, we can look at an object from several points, just as our eyes do. This gives us two cool opportunities. First, we can estimate the approximate distance to objects, which lets us easily separate the foreground from the background, as in real life. Second, if an object is small, we can remove it from the photo completely, since we can effectively look around it. Like a nose. Just clone it out. Optically, for real, with no Photoshop.

Using this, we can cut out trees between the camera and the object or remove the falling confetti, as in the video below.

“Optical” stabilization with no optics

From a plenoptic RAW, you can make a hundred photos, each shifted by several pixels across the sensor area. Accordingly, we have a tube the diameter of the lens within which we can move the shooting point freely, thereby offsetting the shake of the image.

Technically, stabilization is still optical, because we don’t have to calculate anything — we just select pixels in the right places. On the other hand, any plenoptic camera sacrifices the number of megapixels in favor of plenoptic capabilities, and any digital stabilizer works the same way. It’s nice to have it as a bonus, but using it only for its sake is costly.

The larger the sensor and lens, the bigger window for movement. The more camera capabilities, the more ozone holes from supplying this circus with electricity and cooling. Yeah, technology!

Fighting the Bayer filter

The Bayer filter is still necessary even in a plenoptic camera; we haven’t come up with any other way of getting a color digital image. And using a plenoptic RAW, we can average the color not only over a group of nearby pixels, as in classic demosaicing, but also over dozens of copies of the same point in neighboring clusters.

Some articles call this “computable super-resolution”, but I would question that. In effect, we first reduce the real resolution of the sensor some dozen times, only to proudly restore it again. You’d have to try hard to sell that to someone.

But technically it’s still more interesting than shaking the sensor in a pixel shifting spasm.

Computational aperture (bokeh)

Those who like to shoot bokeh hearts will be thrilled. Since we know how to control the refocus, we can move on and take only a few pixels from the unfocused image and others from the normal one. Thus we can get an aperture of any shape. Yay! (No)

Many more tricks for video

So, not to move too far away from the photo topic, everyone who’s interested should check out the links above and below. They contain about half a dozen other interesting applications of a plenoptic camera.

  • Video link: Watch Lytro Change Cinematography Forever

Light Field: More than a photo, less than VR

Usually, an explanation of plenoptics starts with light fields. And yes, from the science perspective, a plenoptic camera captures a light field, not just a photo. Plenus is Latin for “full”, i.e., it collects all the information about the rays of light. Just like a plenary session of Parliament.

Let’s get to the bottom of this to understand what a light field is and why we need it.

Traditional photos are two-dimensional. When a ray hits a sensor there will be a corresponding pixel in the photo that records simply its intensity. The camera doesn’t care where the ray came from, whether it accidentally fell from aside or was reflected off of another object. The photo captures only the point of intersection of the ray with the surface of the sensor. So it’s kinda 2D.

Light field images are similar, but with a new component — the origin and angle of each ray. The microlens array in front of the sensor is calibrated such that each lens samples a certain portion of the aperture of the main lens, and each pixel behind each lens samples a certain set of ray angles. And since light rays emanating from an object with different angles fall across different pixels on a light field camera’s sensor, you can build an understanding of all the different incoming angles of light rays from this object. This means the camera effectively captures the ray vectors in 3D space. Like calculating the lighting of a video game, but the other way around — we’re trying to catch the scene, not create it. The light field is the set of all the light rays in our scene — capturing both the intensity and angular information about each ray.

There are a lot of mathematical models of light fields. Here’s one of the most representative.
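One standard formulation is the two-plane (“light slab”) parameterization; the notation below is the usual textbook one, not necessarily the model pictured in the original post:

```latex
% A ray is identified by where it crosses two parallel planes:
% (u, v) on the lens/aperture plane and (s, t) on the sensor plane.
% The light field is then a 4D function over rays:
L(u, v, s, t)

% A conventional photograph throws the angular information away
% by integrating over the whole aperture:
I(s, t) = \iint_{\text{aperture}} L(u, v, s, t)\,\mathrm{d}u\,\mathrm{d}v
```

Refocusing, viewpoint shifts and synthetic apertures all reduce to re-weighting or re-indexing this integral.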

The light field is essentially a visual model of the space around it. We can easily compute any photo within this space mathematically. Point of view, depth of field, aperture — all these are also computable; however, one can only reposition the point of view so much, determined by the entrance pupil of the main lens. That is, the amount of freedom with which you can change the field of view depends upon the breadth of perspectives you’ve captured, which is necessarily limited.

I love to draw an analogy with a city here. Photography is like your favorite path from your home to the bar you always remember, while the light field is a map of the whole town. Using the map, you can calculate any route from point A to B. In the same way, knowing the light field, we can calculate any photo.

For an ordinary photo it’s overkill, I agree. But here comes VR, where light fields are one of the most promising areas of development.

Having a light field model of an object or a room allows you to see this object or a room from multiple perspectives, with motion parallax and other depth cues like realistic changes in textures and lighting as you move your head. You can even travel through a space, albeit to a limited degree. It feels like virtual reality, but it’s no longer necessary to build a 3D-model of the room. We can ‘simply’ capture all the rays inside it and calculate many different pictures from within that volume. Simply, yeah. That’s what we’re fighting over.

  • Link: Google AR and VR: Experimenting with Light Fields

Vasily Zubarev is a Berlin-based Python developer and a hobbyist photographer and blogger. To see more of his work, visit his website or follow him on Instagram and Twitter.

Articles: Digital Photography Review (dpreview.com)

 

Posted in Uncategorized