
Posts Tagged ‘Part’

Posing Guide: 21 Sample Poses to Get You Started with Photographing Women – Part I

18 Jul

The post Posing Guide: 21 Sample Poses to Get You Started with Photographing Women – Part I appeared first on Digital Photography School. It was authored by Guest Contributor.

This is the first in a series of Posing Guides with suggested starting poses for photographing different subjects. We are starting with the female posing guide.

Also in the series, check out our posing guides for children, couples, groups, and weddings.

Sample Poses to Get You Started with Photographing Women

If you ever run out of ideas, hit a creative block, or simply need some guidance when shooting female subjects, you can use the following posing samples as a “posing cheat sheet”. Many pro photographers use this technique when preparing for and during a photo shoot.

The poses in this article are selected as an initial reference. I would advise you to look at the poses together with your subject, especially if she’s inexperienced. During a photo shoot, don’t hesitate to discuss with the subject which pose is or isn’t working in any particular situation. It’s usually very productive and you both will feel more confident in what you are doing.

OK, let’s start, one by one.


1. A very simple portrait pose to start with. Have the model look over her shoulder. Note how unusual and interesting a portrait can look when shot simply from a different angle.


2. In portrait photography, hands are usually not visible or at least not dominant. However, you might get creative by asking the model to play around with her hands trying different positions around her head or face. Keep in mind, though: No flat palms, and the hands should only show their sides!


3. You might be familiar with composition rules like the rule of thirds. In a similar way, pleasing effects can be created by using diagonals. Also remember that you don’t always need to hold your camera perfectly level. Don’t be afraid to tilt it; you might achieve some interesting and unusual perspectives.


4. A really nice and lovely pose with a model sitting. The knees have to touch each other. Shoot slightly from above.


5. Another open and inviting pose with the model lying on the ground. Get down and take your shot nearly from the ground level.


6. A variation of the pose with the model lying on the ground. Both hands can also rest on the ground. Works very well outdoors, on grass or in a wildflower meadow, for example.


7. A basic, easy pose that nevertheless looks stunning. Get down and shoot nearly from ground level, then gradually move around the model while shooting. Also ask your model to change head and hand positions.


8. Another easy yet gorgeous pose for all body types. Try different hand and leg positioning. And remember to focus on the model’s eyes!


9. A really lovely pose. Works well in different surface settings: The model, for example, might lie on a bed, on the ground, in the grass, or on a sandy beach. Shoot from a very low angle and focus on the eyes.


10. Gorgeous and easy pose for a model sitting on the ground.


11. Another simple and friendly pose for a model sitting on the ground. Try different directions and angles.


12. A wonderful way to demonstrate the beauty of a model’s physique. Works very well as a silhouette when shooting against a bright background.


13. A simple and casual looking pose. Lots of variations are possible. Ask the model to twist her body, experiment with hand positioning and try different head turns.


14. Another very simple and elegant pose. The model is turned slightly to the side, hands in back pockets.
15. Leaning slightly forward can be a very attractive gesture. It is a subtle way to emphasize upper body shapes.


16. A sensual pose. Holding the hands above the head emphasizes the body’s curves. Works best with fit body types.


17. Endless variations are possible for full-height poses. This pose is just the starting point. Ask the model to turn her body slightly, change hand positioning, change head and eye direction, etc.

18. A relaxed pose with the model standing upright, supporting her back against a wall. Remember that the model can use a wall not only to support her back but also to put her hands on or rest a leg against.


19. Note that full-height poses are very demanding and work well only with slim to athletic body types. The posing guidelines are simple: the body should be arched in an S shape, the hands should be relaxed, and the weight should rest on just one leg.


20. An exquisite pose for slim to athletic models. Many variations are possible. To find the best posture, ask the model to move her hands and twist her body slowly and continuously. When you see a good variant, ask her to hold still and take some pictures. Repeat for a full set.


21. An absolutely romantic and delicate pose. Any kind of cloth (even a curtain) can be used. Note that the back doesn’t need to be completely bare; sometimes as little as a bare shoulder can work very well.

So, there’s something to start with. I hope you found at least a couple of poses to work with in different shooting scenarios! Keep in mind that each of these samples is meant to be only a starting point. Each pose has endless variations! Just be creative and adjust the pose as needed (for example, try different shooting angles and ask your subject to change hand, head, and leg positioning).

Check out our other Posing Guides:

  • Posing Guide: Sample Poses for Photographing Women Part 1
  • Posing Guide: Sample Poses for Photographing Women Part 2
  • Posing Guide: Sample Poses for Photographing Men
  • Posing Guide: Sample Poses for Photographing Children
  • Posing Guide: Sample Poses for Photographing Couples
  • Posing Guide: Sample Poses for Photographing Groups of People
  • Posing Guide: Sample Poses for Photographing Weddings


Kaspars Grinvalds is a photographer working and living in Riga, Latvia. He is the author of Posing App where more poses and tips about people photography are available.


Computational photography part III: Computational lighting, 3D scene and augmented reality

09 Jun

Editor’s note: This is the third article in a three-part series by guest contributor Vasily Zubarev. The first two parts can be found here:

  • Part I: What is computational photography?
  • Part II: Computational sensors and optics

You can visit Vasily’s website where he also demystifies other complex subjects. If you find this article useful we encourage you to give him a small donation so that he can write about other interesting topics.

The article has been lightly edited for clarity and to reflect a handful of industry updates since it first appeared on the author’s own website.


Computational Lighting

Soon we’ll go so goddamn crazy that we’ll want to control the lighting after the photo was taken too. To change the cloudy weather to sunny, or to change the lights on a model’s face after shooting. Now it seems a bit wild, but let’s talk again in ten years.

We’ve already invented a dumb device to control the light: a flash. Flashes have come a long way, from the large lamp boxes that helped work around the technical limitations of early cameras to the modern LED flashes that spoil our pictures so reliably that we mainly use them as flashlights.

Programmable Flash

It’s been a long time since all smartphones switched to dual-LED flashes: a combination of orange and blue LEDs whose brightness is adjusted to match the color temperature of the shot. In the iPhone, for example, it’s called True Tone and is controlled by a small ambient light sensor and a piece of code with a hacky formula.

  • Link: Demystifying iPhone’s Amber Flashlight

Then we started to think about the main problem of all flashes: overexposed faces and foregrounds. Everyone solved it in their own way. The iPhone got Slow Sync Flash, which makes the camera use a slower shutter speed in the dark. Google Pixel and other Android smartphones started using their depth sensors to combine images taken with and without flash in quick succession. The foreground is taken from the photo with the flash, while the background remains lit by ambient illumination.
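For intuition, here is a minimal sketch of that flash/no-flash trick, assuming we already have two aligned frames and a depth map; every name and threshold here is hypothetical, not any vendor’s actual pipeline:

import numpy as np

def flash_composite(flash_img, ambient_img, depth_map, fg_threshold=1.5):
    """Blend a flash-lit foreground with an ambient-lit background.

    flash_img, ambient_img: float32 HxWx3 frames, already aligned.
    depth_map: float32 HxW depths in meters (e.g. from a depth sensor).
    fg_threshold: hypothetical depth below which a pixel is foreground.
    """
    # Soft mask: 1.0 for near (foreground) pixels, 0.0 for far ones,
    # with a smooth falloff to avoid a hard seam at the boundary.
    mask = np.clip((fg_threshold - depth_map) / 0.5, 0.0, 1.0)
    mask = mask[..., np.newaxis]  # broadcast over the color channels
    return mask * flash_img + (1.0 - mask) * ambient_img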

Beyond that, the future of the programmable multi-flash is vague. The only interesting application so far was found in computer vision, where it was once used in assembly guides (like those for Ikea bookshelves) to detect the borders of objects more accurately. See the article below.

  • Link: Non-photorealistic Camera: Depth Edge Detection and Stylized Rendering using Multi-Flash Imaging

Lightstage

Light is fast, which has always made light coding an easy thing to do. We can change the lighting a hundred times per shot and still not get close to its speed. That’s how Lightstage was created back in 2005.

  • Video link: Lightstage demo video

The essence of the method is to illuminate the object from all possible angles within each frame of a real 24 fps movie. To get this done, we use 150+ lamps and a high-speed camera that captures hundreds of shots with different lighting conditions for every movie frame.

A similar approach is now used when shooting mixed CGI graphics in movies. It allows you to fully control the lighting of the object in post-production, placing it in scenes with absolutely random lighting. We just grab the shots illuminated from the required angle, tint them a little, done.

Unfortunately, it’s hard to do it on mobile devices, but probably someone will like the idea and execute it. I’ve seen an app from guys who shot a 3D face model, illuminating it with the phone flashlight from different sides.

Lidar and Time-of-Flight Camera

Lidar is a device that determines the distance to an object. Thanks to the recent hype around self-driving cars, now we can find a cheap lidar in any dumpster. You’ve probably seen those rotating things on the roofs of some vehicles? Those are lidars.

We still can’t fit a laser lidar into a smartphone, but we can go with its younger brother — time-of-flight camera. The idea is ridiculously simple — a special separate camera with an LED-flash above it. The camera measures how quickly the light reaches the objects and creates a depth map of the scene.
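The arithmetic behind that measurement is just the speed of light over a round trip (real ToF sensors measure phase shift rather than raw time, but the conversion is the same idea; the nanosecond input here is purely illustrative):

import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_depth(round_trip_ns):
    """Convert per-pixel round-trip times (in nanoseconds) to meters.

    Light travels to the object and back, so the distance is half
    the total path: d = c * t / 2.
    """
    return SPEED_OF_LIGHT * (round_trip_ns * 1e-9) / 2.0

# Centimeter accuracy corresponds to roughly 67 picoseconds of timing
# precision, which is why real sensors measure phase instead of time.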

The accuracy of modern ToF cameras is about a centimeter. The latest Samsung and Huawei top models use them to create a bokeh map and for better autofocus in the dark. The latter, by the way, is quite good. I wish every device had one.

Knowing the exact depth of a scene will be useful in the coming era of augmented reality. Pinging surfaces with a lidar to build an initial 3D map will be much more accurate and effortless than analyzing camera images.

Projector Illumination

To finally get serious about computational lighting, we have to switch from regular LED flashes to projectors — devices that can project a 2D picture on a surface. Even a simple monochrome grid will be a good start for smartphones.

The first benefit of the projector is that it can illuminate only the part of the image that needs to be illuminated. No more burnt faces in the foreground. Objects can be recognized and ignored, just like laser headlights of some modern cars don’t blind the oncoming drivers but illuminate pedestrians. Even with the minimum resolution of the projector, such as 100×100 dots, the possibilities are exciting.

These days, a car with controllable lights won’t surprise even a kid.

The second and more realistic use of the projector is to project an invisible grid onto a scene to build a depth map. With a grid like this, you can safely throw away all your neural networks and lidars. All the distances to the objects in the image can then be calculated with the simplest computer vision algorithms. It was done back in Microsoft Kinect times (rest in peace), and it was great.
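The math those “simplest algorithms” rely on is plain triangulation: the projector and camera sit a known distance apart, so the apparent shift of each projected dot encodes depth. A hedged sketch, with every parameter value invented for illustration:

import numpy as np

def depth_from_grid_shift(disparity_px, focal_px=1000.0, baseline_m=0.05):
    """Triangulate depth from the observed shift of projected dots.

    disparity_px: HxW array of horizontal dot shifts (in pixels)
                  relative to where they land on a reference plane.
    focal_px:     camera focal length expressed in pixels.
    baseline_m:   projector-to-camera distance in meters.

    Uses the standard stereo relation Z = f * B / disparity.
    """
    disparity_px = np.maximum(disparity_px, 1e-6)  # avoid dividing by zero
    return focal_px * baseline_m / disparity_px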

Of course, it’s worth remembering the Dot Projector used for Face ID on the iPhone X and above. That’s our first small step towards projector technology, but quite a noticeable one.

Dot Projector in iPhone X.

Vasily Zubarev is a Berlin-based Python developer and a hobbyist photographer and blogger. To see more of his work, visit his website or follow him on Instagram and Twitter.


Computational photography part II: Computational sensors and optics

08 Jun

Editor’s note: This is the second article in a three-part series by guest contributor Vasily Zubarev. The first and third parts can be found here:

  • Part I: What is computational photography?
  • Part III: Computational lighting, 3D scene and augmented reality (coming soon)

You can visit Vasily’s website where he also demystifies other complex subjects. If you find this article useful we encourage you to give him a small donation so that he can write about other interesting topics.


Computational Sensor: Plenoptic and Light Fields

Well, our sensors are crap. We’ve simply gotten used to them and are trying to do our best with what we have. Their design hasn’t changed much since the beginning of time. Only the manufacturing process improved: we reduced the distance between pixels, fought read noise, increased readout speeds, and added specific pixels for phase-detection autofocus systems. But even if we take the most expensive camera and try to photograph a running cat in indoor light, the cat will win.

  • Video link: The Science of Camera Sensors

We’ve been trying to invent a better sensor for a long time. You can google a lot of research in this field with queries like “computational sensor” or “non-Bayer sensor”. Even the Pixel Shifting example can be seen as an attempt to improve sensors with calculations.

The most promising stories of the last twenty years, though, come to us from plenoptic cameras.

To calm your sense of impending boring math, I’ll throw in an insider’s note: the latest Google Pixel camera is a little bit plenoptic. With only two pixels per microlens, it still manages to calculate a fair optical depth map without a second camera, unlike everyone else.

Plenoptics is a powerful weapon that hasn’t fired yet.

Plenoptic Camera

Invented in 1994 and first assembled at Stanford in 2004, the plenoptic camera reached consumers in 2012 with the Lytro. The VR industry is now actively experimenting with similar technologies.

A plenoptic camera differs from a normal one in only one modification: its sensor is covered with a grid of microlenses, each of which covers several real pixels.

If we place the grid and sensor at the right distance, the final RAW image shows sharp pixel clusters, each containing a mini-version of the original image.

  • Video link: Muted video showing RAW editing process

Obviously, if you take only the central pixel from each cluster and build an image from those alone, it won’t be any different from one taken with a standard camera. Yes, we lose a bit of resolution, but we’ll just ask Sony to stuff more megapixels into the next sensor.

That’s where the fun part begins. If you take another pixel from each cluster and build the image again, you again get a standard photo, only as if it was taken with a camera shifted by one pixel in space. Thus, with 10×10 pixel clusters, we get 100 images from “slightly” different angles.
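As a rough illustration, here is how pulling one of those shifted views out of a plenoptic RAW could look, assuming an idealized sensor where every cluster is a perfectly aligned k x k block (real lenslet arrays are rotated and often hexagonal, so real decoders resample first):

import numpy as np

def subaperture_view(plenoptic_raw, k, u, v):
    """Extract one sub-aperture image from an idealized plenoptic RAW.

    plenoptic_raw: 2D array whose k x k pixel clusters each sit under
                   one microlens.
    u, v:          which pixel inside every cluster to take (0..k-1).

    Taking position (u, v) from every cluster yields the scene as seen
    from one point on the main lens aperture: a slightly shifted view.
    """
    return plenoptic_raw[u::k, v::k]

# raw = ...  # hypothetical plenoptic RAW of shape (H*k, W*k)
# center = subaperture_view(raw, k=10, u=5, v=5)  # the "standard" photo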

The bigger the cluster size, the more images we have, though the resolution is lower. In the world of smartphones with 41-megapixel sensors everything has a limit, and we can afford to sacrifice a little resolution. We have to keep the balance.

  • Link: plenoptic.info – about plenoptics, with python code samples

Alright, we’ve got a plenoptic camera. What can we do with it?

Fair refocusing

The feature everyone was buzzing about in articles covering Lytro was the possibility of adjusting focus after the shot was taken. “Fair” means we don’t use any deblurring algorithms, only the available pixels, picking or averaging them in the right order.

A RAW photo taken with a plenoptic camera looks weird. To get the usual sharp JPEG out of it, you have to assemble it first. The result will vary depending on how we select the pixels from the RAW.

The farther the cluster is from the point of impact of the original ray, the more defocused the ray is. Because the optics. To get the image shifted in focus, we only need to choose the pixels at the desired distance from the original — either closer or farther.

The picture should be read from right to left as we are sort of restoring the image, knowing the pixels on the sensor. We get a sharp original image on top, and below we calculate what was behind it. That is, we shift the focus computationally.

The process of shifting the focus forward is a bit more complicated as we have fewer pixels in these parts of the clusters. In the beginning, Lytro developers didn’t even want to let the user focus manually because of that — the camera made a decision itself using the software. Users didn’t like that, so the feature was added in the late versions as “creative mode”, but with very limited refocus for exactly that reason.
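Stripped of the details, “fair” refocusing is shift-and-sum: each sub-aperture view is translated in proportion to its position on the aperture, then all views are averaged. Pixels from the chosen depth line up and stay sharp; everything else smears into defocus. A minimal sketch reusing the subaperture_view helper from above:

import numpy as np

def refocus(plenoptic_raw, k, alpha):
    """Synthetically refocus an idealized plenoptic RAW.

    alpha: refocus parameter; 0.0 keeps the captured focal plane,
           while positive/negative values push focus back/forward.
    """
    acc = np.zeros_like(subaperture_view(plenoptic_raw, k, 0, 0),
                        dtype=np.float64)
    center = (k - 1) / 2.0
    for u in range(k):
        for v in range(k):
            view = subaperture_view(plenoptic_raw, k, u, v)
            # Shift each view in proportion to its offset from the
            # aperture center. np.roll is a crude integer-pixel
            # stand-in for proper sub-pixel interpolation.
            dy = int(round(alpha * (u - center)))
            dx = int(round(alpha * (v - center)))
            acc += np.roll(view, (dy, dx), axis=(0, 1))
    return acc / (k * k)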

Depth Map and 3D using a single lens

One of the simplest operations in plenoptics is getting a depth map. You just need to gather two different sub-aperture images and calculate how the objects shift between them. The bigger the shift, the farther the object is from the plane of focus.
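A crude sketch of that comparison, assuming two already-rectified grayscale views where the shift is purely horizontal (real pipelines use sub-pixel matching plus heavy regularization, and this brute-force loop is far too slow for production):

import numpy as np

def disparity_map(left, right, max_disp=16, patch=7):
    """Brute-force block matching between two horizontally shifted views.

    left, right: 2D float arrays, rectified so corresponding points
                 differ only in their x coordinate.
    Returns the per-pixel shift; its magnitude grows with distance
    from the plane of focus.
    """
    h, w = left.shape
    r = patch // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            ref = left[y - r:y + r + 1, x - r:x + r + 1]
            best_cost, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                cost = np.sum((ref - cand) ** 2)  # sum of squared diffs
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp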

Google recently bought and killed Lytro, but used their technology for its VR and… Pixel’s camera. Starting with the Pixel 2, the camera became “a little bit” plenoptic, though with only two pixels per cluster. As a result, Google doesn’t need to install a second camera like all the other cool kids. Instead, they can calculate a depth map from one photo.

Images seen by the top and bottom subpixels of the Google Pixel camera. Source: Google
The depth map is additionally processed with neural networks to make the background blur more even. Source: Google
  • Link: Portrait mode on the Pixel 2 and Pixel 2 XL smartphones

The depth map is built from two shots shifted by one sub-pixel. This is enough to calculate a rudimentary depth map and separate the foreground from the background, blurring the latter with some fashionable bokeh. The result of this stratification is then smoothed and “improved” by neural networks which are trained to improve depth maps (rather than to observe, as many people think).

The trick is that we got plenoptics in smartphones almost for free. We already put lenses on these tiny sensors to increase the luminous flux at least somehow. Some patents from Google suggest that future Pixel phones may go further and cover four photodiodes with one lens.

Slicing layers and objects

You don’t see your nose because your brain combines a final image from both of your eyes. Close one eye, and you will see a huge Egyptian pyramid at the edge.

The same effect can be achieved with a plenoptic camera. By assembling shifted images from pixels of different clusters, we can look at an object as if from several points, just like our eyes do. This gives us two cool opportunities. First, we can estimate the approximate distance to objects, which lets us easily separate foreground from background, as in real life. Second, if an object is small, we can remove it from the photo completely, since we can effectively look around it. Like a nose. Just clone it out. Optically, for real, with no Photoshop.

Using this, we can cut out trees between the camera and the object or remove the falling confetti, as in the video below.

“Optical” stabilization with no optics

From a plenoptic RAW, you can make a hundred photos, each shifted by several pixels across the sensor area. Accordingly, we have a tube the diameter of the lens within which we can move the shooting point freely, thereby offsetting camera shake.

Technically, stabilization is still optical, because we don’t have to calculate anything — we just select pixels in the right places. On the other hand, any plenoptic camera sacrifices the number of megapixels in favor of plenoptic capabilities, and any digital stabilizer works the same way. It’s nice to have it as a bonus, but using it only for its sake is costly.

The larger the sensor and lens, the bigger window for movement. The more camera capabilities, the more ozone holes from supplying this circus with electricity and cooling. Yeah, technology!

Fighting the Bayer filter

The Bayer filter is still necessary even with a plenoptic camera. We haven’t come up with any other way of getting a color digital image. And using a plenoptic RAW, we can average the color not only over a group of nearby pixels, as in classic demosaicing, but also over dozens of copies of the same point in neighboring clusters.

Some articles call this “computable super-resolution”, but I would question that. In fact, we first reduce the real resolution of the sensor a dozen times over in order to proudly restore it again. You’d have to try hard to sell that to someone.

But technically it’s still more interesting than shaking the sensor in a pixel shifting spasm.

Computational aperture (bokeh)

Those who like to shoot bokeh hearts will be thrilled. Since we know how to control the refocus, we can move on and take only a few pixels from the unfocused image and others from the normal one. Thus we can get an aperture of any shape. Yay! (No)

Many more tricks for video

So, not to move too far away from the photo topic, everyone who’s interested should check out the links above and below. They contain about half a dozen other interesting applications of a plenoptic camera.

  • Video link: Watch Lytro Change Cinematography Forever

Light Field: More than a photo, less than VR

Usually, the explanation of plenoptics starts with light fields. And yes, from the science perspective, a plenoptic camera captures the light field, not just a photo. Plenus is Latin for “full”: the camera collects all the information about the rays of light. Just like a parliament plenary session.

Let’s get to the bottom of this to understand what a light field is and why we need it.

Traditional photos are two-dimensional. When a ray hits the sensor, the corresponding pixel in the photo simply records its intensity. The camera doesn’t care where the ray came from, whether it strayed in from the side or was reflected off another object. The photo captures only the point of intersection of the ray with the surface of the sensor. So it’s kind of 2D.

Light field images are similar, but with a new component — the origin and angle of each ray. The microlens array in front of the sensor is calibrated such that each lens samples a certain portion of the aperture of the main lens, and each pixel behind each lens samples a certain set of ray angles. And since light rays emanating from an object with different angles fall across different pixels on a light field camera’s sensor, you can build an understanding of all the different incoming angles of light rays from this object. This means the camera effectively captures the ray vectors in 3D space. Like calculating the lighting of a video game, but the other way around — we’re trying to catch the scene, not create it. The light field is the set of all the light rays in our scene — capturing both the intensity and angular information about each ray.

There are a lot of mathematical models of light fields. Here’s one of the most representative.
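One common choice is the two-plane parametrization: each ray is pinned down by the two points where it crosses a pair of parallel planes, conventionally the lens (aperture) plane and the sensor plane. A static, monochrome light field is then a four-dimensional function,

L = L(u, v, s, t),

where (u, v) is the ray’s intersection with the aperture plane and (s, t) its intersection with the sensor plane. A conventional photo integrates away the angular half of that information, summing over the whole aperture for each sensor position:

I(s, t) = \iint L(u, v, s, t) \, du \, dv

A plenoptic camera keeps the (u, v) coordinates instead, which is exactly why we can re-pick viewpoints and refocus after the fact.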

The light field is essentially a visual model of the space around the camera, and we can compute any photo within this space mathematically. Point of view, depth of field, aperture: all of these are computable. However, one can only reposition the point of view so far, as determined by the entrance pupil of the main lens. That is, the amount of freedom with which you can change the field of view depends on the breadth of perspectives you’ve captured, which is necessarily limited.

I love to draw an analogy with a city here. Photography is like your favorite path from your home to the bar you always remember, while the light field is a map of the whole town. Using the map, you can calculate any route from point A to B. In the same way, knowing the light field, we can calculate any photo.

For an ordinary photo it’s overkill, I agree. But here comes VR, where light fields are one of the most promising areas of development.

Having a light field model of an object or a room allows you to see this object or a room from multiple perspectives, with motion parallax and other depth cues like realistic changes in textures and lighting as you move your head. You can even travel through a space, albeit to a limited degree. It feels like virtual reality, but it’s no longer necessary to build a 3D-model of the room. We can ‘simply’ capture all the rays inside it and calculate many different pictures from within that volume. Simply, yeah. That’s what we’re fighting over.

  • Link: Google AR and VR: Experimenting with Light Fields

Vasily Zubarev is a Berlin-based Python developer and a hobbyist photographer and blogger. To see more of his work, visit his website or follow him on Instagram and Twitter.


Computational photography part I: What is computational photography?

04 Jun

Editor’s note: The term ‘computational photography’ gets used a lot these days, but what exactly does it mean? In this article, the first in a three-part series, guest contributor Vasily Zubarev takes us on a journey from present to future, explaining computational photography today, where it’s going and how it will change the very essence of photography.

Series overview:

  • Part I: What is Computational Photography?
  • Part II: Computational sensors and optics (coming soon)
  • Part III: Computational lighting, 3D scene and augmented reality (coming soon)

You can visit Vasily’s website where he also demystifies other complex subjects. If you find this article useful we encourage you to give him a small donation so that he can write about other interesting topics.


Computational Photography: From Selfies to Black Holes

It’s impossible to imagine a smartphone presentation today without dancing around its camera. Google makes the Pixel shoot in the dark, Huawei zooms like a telescope, Samsung puts lidars inside, and Apple presents the world’s new roundest corners. An illegal level of innovation is happening here.

DSLRs, on the other hand, seem half dead. Sony showers everybody with new sensor-megapixel rain every year, while manufacturers lazily update the minor version number and keep lying on piles of cash from movie makers. I have a $3000 Nikon on my desk, but I take an iPhone on my travels. Why?


I went online with this question. There, I saw a lot of debate about “algorithms” and “neural networks”, though no one could explain how exactly they affect a photo. Journalists are loudly reading the number of megapixels from press releases, bloggers are shutting down the Internet with more unboxings, and the camera-nerds are overflowing it with “sensual perception of the sensor color palette”. Ah, Internet. You gave us access to all the information. Love you.

Thus, I spent half of my life trying to understand the whole thing on my own. I’ll try to explain everything I found in this article; otherwise I’ll forget it in a month.

What is Computational Photography?

Everywhere, including Wikipedia, you get a definition like this: computational photography is a set of digital image capture and processing techniques that use digital computation instead of optical processes. Everything about it is fine except that it’s bullshit. The fuzziness of the official definitions kind of indicates that we still have no idea what we are doing.

Stanford professor and computational photography pioneer Marc Levoy (who was also behind many of the innovations in Google’s Pixel cameras) gives another definition: computational imaging techniques enhance or extend the capabilities of digital photography in which the output is an ordinary photograph, but one that could not have been taken by a traditional camera. I like it more, and it’s the definition I will follow in this article.


So, the smartphones were to blame for everything. Smartphones had no choice but to give life to a new kind of photography — computational.

They had little noisy sensors and tiny slow lenses. According to all the laws of physics, they could only bring us pain and suffering. And they did. Until some devs figured out how to use their strengths to overcome the weaknesses: fast electronic shutters, powerful processors, and software.

Most of the significant research in the computational photography field was done in 2005-2015, which counts as yesterday in science. That means, right now, just in front of our eyes and inside our pockets, there’s a new field of knowledge and technology rising that never existed before.

Computational photography isn’t just about bokeh on selfies. The recent photograph of a black hole would not have been possible without computational photography methods. To take such a picture with a standard telescope, we would have to make it the size of the Earth. However, by combining the data of eight radio telescopes at different locations on our Earth-ball and writing some cool Python scripts, we got the world’s first picture of the event horizon.

It’s still good for selfies though, don’t worry.

  • Link: Computational Photography: Principles and Practice
  • Link: Marc Levoy: New Techniques in Computational photography

(I’m going to insert such links in the course of the story. They will lead you to the rare brilliant articles or videos that I found, and allow you to dive deeper into a topic if you suddenly become interested. Because I physically can’t tell you everything in one article.)

The Beginning: Digital Processing

Let’s get back to 2010. Justin Bieber had released his first album and the Burj Khalifa had just opened in Dubai, but we couldn’t capture either of these great universal events because our photos were noisy 2-megapixel JPEGs. We got the first irresistible desire to hide the worthlessness of mobile cameras by using “vintage” presets. Instagram came out.

Math and Instagram

With the release of Instagram, everyone got obsessed with filters. As the man who reverse engineered the X-Pro II, Lo-Fi, and Valencia for, of course, research (hehe) purposes, I still remember that they comprised three components:

  • Color settings (Hue, Saturation, Lightness, Contrast, Levels, etc.) are simple coefficients, just like in any presets that photographers used since ancient times.
  • Tone Mapping is a vector of values, each of which says that “red with a hue of 128 should be turned into a hue of 240”. It’s often represented as a single-pixel-tall picture, such as the one used for the X-Pro II filter; a toy version of this component is sketched in the code after this list.
  • Overlay — translucent picture with dust, grain, vignette, and everything else that can be applied from above to get the (not at all, yeah) banal effect of the old film. Used rarely.
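To make those three components concrete, here is a toy version of the tone-mapping and overlay steps in Python with numpy. Everything in it is illustrative: the curve values are invented, not the real X-Pro II numbers.

import numpy as np

def apply_filter(img, curve, overlay=None, overlay_opacity=0.3):
    """Toy Instagram-style filter: tone curve plus optional overlay.

    img:     uint8 HxWx3 image.
    curve:   length-256 uint8 lookup table; curve[128] == 240 encodes
             "an input value of 128 becomes 240" (the tone-mapping vector).
    overlay: optional uint8 HxWx3 texture (dust, grain, vignette).
    """
    out = curve[img]  # fancy indexing applies the LUT to every channel
    if overlay is not None:
        out = ((1 - overlay_opacity) * out
               + overlay_opacity * overlay).astype(np.uint8)
    return out

# An invented S-shaped curve that lifts the midtones a little:
x = np.arange(256) / 255.0
curve = (255 * np.clip(x + 0.15 * np.sin(np.pi * x), 0, 1)).astype(np.uint8)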

Modern filters have not gone far from these three, just becoming a little more complex mathematically. With the advent of hardware shaders and OpenCL on smartphones, filters were quickly rewritten for the GPU, and it was considered insanely cool. For 2012, of course. Today any kid can do the same thing in CSS, but he still won’t invite a girl to prom.

However, progress in the area of filters has not stopped there. The guys from Dehancer, for example, are getting very hands-on with non-linear filters. Instead of poor man’s tone-mapping, they use more posh and complex non-linear transformations, which, according to them, opens up many more opportunities.


You can do a lot of things with non-linear transformations, but they are incredibly complex, and we humans are incredibly stupid. As soon as it comes to non-linear transformations, we prefer to go with numerical methods or run neural networks to do our job. The same thing happens here.

Automation and Dreams of a “Masterpiece” Button

When everybody got used to filters, we started to integrate them right into our cameras. History hides who the first manufacturer to implement this was, but just to understand how long ago it happened: iOS 5.0, released in 2011, already had a public API for Auto Enhancing Images. Only Steve Jobs knows how long it was in use before it was opened to the public.

The automation was doing the same thing any of us does when opening a photo editor: it fixed the lights and shadows, increased the brightness, removed red eyes, and fixed the face color. Users didn’t even know that the “dramatically improved camera” was just the merit of a couple of new lines of code.

ML Enhance in Pixelmator.

Today, the battles for the Masterpiece button have moved to the machine learning field. Tired of playing with tone-mapping, everyone rushed to board the CNN and GAN hype train and started forcing computers to move the sliders for us. In other words, to use an input image to determine a set of optimal parameters that will bring the given image closer to a particular subjective understanding of “good photography”. Check out how it’s implemented in Pixelmator Pro and other editors that lure you with fancy “ML” features on their landing pages. It doesn’t always work well, as you can guess. But you can always take the datasets and train your own network to beat these guys, using the links below. Or not.

  • Link: Image Enhancement Papers
  • Link: DSLR-Quality Photos on Mobile Devices with Deep Convolutional Networks

Vasily Zubarev is a Berlin-based Python developer and a hobbyist photographer and blogger. To see more of his work, visit his website or follow him on Instagram and Twitter.


The gear that changed my (photographic) life: reader responses part 2

03 May

Reader responses part two – gear that changed my life

Photo of Canon AE-1 by DPR member WoifC

We’ve compiled more of our favorite responses to the question we’ve been asking – both of ourselves and our readers – “What was the piece of gear that made the biggest difference to your photography?” We enjoyed reading all of your stories and have picked out a few of our very favorites to highlight.

This time around, we saw many responses expressing gratitude toward the person who inspired them to pursue photography, in addition to the gear that made the difference. There were also several responses naming the books that changed their photographic lives, which is a sentiment we can definitely get behind.

Reading your answers to this question has been a true joy in times when joy has been harder to come by than usual. We’re grateful to share in the remembrances of the people, books, cameras and lenses that spurred each of our readers further down a path pursuing photography. Thanks to all who took the time to respond, and if you haven’t yet it’s not too late! Leave a comment and tell us your story.

Pentax K10D

Doc Pockets: I was to take a 15-week road trip in a quest to photograph what most will call lousy winter weather. A 1996 4X4 F350 with a service body took us from the Sonoran Desert (home) to and across all the Canadian provinces ending in the Maritimes then driving down the American East Coast…. Three bodies, two DA* 2.8 zooms and a wide prime was chosen.

Drenched in downpours (Vancouver Island), blizzard-blasted (Cabot Trail), sand-blasted (Lake Superior’s shorelines) and one spent two hours with the 50-150mm 2.8 DA* attached in 20 feet of silty water (thanks to my sister) without the slightest problem. To this day those cameras work!

Read the full comment

‘My friend Peter’

JeffieBoy: He is about 5 yrs older than me and for 40+ years has been a mentor and someone I have looked up to. The first time we met, he walked into the room and mumbled something like F5.6 under his breath.

He later explained that he was teaching himself to quantify light in his mind’s eye so he would always be ready to get a good exposure. I practised for a month or more and eventually got very good at it… My cameras were always ready because I was unconsciously presetting everything as light changed.

Read the full comment

Michael Reichmann

Chris Butler: It wasn’t an “it” but a “who” that changed my concept of what I could do with a camera. Specifically, it was Michael Reichmann’s 2000 comparison of digital images to film, in which he had the audacity to prove the 3 megapixel D30 could produce images as good or better than film. I sold all my considerable film gear and never looked back. Well done, Michael, and RIP.

Read the full comment

Pentax SFXn

Photo of Pentax SFXn by DPR member arthur01

arthur01: …the game changer for me, as a wedding photographer using film, was the underrated Pentax SFXn. It was the first time I used autofocus. As a person wearing glasses and struggling to achieve sharp focus as it got dark towards the end of the after ceremony shots it made all the difference. It prolonged my wedding career.

Read the full comment

‘The New Joy of Photography’ (1985 edition) by the Editors of Eastman Kodak Co.

donnybrook: I was a young field engineer that had just bought a used Minolta XG-7 and a few lenses off a colleague to upgrade my point and shoot film camera. That book basically taught me photography and I would review it before going on vacation trips with my film SLR for years. Not just aperture and exposure compensation but balance, composition, vision and light. Lots of great shots to admire and motivate.

Read the full comment

Nikon D5300

Photo by DPR member Aphidman with Nikon D5300

Aphidman: In 2013 I discovered that 35mm film could not be found outside of cities, and realized it was time to change technologies. Used Air Miles points to get the D5300. It re-ignited my love of photography that had been dormant since my teenage years. Used it to discover what kinds of photography I enjoyed most; 4 years later, upgraded to a D7500… which addressed all the things that held me back with the D5300. An adult daughter now uses that D5300, for which I will always have fond memories.

Read the full comment

Nikkormat Ftn

Photo via Wikimedia Commons by E Magnuson

CTaylorTX: It was January 15, 1972. Fairhaven Camera in East Haven, CT. I was 16, and had saved for a year and was ready to buy my first 35mm SLR. The man behind the counter had already loaded the batteries into a Pentax Spotmatic SP1000. My mom looked at me and said “I have another $50, is there something you would like better than this?” I pointed at a Nikkormat Ftn with 50mm f/1.4 Auto-Nikkor – “yes, that!” … While I still love the Pentaxes, the Nikkormat opened the doors to shooting Nikon for the next two decades.

Oh, yes, how do I know the exact date? On the ride back home, the car’s A.M. radio informed me that ‘American Pie’ was now #1 on ‘America’s Top 40’. Thanks for the memory, Kasey Kasem.

Read the full comment

Canon TS-E 17mm F4L

Photo by DPR member John Crowe with Canon 17mm F4 L TS-E

John Crowe: After striving to improve my ultra wide angle photography for 25 years, through three different formats, I sold the 4×5 and 120 cameras and went all in on the Canon 17mm f4 L TS-E. That was almost 10 years ago, and soon realized that not only could I correct perspective but that I could also shift and stitch images together to create even wider views! It took a couple more years for the stitching software to catch up, but once it did, I could achieve the kind of results that I had been searching to create for decades.

Read the full comment

Sony a6000

Photo by DPR member Luddhi with the Sony a6000

Luddhi: …I was rarely taking my camera out as it was too heavy to take bush-walking so I pestered my local camera shop trying out all the lighter cameras until – against the advice of the shop, I bought a Sony a6000. This changed my life. I was able to carry it in my jacket pocket.

I carried it in my hand for about 6 hours through Washpool National Park after I tore my jacket pocket. I could take satisfactory photos one handed – important when holding onto a tree to lean out and take a shot of a ravine. Also whereas my grandchildren would flinch when they saw me with the 50D they practically ignore(d) the a6000. So I now have some good and some funny shots of my grandchildren that I otherwise would not have got.

Read the full comment

Canon AE-1

Photo by DPR member WoifC taken with the Canon AE-1 and Ilford FP4

WoifC: When I was 6 or 7 years old, my mother gave me a Canon AE-1 no one used… There was no film in it and I walked around, tried to focus on anything I found interesting and was soooo proud that I was allowed to push the shutter release button. That’s 30 years ago but I still remember that day and know that this was the day I fell in love with photography.

My son is now 8 years old (since Monday) and loves to take photos too. Sometimes he asks me to borrow my X-T2… and walks around taking photos like I did when I was as old as him. Maybe we will share this hobby when he is older. I hope so.

Read the full comment

Speed Graphic

Photo of minor league baseball images created by DPR member SRHEdD using 35mm and Speed Graphic cameras

SRHEdD: I worked for a rural ad agency and shot 35mm Nikons, but we hired a photographer with a Sinar 4×5 from a larger metropolitan area at great expense. On vacation, I saw an old Speed Graphic in its fiberboard case with two lenses and a half dozen film holders for $200 at an antique shop. It worked perfectly. I bought a Polaroid back when I got home and instantly replaced having to hire anyone else.

I shot food for a major poultry company, team photos for a minor league baseball team, and some great still lifes used for our clients’ annual reports, etc. I think it was then that I was comfortable calling myself a professional photographer.

Read the full comment

Foba camera stand

Photo of DPR member Jim Kasson with Foba camera stand

Jim Kasson: Lots of gear has allowed me to do things I couldn’t otherwise do. I couldn’t have done Staccato before the D3. I couldn’t have done much of Timescapes without the Betterlight scanning back. But the piece of gear that has changed my life the most in the past few years is a Foba camera stand. Setups that were a pain are now effortless.

Read the full comment


Fujifilm Will Award $90,000 in Gear as Part of “Students of Storytelling” Initiative

13 Apr

The post Fujifilm Will Award $90,000 in Gear as Part of “Students of Storytelling” Initiative appeared first on Digital Photography School. It was authored by Jaymes Dempsey.

Fujifilm Contest

If you’re a university student, or you’re interested in following the work of student photographers and videographers around the US, then I have good news:

Fujifilm is launching its Students of Storytelling contest, which awards 30 students up to $3,000 USD in Fujifilm gear.

Its purpose?

To help students tell their own stories through photography and videography.

As Fujifilm explains, “We are passionate about stories and truly believe that the future of storytelling rests in the hands of today’s college students. This is why the Students of Storytelling contest will award up to $3,000 of Fujifilm gear to a select group of winners to help bring their creative stories out into the light.”

Note that you don’t have to be an accomplished artist to take part; all current part-time and full-time college students are eligible, excepting Florida residents.

students of storytelling contest page

Fujifilm does offer entry guidelines, stating that the ideal proposal “should be designed to tell a cohesive ‘story’ of a human, or life-related experience, event, challenge, objective, relationship(s), approach, passion, and/or interest that may be depicted and effectively communicated through photographic images or video.”

Fujifilm also notes that participants will need to adhere to the CDC’s COVID-19 social distancing guidelines when carrying out their project.

The submission period goes until May 31st, during which eligible students can submit their proposed stories in written, video, or photographic format. The first half of June will be spent judging the entries, and students will be notified of their success at the end of June.

At that point, winners will be given the opportunity to choose Fujifilm equipment totaling up to $3,000 USD. Winners will then have 90 days to complete and submit their stories, which are to be shared via social media, as well as on Fujifilm’s Create Forever website.

So if you’re an eligible student, head on over to Fujifilm’s website, where you can submit your own proposal to be considered for the Students of Storytelling contest.

And for everyone else:

If you’re interested in following the contest and all the winners, be sure to check Create-Forever.com for updates.


Gear of the year 2019: Barney’s choice (part 2) – Nikon Z 50mm F1.8 S

16 Dec
Photo: Dan Bracaglia

We’ve been writing these articles for a few years now, and when it comes time to think about what I would pick as my ‘Gear of the Year’, I tend to go by two main criteria: What (if any) gear in the past 12 months did I actually spend my own money on, and what did I most enjoy using? And if those two criteria happen to be met by a single product, then there’s my answer. No further consideration required.

This year, two products met both of those criteria. The Ricoh GR III (which I wrote about here) and the Nikon Z 50mm F1.8 S. Clearly they’re very different things. One is an APS-C compact camera and the other is a lens for full-frame mirrorless cameras. But both have been in my camera bag almost every time I’ve gone out shooting in 2019.


Why do I love the Nikon Z 50mm F1.8 S so much? The boring answer is that it’s just really, really good. Historically, I’ve not been a big 50mm fan in general, and I will admit to being a bit of a snob about F1.8 lenses in the past. But the Z 50mm F1.8 S is so good – and so good at F1.8 – that it has changed my perspective on what a ‘nifty fifty’ can be.

I would estimate that of the thousands of frames I’ve shot with the Z 50mm this year, the vast majority have been taken at F1.8. With most of the standard lenses I’ve used during my career, that would not be a particularly smart move. Generally speaking, lenses of this type are at their best when stopped down slightly. But the Z 50mm F1.8 is almost as sharp wide open as it is stopped down, and at all apertures it’s largely free from common aberrations like longitudinal chromatic aberration.

Nikkor Z 50mm F1.8 S | ISO 100 | 1/800 sec | F1.8

There are plenty of 50mm lenses that give a more interesting rendering than the Z 50mm F1.8 S, but few which provide its biting cross-frame sharpness and virtually coma-free images at wide apertures. And it just so happens that those qualities ended up being crucial to me this year, when working on a long-term project down on Washington’s coast, during twilight clam digs. The combination of the Nikon Z7’s resolution and in-body stabilization and the Z 50mm’s sharpness and clean rendering at F1.8 proved invaluable, allowing me to get sharp, hand-held images in near-darkness that I could never have captured with a DSLR.


The fact that the weather-sealed Z7 and Z 50mm F1.8 S continued to work reliably and accurately for hours in heavy rain and strong winds is another major point in both their favor.

I’ve also come to really appreciate the Z 50mm F1.8 S for portraiture, despite its relatively short focal length, which discourages very tight framing. Bokeh isn’t the smoothest at wide apertures, but it’s smooth enough, and virtually free from colored fringing.

Nikkor Z 50mm F1.8 S | ISO 64 | 1/80 sec | F1.8

Of course, I’m lucky. Like almost all professional photography reviewers I get to try all kinds of different equipment, at no cost. When I do spend my own money on something, it’s because I’ve used it, probably quite extensively, and I’m very confident in my investment.

That means that I have to be careful to stay grounded when talking to our readers, especially when it comes to making value judgements about the cost of new gear. Personally, having used a lot of lenses, I think the Z 50mm F1.8 S’s price of around $600 is exceptionally good value, but I understand the complaints from some of you that $600 is a lot to pay for a 50mm F1.8. And a large-ish one at that, by traditional (if not current) standards.

The point I would make (and which I hope I made in this article) is that $600 spent now, on a modern lens designed for mirrorless, buys you greater performance than $600 ever has before. We are very lucky, as photographers, to be on the cusp of a new era in optics, where some of the old paradigms are being overturned. In the case of this particular lens, it’s probably the only 50mm I’ll ever need for my Z7. Not bad for $600.


Photoshop Adjustment Layers Explained and How to Use Them (Part 2)

08 Dec

The post Photoshop Adjustment Layers Explained and How to Use Them (Part 2) appeared first on Digital Photography School. It was authored by Nisha Ramroop.


Part 1 of How to Use Photoshop Adjustment Layers introduced you to the first eight adjustment layer editing tools, which allow you to work non-destructively. Here, we continue with some of the other tools available as adjustment layers.


1. Photo Filter

Did you know there are colored filters you can place in front of your camera lens to alter the color temperature and balance of your final image? The Photo Filter adjustment layer adds a similar color filter to your image.

There are many preset photo filters in Photoshop, but the most common are those that make your image warm or cool. You can further tweak each preset to your liking. For instance, you can change the density of the effect easily using the Density slider. There is also the Preserve Luminosity box to check so that the applied filter does not darken your image.

You can also choose an exact color to overlay as a filter by clicking on “Color” and choosing from the color menu, or by using the eyedropper tool to choose a color from your image.
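Under the hood, a rough approximation of such a filter is a tint blend followed by a luminosity rescale; this sketch is a guess at the general idea, not Adobe’s actual algorithm:

import numpy as np

def photo_filter(img, filter_rgb, density=0.25, preserve_luminosity=True):
    """Approximate a warming/cooling photo filter.

    img:        float HxWx3 image with values in [0, 1].
    filter_rgb: the filter color, e.g. (1.0, 0.6, 0.2) for warm orange.
    density:    strength of the effect, like the Density slider.
    """
    tinted = img * (1 - density) + img * np.asarray(filter_rgb) * density
    if preserve_luminosity:
        # Rescale so the mean luma (Rec. 709 weights) stays unchanged,
        # mimicking the Preserve Luminosity checkbox: the color shifts
        # without the image getting darker.
        luma = lambda im: float((im @ np.array([0.2126, 0.7152, 0.0722])).mean())
        tinted *= luma(img) / max(luma(tinted), 1e-6)
    return np.clip(tinted, 0.0, 1.0)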


Warm (oranges) and Cool (Blues) Photo filters applied to the image above

2. Channel Mixer

The Channel Mixer Photoshop Adjustment Layer is another great tool to create stunning black and white and tinted images.

The principle is similar to that used by the Black and White Adjustment Layer. In each of these, you can adjust the displayed grayscale image by changing the tonal values of the color elements of the image.

There are three channels in the RGB view: red, green and blue. Note: The source channel is the one that defaults to 100%. The Channel Mixer, therefore, allows you to combine and mix the best of each channel. It does this by adding (or subtracting) grayscale data from your source channel to another channel.

Also, of note, adding more color to a channel gives you a negative value and vice versa. Hence, at the end of your edit, it is advisable that all your numbers total 100%.
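In code, a monochrome channel mix is just a weighted sum of the three channels; the weights below are arbitrary examples that total 100%, per the advice above:

import numpy as np

def channel_mix_mono(img, r=0.6, g=0.3, b=0.1):
    """Monochrome Channel Mixer: a weighted sum of the RGB channels.

    img: float HxWx3 image in [0, 1]. The weights sum to 1.0 (100%),
    which keeps the overall brightness roughly constant.
    """
    gray = r * img[..., 0] + g * img[..., 1] + b * img[..., 2]
    return np.clip(gray, 0.0, 1.0)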


The Channel Mixer also allows you to exaggerate color and make creative color adjustments to your image.

3. Color Lookup

The Color Lookup adjustment layer uses presets to instantly color grade or change the “look” of your image. The presets are called LUTs or lookup tables. Each lookup table contains specific instructions for Photoshop to remap the colors in your image to a different set of colors to create the selected look.


Applying the Late Sunset LUT creates a dramatic finish

When you choose the Color Lookup Adjustment Layer, three options are available to you: 3DLUT File, Abstract and Device Link.

Most of the presets reside under the 3DLUT File option. Of note, the 3D in 3DLUT refers to Photoshop’s three RGB color channels (not three dimensions).


Late Sunset LUT applied at 60% opacity for a more realistic finish

Furthermore, LUTs are available for download from various websites, or you can create your own.

4. Invert

The Invert Photoshop Adjustment Layer is self-explanatory. It inverts the colors and is an easy way to make a negative of your image for an interesting effect.


The first image with colors inverted gives a surreal otherworldly effect

5. Posterize

Looking for a flat, poster-like finish? The Posterize Adjustment Layer gives you that by reducing the number of brightness values available in your image.

You can make an image have as much or as little detail as you like by selecting the number in the levels slider. The higher the number, the more detail your image has. The lower the number, the less detail your image has.

This can come in handy when you want to screenprint your image. You can limit the tones of black and white. This is also true of the Threshold Adjustment Layer.
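The operation underneath is simple quantization: each channel is snapped to the nearest of N evenly spaced levels. A minimal sketch (Threshold is essentially the two-level special case applied to luminance):

import numpy as np

def posterize(img, levels=4):
    """Quantize each channel to `levels` evenly spaced values.

    img: float HxWx3 image in [0, 1]. A low `levels` value gives the
    flattest, most poster-like result; higher values keep more detail.
    """
    steps = levels - 1
    return np.round(img * steps) / steps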


Posterize Adjustment Layer

6. Threshold

When you select Threshold from your Photoshop Adjustment Layers list, your image changes to black and white. By changing the Threshold Level value, you control the number of pixels that are black or white.

Image: Threshold Adjustment Layer

7. Gradient Map

The Gradient Map lets you map different colors to different tones in your image. The gradient fill, therefore, sets the colors representing both the shadow tones on one end and highlight tones on the other end of the gradient.

Meanwhile, checking the “Reverse” box swaps the colors of your gradient around: the shadow colors move to the highlights end and vice versa.

A good rule of thumb is to keep your shadows dark and your highlights brighter for ease of reference.
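For the curious, here is a toy two-color gradient map in Python: luminance picks each pixel's position along the gradient, and the two end colors (both hypothetical) are interpolated accordingly. Photoshop's gradients can of course have many more stops:

import numpy as np
from PIL import Image

img = np.asarray(Image.open("photo.jpg"), dtype=np.float32) / 255.0

# Luminance picks the position along the gradient (0 = shadow end, 1 = highlight end).
lum = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]

shadow    = np.array([25, 10, 60], dtype=np.float32)     # hypothetical deep purple
highlight = np.array([255, 220, 150], dtype=np.float32)  # hypothetical warm cream

# A two-stop gradient map: interpolate between the two colors by tone.
mapped = shadow + lum[..., None] * (highlight - shadow)
Image.fromarray(mapped.astype(np.uint8)).save("gradient_map.jpg")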

Photoshop Adjustment Layers Explained and How to Use Them (Part 2)

The Gradient Map also offers many presets, adjustable via the Gradient Editor window. You can additionally create your own gradients by changing the slider colors.

8. Selective Color

Use the Selective Color Adjustment Layer to modify specific amounts of a primary color without modifying other primary colors in your image. Check the Absolute box if you want to adjust the color in absolute values.

Example: If you have a pixel that is 50% yellow and you add 10%, you are now at a 60% total. The Relative box is a little more complicated, as it scales the adjustment by how much of that color is already present. Using the same example, adding 10% with Relative checked actually adds 10% of the existing 50% (that is, 5%), bringing your total to 55%. Relative, therefore, gives you a more subtle effect.
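The two modes are easy to express as a toy formula. This tiny Python sketch only models the checkbox arithmetic described above, not Adobe's full implementation:

def selective_color(current, adjustment, relative=False):
    # Toy model of the Absolute vs. Relative checkbox arithmetic.
    if relative:
        return current + current * adjustment  # scaled by what is already there
    return current + adjustment                # added outright

print(selective_color(0.50, 0.10))                 # 0.60 -> Absolute
print(selective_color(0.50, 0.10, relative=True))  # 0.55 -> Relative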

Photoshop Adjustment Layers Explained and How to Use Them (Part 2)

However, the potential of this tool goes well beyond this simple example. You can use it to correct skin tones and for general toning.

While selective color adjustments are similar to hue/saturation adjustments, there are subtle differences. Selective Color allows you to subtract/add color values, whereas Hue/Saturation does not.

The Hue/Saturation adjustment, on the other hand, lets you work with a custom range of hues rather than the six fixed color ranges of Selective Color, so there is more control there if you need it.

Conclusion

These basic examples of how to use the Photoshop Adjustment Layers tools merely scratch the surface of their capabilities. Whether you are just starting out or are an advanced user, you will certainly appreciate editing non-destructively.

Some of the adjustment layers seem similar, but each has its differences, with its own pros and cons. Whichever you use, there are many possibilities for playing around with your image while preserving the original.

If you haven’t already, be sure to check out Part 1 in this series.

Do you use Photoshop Adjustment Layers? If so, which ones do you use and why? Share with us in the comments.

The post Photoshop Adjustment Layers Explained and How to Use Them (Part 2) appeared first on Digital Photography School. It was authored by Nisha Ramroop.


Digital Photography School

 
Comments Off on Photoshop Adjustment Layers Explained and How to Use Them (Part 2)

Posted in Photography

 

Gear of the Year – Barney’s choice part 1: Ricoh GR III

05 Dec
Photo: Dan Bracaglia

I’m of the opinion that if you use a phrase like ‘shut up and take my money’ in the title of an article about a camera, you’d damned well better buy it. It’s not about gear acquisition (honest, it isn’t); it’s about reader trust.

Yeah, right. But either way, I was serious. It wasn’t long after writing our review of the Ricoh GR III that I bought my own, right before a trip to Japan this summer. I’ve been to Japan a few times for work, but this was to be a proper vacation for once. Just me, a couple of guidebooks, some depressing podcasts and a sturdy pair of hiking boots. And the GR III.

In the end, it wasn’t all vacation (one of those “Hey, so we’re planning a video project in Japan, and since you’re going to be there anyway…” things), but I did get in a decent amount of hiking, and the GR III was with me every step of the way.

ISO 200 | 1/400 sec | F5.6

The GR III wasn’t the only personal camera I took to Japan (I also grabbed my Nikon Z7 with a 24-70mm F2.8 lens, just to be on the safe side) but it was the one I ended up using most. Partly that’s because it’s a great camera and I love the images that come out of it, but that’s equally true of the Z7. Mostly it’s because the GR III is small enough to fit into a shirt pocket.

In terms of image quality, the new sensor in the GR III offers a useful resolution boost over its predecessors, but more important to me is the addition of stabilization and a major increase in usable Raw dynamic range.

ISO 160 | 1/400 sec | F7.1

There’s no doubt that 28mm equiv. is a limiting focal length, but it also turns out to be perfect for trail landscapes and for quick grab shots walking around cities. Considering that the GR III is barely any bigger than my phone (albeit thicker), it’s hard to imagine a better traveling companion, provided of course that you don’t need to shoot video.

Downsides? Naturally there are a few. The aforementioned uninspiring video mode, for one, but aside from that, the GR III’s maximum aperture of F2.8 means there’s very little scope for creative depth of field control, and while built-in stabilization helps, low light shooting often ends up meaning high ISO shooting.

ISO 640 | 1/40 sec | F4

There’s no built-in flash, which I know some GR/II fans will sincerely miss; the battery is tiny (but offers more stamina than you might expect in normal use); and there’s no EVF. Outside on a sunny day it’s not always easy to get an accurate idea of composition on the shiny rear screen, and it’s hard even to make out the horizon level indicator when shooting in especially bright conditions.

It’s a pocketable and silent camera with a very sharp lens, which can get you pictures that larger, louder cameras simply cannot.

Of course you can boost the screen brightness, and you can also add an optical finder. Neither is a perfect solution, though. Bumping up the brightness kills battery life, and with a finder, framing becomes approximate, there’s no shooting data in your eye-line (obviously), and the GR III suddenly gets less pocketable.

Being such a small camera, the GR III’s controls are also rather cramped in general, but that comes with the territory.

Like many cameras of its type, the GR III is arguably at its best when used as a point and shoot, but that doesn’t mean you can’t (or shouldn’t) take full control. The GR III offers full manual exposure control and retains the top control dial from previous generations, which for an aperture-priority photographer such as myself is probably the most important single control point. A large, responsive touchscreen takes care of almost everything else.

ISO 1600 | 1/40 sec | F2.8

Although some GR/II fans will miss those cameras’ dedicated +/- rocker switch for exposure compensation, the rear jog switch on the GR III can be set up to do the exact same thing, and users of previous generations will be reassured to know that it’s just as easy to accidentally hit.

That was sarcasm. For the most part, the GR III does exactly what I want it to, when I want it to, and it’s exactly in line with what Ricoh has aimed to provide from the very beginning of the GR series way back in the 1990s. The GR III is a pocketable and silent camera with a very sharp lens, which precisely for those reasons can get me pictures that larger, louder cameras simply cannot.

Like all cameras, it has some limitations. Many of these are inherent to the design and form factor, but all are forgivable and in my opinion none devalue its main selling points.

For all of these reasons, my first choice for Gear of the Year is a camera that I’ve carried with me more than any other in 2019, not including my phone: the Ricoh GR III.

Watch out for Part 2 of my personal ‘Gear of the Year’ in a few days.


Ricoh GR III sample gallery

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Gear of the Year – Barney’s choice part 1: Ricoh GR III

Posted in Uncategorized

 

Photoshop Adjustment Layers Explained and How to Use Them (Part 1)

02 Dec

The post Photoshop Adjustment Layers Explained and How to Use Them (Part 1) appeared first on Digital Photography School. It was authored by Nisha Ramroop.

Photoshop Adjustment Layers Explained and How to Use Them (Part 1)

If you use Photoshop, you probably already know that layers are a great non-destructive way to edit. Within the realm of layers, there exists a group of very useful editing tools called Adjustment Layers that allow for easy editing of your images. As with most Photoshop tools, there are several ways to achieve the same result. When you use Photoshop Adjustment Layers (as with other layer types), you can make changes, save the file as a Photoshop document (PSD), and undo or change those edits many years later. Since no pixels are destroyed or changed, your original image stays intact. Let’s take a look at the basics of using Photoshop Adjustment Layers.

Accessing Photoshop Adjustment Layers

There are two ways to access Photoshop Adjustment Layers.

1. To access via the Layers menu, choose Layer->New Adjustment Layer and pick one of the many adjustment types (which are expanded upon below).

photoshop-adjustment-layers-explained

2. To access via the Layers Panel, click on the half-black/half-white circle at the bottom of the Layers Panel and choose the adjustment type you want to work with.

Photoshop Adjustment Layers Explained and How to Use Them (Part 1)

Adjustment Layer Types

1. Brightness and Contrast

Brightness and Contrast allow you to make simple adjustments to the brightness and contrast levels within your photo. When you adjust brightness, the overall lightness (or darkness) of each pixel in your frame is changed. To increase a photo’s tonal values and increase the highlights, slide the Brightness to the right. To decrease a photo’s tonal values and increase the shadows, slide the Brightness to the left.

Contrast, however, adjusts the difference in brightness between the elements in your image. Thus, if you increase brightness you make every pixel lighter, whereas if you increase contrast you make the light areas lighter and the dark areas darker.
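A toy model makes the difference obvious. In this Python sketch (a simplification, not Adobe's exact math), brightness is an equal shift for every pixel, while contrast stretches tones away from mid-gray:

import numpy as np

def brightness_contrast(img, brightness=0.0, contrast=1.0):
    # Brightness shifts every pixel equally; contrast stretches tones
    # away from (or squeezes them toward) mid-gray at 127.5.
    out = img.astype(np.float32) + brightness
    out = (out - 127.5) * contrast + 127.5
    return np.clip(out, 0, 255).astype(np.uint8)

# e.g. brightness_contrast(img, brightness=20, contrast=1.2) on a uint8 RGB array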

photoshop-adjustment-layers-explained

2. Levels

The Levels tool adjusts the tonal range and color balance of your image. It does this by adjusting the intensity levels of the shadows, mid-tones, and highlights. Levels presets can be saved and then easily applied to other images.

Of note, if you use the Image menu to open the Levels tool (Image->Adjustments->Levels), a separate layer will not be created and the changes will be committed directly (destructively) to your image layer. I therefore recommend using the Adjustment Layers menu (as shown above) to access this very useful tool.
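Conceptually, Levels remaps an input range onto the full output range. Here is a simplified Python model, with gamma standing in for the mid-tone slider (Photoshop's own processing is more refined):

import numpy as np

def levels(img, black=0, white=255, gamma=1.0):
    # Remap the [black, white] input range to [0, 255];
    # gamma > 1 lifts the mid-tones, gamma < 1 darkens them.
    x = np.clip((img.astype(np.float32) - black) / (white - black), 0, 1)
    return (255 * x ** (1.0 / gamma)).astype(np.uint8)

# e.g. levels(img, black=15, white=240, gamma=1.1) on a uint8 array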

Photoshop Adjustment Layers Explained and How to Use Them (Part 1)

3. Curves

While the Levels adjustment allows you to adjust all the tones proportionally in your image, the Curves adjustment lets you choose the section of the tonal scale you want to change. On the Curves graph, the upper-right area represents the highlights, while the lower-left area represents the shadows.

Use either of these adjustments (Levels or Curves) to correct your tones when your image’s contrast is off (either too low or too high).

The Levels adjustment works well if you need to apply a global adjustment to your tones. For more selective adjustments, you are better off using Curves: for example, adjusting just a small section of the tonal range, or targeting only the light or dark tones.
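A curve is essentially a per-tone lookup built from control points. This Python sketch uses simple linear interpolation between hypothetical points (Photoshop fits a smoother spline) to apply a gentle S-curve:

import numpy as np
from PIL import Image

img = np.asarray(Image.open("photo.jpg"))  # hypothetical uint8 RGB file

# Control points (input tone -> output tone), like dragging points on the grid.
# This S-curve darkens the shadows and brightens the highlights.
pts_in  = [0, 64, 128, 192, 255]
pts_out = [0, 48, 128, 208, 255]

curve = np.interp(np.arange(256), pts_in, pts_out).astype(np.uint8)
Image.fromarray(curve[img]).save("s_curve.jpg")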

photoshop-adjustment-layers-explained

4. Exposure

When you think of exposing an image properly, you are concerned with capturing the ideal brightness, which will give you details in both the highlights and shadows. In Photoshop Adjustment Layers, the Exposure Adjustment has three sliders that adjust Exposure, Offset and Gamma.

Use the Exposure slider to adjust the highlights of the image, the Offset slider for the mid-tones and the Gamma to target the dark tones only.
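Roughly speaking, the three sliders behave like this toy Python model, which works on normalized 0-1 pixel values (again, a simplification of Adobe's actual processing):

import numpy as np

def exposure(img, exposure_stops=0.0, offset=0.0, gamma=1.0):
    x = img.astype(np.float32) / 255.0
    x = x * (2.0 ** exposure_stops)             # Exposure: acts most visibly on highlights
    x = x + offset                              # Offset: shifts the mid-tones and shadows
    x = np.clip(x, 0.0, 1.0) ** (1.0 / gamma)   # Gamma: bends the darker tones
    return (255 * x).astype(np.uint8)

# e.g. exposure(img, exposure_stops=0.5, offset=-0.02, gamma=1.1)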

Photoshop Adjustment Layers Explained and How to Use Them (Part 1)

5. Vibrance

Use the Vibrance Adjustment Layer to boost the duller colors in your image. The great thing about increasing vibrance is that it focuses on the less-saturated areas and does not affect colors that are already saturated.
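That selectivity can be modeled by scaling the boost with each pixel's existing saturation. The following Python sketch is a crude approximation of the idea, not Photoshop's actual algorithm:

import numpy as np

def vibrance(img, amount=0.5):
    # Toy vibrance: the duller a pixel already is, the bigger its saturation boost.
    x = img.astype(np.float32)
    sat = (x.max(axis=-1, keepdims=True) - x.min(axis=-1, keepdims=True)) / 255.0
    boost = 1.0 + amount * (1.0 - sat)      # near-gray pixels get the full push
    mean = x.mean(axis=-1, keepdims=True)
    return np.clip(mean + (x - mean) * boost, 0, 255).astype(np.uint8)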

Image: Vibrance adjusts only the duller colors in an image

photoshop-adjustment-layers-explained

Look at the difference in the greens between this image and the one above. Saturation adjusts all the colors (and tonal range) in an image.

6. Hue/Saturation

Hue/Saturation allows you to change the overall color hue of your image, as well as how saturated its colors are.

You can change the hue (color) of your entire image by keeping “Master” selected in the dropdown (the default). Alternatively, you can pinpoint the color whose hue you would like to change, choosing from Reds, Yellows, Greens, Cyans, Blues or Magentas.

In addition to adjusting the obvious hue and color saturation of your image, this Photoshop Adjustment Layer allows you to adjust the lightness of your entire image as well as work with specified colors. Keep in mind that changing the overall saturation of an image affects your tonal range.
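At its core this is a transformation in hue/saturation/value space. Here is a minimal Python sketch using Pillow's HSV mode, with a hypothetical file name and arbitrary shift amounts:

import numpy as np
from PIL import Image

hsv = Image.open("photo.jpg").convert("HSV")
h, s, v = [np.asarray(c, dtype=np.int16) for c in hsv.split()]

h = (h + 30) % 256               # rotate every hue (256 steps = one full circle)
s = np.clip(s * 1.2, 0, 255)     # push saturation up 20%

channels = [Image.fromarray(c.astype(np.uint8)) for c in (h, s, v)]
Image.merge("HSV", channels).convert("RGB").save("hue_shift.jpg")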

Image: Use the Hue Adjustment to get creative

Color Balance

The Color Balance Adjustment layer is used to change the overall mixture of colors in an image and works well for color correction.

photoshop-adjustment-layers-explained

Color Balance adjusted for the mid-tones to include more red

You first need to select either Shadows, Midtones or Highlights to choose the tonal range you want to change.

Check the Preserve Luminosity box to preserve your luminosity values (brightness or darkness) and maintain the tonal balance as you change the color in your image. Move your slider toward the color you want to increase and away from the color you wish to decrease.
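One way to picture a mid-tone shift is to weight a color offset by how mid-toned each pixel is, so the shift fades out in the deep shadows and bright highlights. This Python sketch is a hypothetical model of that behavior, not Adobe's implementation:

import numpy as np
from PIL import Image

img = np.asarray(Image.open("photo.jpg"), dtype=np.float32)  # hypothetical file

# Weight each pixel by how "mid-tone" it is (1.0 at mid-gray, 0.0 at the extremes).
lum = img.mean(axis=-1, keepdims=True) / 255.0
midtone_weight = 1.0 - np.abs(lum - 0.5) * 2.0

shift = np.array([15.0, 0.0, -5.0])  # hypothetical: toward red, slightly away from blue
balanced = np.clip(img + shift * midtone_weight, 0, 255).astype(np.uint8)
Image.fromarray(balanced).save("color_balance.jpg")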

Black and White

As the name implies, the Black and White Adjustment Layer allows you to easily convert your image to grayscale, or to apply a color tint over the entire image.

There are many ways to achieve a black and white conversion, and the Black and White Photoshop Adjustment Layer is one of the better ones. It allows you to lighten or darken specific color ranges to enhance your black and white conversion. Example: if you want the blues of your color image to stand out more when converted to black and white, simply adjust that slider. You can add more or less contrast by making particular colors lighter or darker.

photoshop-adjustment-layers-explained

1. When you choose the Black & White Adjustment Layer, you get a default black and white conversion.
2. You can tweak the image based on selective colors. In this example, the blues and yellows were adjusted.
3. You can apply a tint (of any color) over the entire image by ticking the Tint box and selecting the color you wish to overlay.

Important Note: While most of these adjustments are available under the Image menu (Image->Adjustments), using them from there does not work the same way. The main difference is that they are applied directly to the image (destructively), as opposed to when done via Adjustment Layers, where you can turn the adjustment on and off by selecting and deselecting the “eye” in the Layers panel.

Conclusion

Photoshop Adjustment Layers are a great group of tools that allow you to smartly edit your image in a non-destructive way. Your original pixels are preserved, so you are able to come back and change your edits years later. Thus, they give you the power to undo more easily and work more efficiently.

Photoshop Adjustment Layers group together the most common editing tasks, along with a few others to help you bring your images to life.

In Part 2, we will explore some other tools in the Adjustment suite.

Share with us in the comments your favorite adjustment tool and how you use them.

The post Photoshop Adjustment Layers Explained and How to Use Them (Part 1) appeared first on Digital Photography School. It was authored by Nisha Ramroop.


Digital Photography School

 
Comments Off on Photoshop Adjustment Layers Explained and How to Use Them (Part 1)

Posted in Photography