
Posts Tagged ‘Explained’

Your Camera’s Metering System Explained

09 Jan

The post Your Camera’s Metering System Explained appeared first on Digital Photography School. It was authored by Herb Paynter.


Your camera’s metering system “reads” the light reflecting from surfaces in front of the lens and reports its findings both in the viewfinder and on the LCD display.

This information serves as a guideline for accurately setting your camera’s exposure controls. It’s quite important that you know what information your camera is providing, and how to best make use of that data if you wish to achieve accurate color.

f/3.2, 1/60, ISO 3200, Lumix Vario G 12-35/2.8, 17mm, Pattern Metering

Your camera’s metering system suggests the amount of light needed to render middle gray, or typical skin tone, in a scene. The quantity it measures is called luminosity: a techie term that describes light as visual volume, or how much light there is rather than what color it is.

That means that the reading taken by the meter reflects (pun intended) the correct exposure setting needed to render either a specific spot, or the average lighting range within an entire scene, with the correct amount of light to deliver a proper exposure.

The exposure your camera is looking for is referred to as 18% gray balance. This particular tone approximates the reflective density of middle gray as perceived by human sight, which is why professional photographers usually pack an 18% gray card in their camera bags as a reference.

f/2.8, 1/3200, ISO 400, Lumix Vario G 12-35/2.8, 32mm, Pattern Metering
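If you like numbers, the meter’s recommendation can be sketched with the standard reflected-light meter formula, EV = log2(L × S / K). The calibration constant K (about 12.5 for several manufacturers) and the sample values below are illustrative, not any specific camera’s firmware:

```python
import math

# Reflected-light meter equation: EV = log2(L * S / K)
# L = scene luminance in cd/m^2, S = ISO speed, K = calibration constant
K = 12.5  # common reflected-light calibration constant

def exposure_value(luminance, iso):
    """EV the meter recommends so the metered area renders as middle gray."""
    return math.log2(luminance * iso / K)

# A scene patch of 128 cd/m^2 metered at ISO 100 works out to EV 10
ev = exposure_value(128, 100)
```

Any combination of aperture and shutter speed that adds up to that EV will place the metered tone at middle gray.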

Colors are seriously influenced by tonality, which is the balance between dark and light tones. The same colors seen in ample light as bright and colorful tend to appear dark and muted in dim lighting. This is quite a natural occurrence that happens with your eye just as it does with your camera.

Remember, your eye uses the color-insensitive rods to see images in dimly-lit areas because the color-sensitive cones don’t see well in the dark.

Setting your camera to capture specific lighting conditions requires accurate feedback from your camera’s light metering system.

Metering systems

Your camera provides at least two types of reflected light analysis: matrix and spot.

Some cameras offer several variations of these two systems. The one you choose will make a significant difference in your photo.

Matrix metering involves light that is averaged or integrated from the entire scene. Spot Metering measures light in a specific part of the scene.
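The difference between the two is easy to see on a toy luminance map. This is just an illustration with made-up numbers, not how any real camera segments its frame:

```python
import numpy as np

# Toy luminance map: a dark frame with a bright patch in the middle
scene = np.full((6, 6), 0.1)
scene[2:4, 2:4] = 0.9

matrix_reading = scene.mean()           # whole-frame average
spot_reading = scene[2:4, 2:4].mean()   # small central "spot" only
```

The matrix reading is dragged down by the dark surroundings, while the spot reading reports only the bright subject, so the two would suggest very different exposures.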

You should understand the fundamental difference between the way your eyes register light and the way your camera’s metering system measures light.


First, your eyes…

When viewing a subject in a dimly-lit area of a scene, your mind tells your eyes where the subject is, and your eyes then focus on that particular area.

This allows the iris (your eye’s equivalent of your camera’s aperture) to dilate, or open. As it adjusts to the dim light, the iris admits a sufficient volume of light into your eyes, allowing you to see the dimly-lit subject in all its detail.

In this way, your eyes actually use a “spot metering” system.

Your camera, on the other hand, must be specifically set to read distinct areas of an image. If your camera is set to Matrix metering, reading a specific spot requires some nimble fingers and quick thinking.

Some of the trickiest pictures to capture involve extreme lighting contrast, that is, scenes that contain both bright sunlit areas and important detail in the dark shade.

Spot Metering

Capturing accurate color when the subject is located in the shadow areas of a scene generally requires setting your camera to Spot Metering and centering the viewfinder on the subject.

f/4.5, 1/80, ISO 200, Lumix Vario G 12-35/2.8, 26mm, Spot Metering

Spot Metering concentrates on a small “spot” area in the middle of the viewfinder to evaluate light. You can usually set the size of this “spot” in your camera’s Preferences.

Spot metering requires your camera to register the light reflecting from a specific area, ensuring that that particular element will be perfectly exposed.

Matrix Metering

Matrix metering considers all the lighting in the scene, with greater importance given to the center of the frame. Matrix metering is usually a digital camera’s default metering mode.

When a scene contains several different areas of light intensity, your camera has to decide on how important that light is to the overall exposure. It delivers a reading that will attempt to capture the full range of light in the scene.

When a bright light appears in the corner of a scene, that light only proportionately influences the overall exposure in the image. When the center of the frame (as seen in the LCD preview window) is darker, and the outside edge of the frame is lighter, the center portion of the scene (usually being the center of focus), heavily influences the overall exposure, allowing more light to enter the lens.

The opposite lighting scenario influences the exposure in the other direction, reducing the overall exposure to favor the lighting in the center portion of the frame. DSLR cameras allow this lighting “favoritism” to be overridden, or redirected, by the photographer.

Matrix metering looks at a scene in segments and makes calculated decisions based on the mix of lighting in the various segments.

While each camera manufacturer has its own patented (and quite secret) metering configurations, most cameras employ some form of matrix metering as their standard exposure system.

f/3.5, 1/160, ISO 200, Lumix Vario G 12-35/2.8, 26mm, Matrix Metering

Cameras cannot expose individual segments of the picture separately, but by keeping stray light from entering your camera’s lens and using your camera’s matrix metering system, the tonal curve (the determination of the middle tone of the scene) can be shifted in such a way as to place more light on the subject.

Matrix and Spot metering modes allow your camera to measure the color and set the proper exposure even in challenging lighting conditions. Most cameras provide this dual-metering capability and provide very specific controls over exposure even though some camera owners are not aware of it.

But remember, relying too much on any of your camera’s Auto functions can compromise an otherwise spectacular color picture. If your camera has both Spot and Matrix metering capabilities, choose deliberately whether the scene, the subject, or a small portion of the subject should determine your camera’s exposure.

f/5, 1/800, ISO 800, Lumix Vario G 35-100/2.8, 93mm, Center weight-Average Metering

Center-Weighted Metering

Somewhere between Spot and Matrix Metering is a light metering system called Center-Weighted Metering. This system reads lighting from several points around the frame but gives more preference to an enlarged area in the center of the frame. This form of metering was very prevalent before the more advanced development of Matrix-Metering Systems, and still offers a very viable method of addressing scene lighting.
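Center-weighting can be sketched the same way: every part of the frame is read, but pixels near the center get a bigger say. The weighting function below is invented for illustration; real cameras use their own proprietary weightings:

```python
import numpy as np

scene = np.full((5, 5), 0.2)
scene[2, 2] = 1.0  # bright subject in the center of the frame

# Simple center-weighted kernel: the middle of the frame counts more
y, x = np.indices(scene.shape)
dist = np.hypot(y - 2, x - 2)       # distance from the frame center
weights = 1.0 / (1.0 + dist)        # closer to center => larger weight
weights /= weights.sum()            # normalize so the weights total 1

center_weighted = float((scene * weights).sum())
plain_average = float(scene.mean())
```

Because the bright center pixel carries the largest weight, the center-weighted reading comes out higher than a plain matrix-style average of the same frame.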

Regardless of which system you use, the composite (averaged) light value actually measured by your camera will indicate the equivalent tonal value typically assigned to skin tones.

Professional photographers use a handheld meter to read the light reflecting from a subject’s face because human skin color is the general tonal value on which all camera exposures are determined.

f/1.8, 1/2000, ISO 25, iPhone XR,1.8, 4.25mm, Centre-Weighted Metering

Pattern Metering

“Pattern” metering, named in some of the captions above, is simply another manufacturer’s term for matrix-style metering.

Always remember that the area your camera ultimately reads will be captured as a middle tonal value. Take your reading from extremely dark tones, and those tones will be rendered as a medium tonal value, pushing lighter tones toward very light.

Extremely light areas will, in turn, be exposed as if they were significantly darker than they actually are, and you could completely lose detail in the darker areas of the image. So, be very careful about where you take your readings.
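In other words, the meter always drags whatever it reads toward middle gray. A quick back-of-the-envelope sketch (18% is the assumed middle-gray reflectance):

```python
import math

MIDDLE_GRAY = 0.18  # reflectance the meter aims to reproduce

def metering_shift_stops(metered_reflectance):
    """How many stops the camera brightens (+) or darkens (-) the whole
    scene when the metered area is rendered as middle gray."""
    return math.log2(MIDDLE_GRAY / metered_reflectance)

# Metering a dark surface (4.5% reflectance) overexposes by two stops
shift = metering_shift_stops(0.045)
```

The same math run on a bright white surface gives a negative number: the camera darkens everything to drag that white down to gray.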

Conclusion

Most cameras offer exposure compensation controls that allow the photographer to override the metering system and adjust the overall exposure. This feature usually provides adjustments, both up and down, in one-third-stop increments. It can be helpful if the metering system consistently delivers under- or overexposed images.
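Each full stop of compensation doubles or halves the light, so a third-stop increment multiplies the exposure by the cube root of two. A one-line sketch:

```python
# Exposure compensation in third-stop increments: each full stop (three
# thirds) doubles or halves the light reaching the sensor.
def exposure_factor(thirds):
    """Light multiplier for a compensation of `thirds` third-stops."""
    return 2.0 ** (thirds / 3.0)
```

So dialing in +1 stop (three clicks on most cameras) doubles the light, and -1 stop halves it.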

Learning to control your camera’s metering system and apply the correct mode will deliver more attractive and dynamic results. Take control, and you’ll be very pleased with the outcome.


Photoshop Sharpening for Beginners – Unsharp Mask, High Pass and Smart Sharpen Explained

02 Jan

The post Photoshop Sharpening for Beginners – Unsharp Mask, High Pass and Smart Sharpen Explained appeared first on Digital Photography School. It was authored by Nisha Ramroop.


With the constant improvement of technology, it is realistic to expect tack-sharp images straight out of your camera. Many times, though, an image appears sharper on your camera than it does once uploaded to your computer. This is because images need to be sharpened for their specified output. In this guide to Photoshop sharpening for beginners, we look at why and how to sharpen your images, and review some of the sharpening filters and techniques that make a huge difference to the overall quality of your final edit.

F/5.6 1/200/ ISO800 @420mm

Why sharpen?

Before we delve into the photoshop sharpening tools, it is important to know why and when you need to sharpen your images. The premise of sharpening is that it increases the contrasts around the edges in your image.

As previously mentioned, you need to sharpen images for their specific output types. This means you sharpen an image displayed on the internet differently than one meant for print.

Sharpening is also very useful when it comes to correcting smaller focus issues. For example, if you are shooting a portrait and the focus is on the ear instead of the eyes.

Sharpening the subject’s eyes brings the attention back where you intended. In other genres of photography, use sharpening in pretty much this same way. Sharpen the areas you want to draw the viewer’s eyes to and decrease the sharpness in areas you want to “fall away”.

Sharpen for your respective output: web vs print. Details make a difference F/11 1/250 ISO200 @10mm

Keep in mind that while sharpening takes your image to the next level, there are times when it is not needed. While sharpening a person’s eyes and lips can take their portrait to the next level, you want to avoid sharpening their skin!

How to sharpen

As with other editing techniques, Photoshop’s sharpening filters are destructive when applied directly to your image. Use layers and Smart Objects to get the most from the following sharpening tools while avoiding permanent changes to your original.

Bonus Tip: When post-processing, save the application of your sharpening technique for last.

Unsharp Mask

While the name “unsharp” sounds like a tool that would make your images less sharp, this counter-intuitively named filter is, in fact, a sharpening tool. The name comes from the darkroom technique it mimics: a blurred (“unsharp”) copy of the image is used to boost edge contrast in the original. Interestingly enough, it was regarded as the best tool to sharpen images in earlier iterations of Photoshop.

While it is still useful, some of the other tools afford you a greater amount of control.

Photoshop sharpening for beginners notes: When you sharpen an image too much, it starts to look a little noisy.

To use Unsharp Mask:

  • Duplicate your original/background layer
  • Right-click on your new layer and choose “Convert to Smart Object”
  • Go to Filter -> Sharpen -> Unsharp Mask. This brings up a dialog box with the options: Amount, Radius and Threshold

Remember, we said that sharpening basically increases the contrast around the edges in your image? Well, building on that will make these sliders easier to understand.

Use the Amount slider to increase or decrease the amount of contrast in the edges of your image.

Radius manages the level of detail. So a smaller radius will manage the smaller details, while a large radius affects a bigger area.

The last slider, Threshold, sets how different a pixel must be from its surroundings before it gets sharpened, letting you sharpen real edges while leaving smoother areas alone.
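Putting the three sliders together, the whole filter can be sketched in a few lines of Python. This is a simplification (a box blur stands in for Photoshop’s Gaussian blur, and pixel values run from 0 to 1), not Adobe’s actual implementation:

```python
import numpy as np

def box_blur(img):
    """3x3 mean filter (a stand-in for Photoshop's Gaussian blur)."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def unsharp_mask(img, amount=1.0, threshold=0.0):
    """Add back `amount` of the edge detail the blur removed, but only
    where that detail exceeds `threshold` (so smooth areas stay untouched)."""
    detail = img - box_blur(img)
    return img + amount * detail * (np.abs(detail) >= threshold)

# A hard vertical edge: sharpening overshoots on both sides of it,
# which is exactly the added edge contrast your eye reads as "sharper"
edge = np.zeros((4, 4))
edge[:, 2:] = 1.0
sharpened = unsharp_mask(edge, amount=1.0)
```

Raising `amount` exaggerates the overshoot, and raising `threshold` leaves low-contrast areas (skin, sky) alone.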

When using these sliders, a good starting place is to figure out your radius first. Do you want to sharpen the smaller details or the larger ones? If you are still unsure how to work with Photoshop sharpening for beginners, experiment!

In this example, push your radius up and then work back down until you affect the areas that you want. From here, you can start moving around the other two sliders until you get your desired results.

One of the downsides of using Unsharp Mask is that it affects your entire layer, so you need Layer Masking to hide the sharpening in areas where you don’t want it.

Smart Sharpen

The Smart Sharpen tool is like a child of Unsharp Mask, as it gives you some more options. One of the cool things about Smart Sharpen is that it can ignore skin detail and focus on areas of higher contrast, e.g., eyes, eyebrows, and lips.

Use the same workflow as above to access the Smart Sharpen filter. The dialog box gives you more advanced options than the previous Unsharp Mask. Amount and Radius work the same way as previously described.

F/5.6 1/1600 ISO100 @420mm

A drop-down menu allows you to remove different types of blurs such as Gaussian, Lens and Motion blur. Lens blur is the most common removal used in this menu.

In Photoshop CC, your menu looks a little differently but has all the same options. Additionally, it includes a useful option to reduce noise.

Smart Sharpen also allows you to create presets. This is helpful if you are sharpening a batch of images at similar focal lengths. Remember, when you apply your filters/filter preset to a Smart Object, you can further adjust/refine it.

High Pass Sharpening

Another beginner-friendly Photoshop sharpening technique is High Pass sharpening. While it is a little more advanced than the other tools, you can still apply it easily. It is not a filter found in the Sharpen menu like the Unsharp Mask and Smart Sharpen filters; rather, it is a combination of steps that sharpens your image.

To process with High Pass Sharpening:

  • Duplicate your layer
  • Desaturate your new layer. It seems like a strange step, but since sharpening increases saturation around your edges, your image may start to look surreal.
  • Right-click on your desaturated layer and choose “Convert to Smart Object”
  • Go to Filter -> Other -> High Pass. Your entire image turns gray, and as you move the Radius slider, you will see the targeted detail areas appear.

Left: Using High-Pass in the default normal mode, see how your edges are defined. Right: Using High-Pass in Overlay Blend Mode allows you to see your changes in real-time

There is, of course, an alternate approach that skips the gray preview and lets you see your edit in real-time.

Prior to the last step above:

  • Select your desaturated layer and go to Blend Modes
  • Select Overlay
  • Then go to Filter -> Other -> High Pass and adjust
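The idea behind these steps can be sketched numerically: the High Pass result is mid-gray wherever the image is flat, and the Overlay blend leaves the image untouched wherever its blend layer is that neutral gray, so only the edges get pushed. Values here run from 0 to 1, and this is an illustration rather than Photoshop’s exact math:

```python
import numpy as np

def high_pass(img, blurred):
    # Detail layer: mid-gray (0.5) wherever the image is flat
    return img - blurred + 0.5

def overlay(base, blend):
    # Standard Overlay blend formula for values in [0, 1]
    return np.where(base < 0.5,
                    2 * base * blend,
                    1 - 2 * (1 - base) * (1 - blend))

flat = np.full((3, 3), 0.4)
hp = high_pass(flat, flat)      # a flat area -> uniform 0.5 gray
blended = overlay(flat, hp)     # overlaying neutral gray changes nothing
```

On a real image, only the pixels near edges differ from 0.5 in the High Pass layer, so only those pixels get lightened or darkened: sharpening without touching smooth areas.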

At full size, you don’t always see that the eyes are not as sharp as they can be. Like Smart Sharpen, the eyes and lips are sharpened, but the skin is unaffected. F/7.1 1/125 ISO100 @70mm

Note: This sharpening technique is the only one that lets you apply sharpening with different blend modes.

Conclusion

Remember, sharpening should be the last step in your post-processing workflow. These filters and techniques in Photoshop Sharpening for Beginners will take your image to that next level. It helps to know when and why to sharpen an image and as with all processing, applying correctly makes a huge difference to your final image.

Do you have any other photoshop sharpening for beginners tips you’d like to share? Please do so in the comments!


Camera Color Spaces Explained – sRGB vs Adobe RGB vs RAW

22 Dec

The post Camera Color Spaces Explained – sRGB vs Adobe RGB vs RAW appeared first on Digital Photography School. It was authored by Herb Paynter.


Your camera is probably able to capture color images in a variety of different color containers called “spaces.” These camera color spaces collect colors in one of several size light buckets labeled sRGB, AdobeRGB, and RAW.

Each bucket gathers a slightly larger variety of light, similar to the way Crayola crayons are packaged and sold in increasingly inclusive collections of colors: small, large, and jumbo.

Camera color spaces offer photographers a variety of different size boxes.

Camera color spaces

Scenes that include both brilliant colors and bright lighting are excellent candidates for capture with AdobeRGB color space.
F/3.5, 1/1000, ISO 400, Lumix G Vario 2.8, 35mm

A debate usually arises in the photo community over which camera color space to choose in the camera’s preferences. Some color spaces capture more hues and saturated colors than others, so pictures captured in one space may include more colors than in another.

Each space is ideally suited for certain purposes, and the question of which camera color space to choose needs a bit of explanation. In addition to the capture question, choosing a color space for post-production editing will depend on the image’s ultimate usage.

Your camera’s color spaces involve not just color data, but additional parking space on the drive. Larger color spaces provide more bit-depth (explained below), which occupies more digital real estate on the memory card. So, the choice of which to use does have practical importance.

What camera color space to use

There is no singularly perfect color space choice, so let’s examine which is best for specific situations.

Images that do not include highly-saturated color but contain significant detail in the shadow areas will benefit from RAW format capture and high-bit processing. F/10, 1/1600, ISO 800, Lumix G Vario 2.8, 200mm

Unless the sole purpose of a photo is to display as a high-resolution digital image, you might want to convert the file’s original color space for a less demanding result. However, keep in mind that every time a file mutates from a larger color space to a smaller color space (RAW to AdobeRGB, or AdobeRGB to sRGB), the image’s color intensity and integrity may diminish in the process. Some imaging applications are less demanding than others.

While copies of digital files remain identical in size and intensity to the original no matter how many times they are copied, a digital file converted to a lesser color space always loses some critical color information. Your camera’s color spaces in general, and device color spaces in particular, are all unique; each serves a particular purpose.

The extreme dynamic range and saturated skies benefitted from the RAW capture and editing in AdobeRGB. Detail buried in the shadows was possible because of the 14-bit capture. F/14, 1/300, ISO 3200, Lumix G Vario 2.8, 12mm

It’s a matter of depth

The difference between camera color spaces boils down to an issue called bit depth. Bit depth is a mathematical description of how many visible distinctions between shades of color can be recognized and reproduced by different devices (a techie term for scanners, cameras, computer monitors, and printing machines). Unfortunately, not all devices can reproduce all colors the same (which is the primary stumbling block amidst all color issues).
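The arithmetic behind bit depth is simple: each extra bit doubles the number of distinguishable shades per channel.

```python
# Number of distinct levels per channel at a given bit depth
def levels(bits):
    return 2 ** bits

jpeg_8bit = levels(8)    # 256 shades per channel (a typical JPEG)
raw_14bit = levels(14)   # 16,384 shades per channel (a typical RAW file)
```

Those extra shades are what let you pull detail out of shadows in post without visible banding.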

Every device reads and reproduces color using a different process. While this sounds like a fixable problem, there is a sad and unsolvable reality behind the problem. There are at least three different interpretations of color at play in every capture-display-print cycle.

These colorful seat cushions and deep shadows were captured in RAW format, edited in AdobeRGB, and saved in sRGB for upload to our camera club’s server for display as part of a club field trip slideshow. F/7.1, 1/320, ISO 400, Lumix G Vario 2.8, 19mm

First, cameras capture color by recording intensities of light as electrical signals and interpreting those signals as colors. Each color is assigned a specific number.

Second, these numbers are then sent to the computer. Here, they get translated into another process that interprets those electrical signals into a process that turns on tiny lights (called pixels) on a backlit screen.

And third, those pixels are then sent to a printing machine that instructs those pixel values to spit tiny splatters of colored ink onto paper.

It’s a very complicated process that color scientists have tried for years to make simple. Unfortunately, it just ain’t that simple!

Anyway, during this hair-on-fire digital transition, different methods are employed that utilize the various color spaces in a way that transforms the colors from one device to another as accurately as possible. Sometimes the color translations don’t convey the colors as accurately as we would like, which is why sometimes the monitor colors don’t match the printer colors.

Science uses charts like this to plot the characteristics of camera color spaces. The charts are referred to as “theoretical” because they represent what each color “bucket” can capture versus what the eye can see, rather than colors that can actually be displayed on the page.

The ultimate referee

The only comprehensive color space that plots the full scope of what the human eye can see is what the science community calls L*a*b* space, usually visualized as the familiar inverted-horseshoe diagram.

The human eye is the ultimate arbiter in the color wars, and all device capabilities (camera, display, and printer) are defined by how they match up to the eye’s master gamut. This is why this strange horseshoe shape is referred to as the Reference Space. All other devices, whether camera, display, or printer, can only recognize and utilize portions of this “reference space,” and they usually disagree with each other.

Color is a very diverse and dysfunctional family. Each device speaks a different dialect of a similar language. Each produces colors that cannot be faithfully reproduced on other devices. Color is a very messy topic.

Crayola crayon boxes contain varying numbers of colors just as color spaces collect varying amounts of color. The lightest and darkest color crayons are the same value, but larger boxes contain more colors than smaller ones.

Some devices can express color more completely than others. Unfortunately, no device created by humans can reproduce all the colors that can be seen by humans. Also, the colors captured by one device that fall outside the gamut (Crayola box size) of other devices, get clipped, lost, or compressed during the handoff. Those colors never come back home.
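You can see the “clipped, lost, or compressed” part in miniature. Treating the destination gamut as a simple 0-to-1 range per channel (real conversions involve matrices and rendering intents, so this shows only the clipping idea):

```python
import numpy as np

# A "wide gamut" color with components the destination space can't hold
wide = np.array([1.20, 0.50, -0.05])   # outside the destination's [0, 1] range

# Conversion clips the out-of-gamut components to the destination's limits
clipped = np.clip(wide, 0.0, 1.0)

# The clipping is one-way: nothing in `clipped` records what was cut off,
# so converting back to the wide space cannot restore the original color
```

This is why the article recommends editing in the biggest space your output needs and converting down only at the end.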

This is the tragic truth about digital color reproduction. The trick to color reproduction is in retaining as much of the common color as possible during the process. Fortunately, this same human eye (and brain) are very forgiving about accepting the limitations of non-human devices.

Color reproduction is a true application of the law of diminishing returns and the visual science of physics. Photographers understand this law quite well.

Very rarely can a camera actually capture all the color and dynamics of an original scene. Moreover, nature’s color gamut extends even further than the colors the human eye can identify. Any time a digital image gets transposed from one form into another, that transformation is a diminished-value exchange.

As an image is transferred from one device to another, those pixel values located outside the color gamut of the destination device always get lost in the translation. The object of color management is to mitigate color loss and maintain as much of the appearance of the original as possible, all the way through the reproduction process.

RGB spaces (sRGB, AdobeRGB, ProPhoto RGB)

It all begins with the camera’s color settings that are in place when you capture the scene. All cameras capture light through red, green, and blue filters (RGB color space). There are a number of RGB color spaces to choose from, and each sports a slightly different color gamut.

Each device in the photography chain interprets colors slightly differently, and each responds to the individual color spaces uniquely.

Each color space (sRGB, AdobeRGB, ProPhoto RGB, etc.) provides a unique collection of color attributes, and each space satisfies specific display and reproduction requirements.

Gamuts are descriptions of the range of colors that a device can recognize, record, display, or print.

Shooting a vibrant, saturated scene with the camera requires a larger color space. Using a camera color space with a smaller gamut could significantly diminish the raw, harsh emotion of the scene. This is why most photography experts encourage photographers to set their cameras to capture images in AdobeRGB.

sRGB

Almost all digital cameras are factory-set to capture colors using sRGB as the default color space, for a plausible reason: most of the pictures we take never get printed! At best, we view them on computer monitors or social media. Quite honestly, most of the pictures we capture never make it past the initial glance at the camera’s LCD screen. Capturing those images in a higher-bit color space is a total waste of disk space.

The sRGB color space has remained largely unchanged since HP and Microsoft defined it in the mid-1990s, building on primaries inherited from broadcast-video standards. While the format has been updated slightly, the basic intent is the same.

sRGB was developed by HP and Microsoft (and others) in the mid-1990s to match the color gamut of typical CRT monitors, which inherited their characteristics from television, and the standard was set long ago. The airwaves and Internet browsers live on an sRGB diet. As such, the sRGB color space still standardizes the way images are viewed on monitors and televisions.

Adobe RGB

If the ultimate destination for your picture is a monitor or display (presentations, the Internet, or television), sRGB is probably the best capture choice. However, if you shoot for print on paper, both AdobeRGB 1998 and ProPhoto RGB contain a wider gamut of colors and are thus better suited for preparing images for print.

The brilliant dynamics and saturated colors are always captured best in the deepest color bucket of all – RAW. The degree of adjustments provided by RAW capture and ProPhoto RGB editing is perfect for images like this. F/6.3, 1/800, ISO 400, Lumix G Vario 2.8, 26mm

RAW

Actually, the ideal bucket for capturing images exceeds the gamuts of all three of these camera color spaces. I’m speaking, of course, of your camera’s ability to capture images in RAW format, which supersedes any defined color space.

RAW files capture color at the highest bit depth your camera offers, up to 14 bits per color. RAW is not an acronym; it is more of a description: the recording of the full color depth and uncompressed dynamic range of the original scene. Start RAW and strip down from there.

Camera color spaces explained – Conclusion

Congratulations on sticking with this article through all the minutia.

By now, it probably seems like camera color space is more like outer space, but it doesn’t have to remain this technical. Simply remember to capture images in RAW format (perhaps in addition to capturing them as JPG) and then transform the colors down the chain of reproduction as the need dictates.

Edit images in the camera color spaces of ProPhoto RGB or AdobeRGB to retain as much color elbow room as necessary. Transpose images destined for print to AdobeRGB, and reduce those destined for the Internet or slideshows to sRGB. Simple enough!


Photoshop Adjustment Layers Explained and How to Use Them (Part 2)

08 Dec

The post Photoshop Adjustment Layers Explained and How to Use Them (Part 2) appeared first on Digital Photography School. It was authored by Nisha Ramroop.


Part 1 of How to Use Photoshop Adjustment Layers introduced the first eight adjustment-layer editing tools, which allow you to work non-destructively. Here, we continue with some of the other tools available as Adjustment Layers.

1. Photo Filter

Did you know that there are colored filters you can place in front of your camera lens to alter the color temperature and balance of your final image? The Photo Filter adjustment layer adds a similar color filter to your image.

There are many preset photo filters in Photoshop, but the most common are those that make your image warm or cool. You can further tweak each preset to your liking. For instance, you can change the density of the effect easily using the Density slider. There is also the Preserve Luminosity box to check so that the applied filter does not darken your image.

You can also choose an exact color to overlay as a filter by clicking on “Color” and choosing from the color menu, or by using the eyedropper tool to choose a color from your image.

Warm (oranges) and Cool (Blues) Photo filters applied to the image above
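Ignoring the Preserve Luminosity option, the filter is essentially a density-weighted blend between your pixel and the filter color. The orange value below is picked purely for illustration, not Adobe’s exact warming-filter color:

```python
import numpy as np

def photo_filter(img, filter_color, density):
    """Blend a colored filter over an image at the given density (0-1).
    A simplified model of the Photo Filter adjustment layer; it does not
    model the Preserve Luminosity option."""
    return (1.0 - density) * img + density * filter_color

pixel = np.array([0.5, 0.5, 0.5])        # a neutral gray pixel (R, G, B)
warming = np.array([0.93, 0.66, 0.27])   # an illustrative warm orange
filtered = photo_filter(pixel, warming, 0.25)  # 25% Density
```

At 0% density the image is untouched; as you push the Density slider up, the pixel slides toward the filter color, which is exactly how the slider feels in Photoshop.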

2. Channel Mixer

The Channel Mixer Photoshop Adjustment Layer is another great tool to create stunning black and white and tinted images.

The principle is similar to that used by the Black and White Adjustment Layer. In each of these, you can adjust the displayed grayscale image by changing the tonal values of the color elements of the image.

There are three channels in the RGB view: red, green, and blue. Note: the source channel is the one that defaults to 100%. The Channel Mixer allows you to combine and mix the best of each channel by adding (or subtracting) grayscale data from your source channel to another channel.

Also, of note, adding more color to a channel gives you a negative value and vice versa. Hence, at the end of your edit, it is advisable that all your numbers total 100%.
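To make the arithmetic concrete, here is a minimal sketch (in Python, with hypothetical example weights) of what a monochrome Channel Mixer conversion does: each output gray value is a weighted sum of the red, green, and blue channels, and weights that total 100% keep the overall brightness unchanged.

```python
def channel_mix_gray(r, g, b, red_pct=40, green_pct=40, blue_pct=20):
    """Mix one pixel's RGB values (0-255) into a single gray value
    using Channel Mixer-style percentages. Weights totaling 100%
    preserve the overall brightness."""
    gray = (r * red_pct + g * green_pct + b * blue_pct) / 100
    return max(0, min(255, round(gray)))  # clamp to the 8-bit range
```

For a pixel of (200, 100, 50), the mix above yields (200×40 + 100×40 + 50×20) / 100 = 130.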


The Channel Mixer also allows you to exaggerate color and make creative color adjustments to your image.

3. Color Lookup

The Color Lookup adjustment layer uses presets to instantly color grade or change the “look” of your image. The presets are called LUTs or lookup tables. Each lookup table contains specific instructions for Photoshop to remap the colors in your image to a different set of colors to create the selected look.

Applying the Late Sunset LUT creates a dramatic finish

When you choose the Color Lookup Adjustment Layer, three options are available to you: 3DLUT File, Abstract and Device Link.

Most of the presets reside under the 3DLUT File option. Of note, 3D (in 3DLUT) refers to Photoshop’s RGB color channels (and not three-dimension).

Late Sunset LUT applied at 60% opacity for a more realistic finish

LUTs are also available for download from various websites, or you can create your own.

4. Invert

The Invert Photoshop Adjustment Layer is self-explanatory. It inverts the colors and is an easy way to make a negative of your image for an interesting effect.

The first image with colors inverted gives a surreal otherworldly effect

5. Posterize

Looking for a flat, poster-like finish? The Posterize Adjustment Layer gives you that by reducing the number of brightness values available in your image.

You can make an image have as much or as little detail as you like by selecting the number in the levels slider. The higher the number, the more detail your image has. The lower the number, the less detail your image has.

This can come in handy when you want to screenprint your image. You can limit the tones of black and white. This is also true of the Threshold Adjustment Layer.

Posterize Adjustment Layer
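The underlying idea can be sketched as simple uniform quantization of an 8-bit channel (a hypothetical approximation; Photoshop's exact rounding may differ):

```python
def posterize(value, levels):
    """Quantize an 8-bit brightness value (0-255) to a limited
    number of evenly spaced levels, as a Posterize adjustment does."""
    step = 255 / (levels - 1)                 # spacing between allowed levels
    return round(round(value / step) * step)  # snap to the nearest level
```

With 2 levels, every tone snaps to pure black or pure white; with 4 levels, mid-gray tones snap to 85 or 170.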

6. Threshold

When you select Threshold from your Photoshop Adjustment Layers list, your image changes to pure black and white. The Threshold Level value sets the cutoff: pixels brighter than that level render as white, and darker pixels render as black.

Threshold Adjustment Layer

7. Gradient Map

The Gradient Map lets you map different colors to different tones in your image. The gradient fill, therefore, sets the colors representing both the shadow tones on one end and highlight tones on the other end of the gradient.

Likewise, checking the “Reverse” box swaps around the colors of your gradient. This means that the shadow colors are moved to the highlights end and vice versa.

A good rule of thumb is to keep your shadows dark and your highlights brighter for ease of reference.


Your Gradient Map also makes many presets available, adjustable via the Gradient Editor window. You can also define and create your own gradients by changing the slider colors.

8. Selective Color

Use the Selective Color Adjustment Layer to modify specific amounts of a primary color without modifying other primary colors in your image. Check the Absolute box if you want to adjust the color in absolute values.

Example: If you have a pixel that is 50% yellow and you add 10%, you are now at a 60% total. The Relative box is a little more complicated as it would adjust the yellow pixel only by the percentage it contributes to the total. Using the same example, if you add 10% to the yellow slider (with relative checked), it actually adds 50% of the 10%, which brings your total to 55%. Relative, therefore, gives you a more subtle effect.
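The arithmetic in that example can be sketched with a hypothetical helper (not Photoshop's actual implementation):

```python
def selective_color(current_pct, slider_pct, absolute=True):
    """New ink percentage after a Selective Color move.
    Absolute adds the slider amount directly; Relative scales the
    slider amount by the color's existing contribution."""
    if absolute:
        new_pct = current_pct + slider_pct
    else:
        new_pct = current_pct + slider_pct * current_pct / 100
    return max(0.0, min(100.0, new_pct))  # ink stays within 0-100%
```

Starting from 50% yellow and moving the slider by +10%, Absolute gives 60% while Relative gives 55%, the subtler result.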


However, the potential of this editing tool goes far beyond this simplistic technique. You can use it to correct skin tones and for general toning.

While Selective Color adjustments are similar to Hue/Saturation adjustments, there are subtle differences. Selective Color allows you to subtract or add color values, whereas Hue/Saturation does not.

The Hue/Saturation adjustment, on the other hand, lets you work with ranges of hues beyond the six color ranges in Selective Color, so there is more control there if you need it.

Conclusion

These basic examples of how to use the Photoshop Adjustment Layers tools merely scratch the surface of their capabilities. Certainly, you will appreciate editing non-destructively, whether you are just starting out or advanced with adjustment layers.

Some of the adjustment layers seem similar, but each has its differences, pros, and cons. Either way, there are many possibilities for experimenting with your image while preserving the original.

If you haven’t already, be sure to check out Part 1 in this series.

Do you use Photoshop Adjustment Layers? If so, which ones do you use and why? Share with us in the comments.

The post Photoshop Adjustment Layers Explained and How to Use Them (Part 2) appeared first on Digital Photography School. It was authored by Nisha Ramroop.


Digital Photography School

 
Comments Off on Photoshop Adjustment Layers Explained and How to Use Them (Part 2)

Posted in Photography

 

Photoshop Adjustment Layers Explained and How to Use Them (Part 1)

02 Dec

The post Photoshop Adjustment Layers Explained and How to Use Them (Part 1) appeared first on Digital Photography School. It was authored by Nisha Ramroop.


If you use Photoshop, you probably already know that layers are a great non-destructive way to edit. Within the realm of layers, there exists a group of very useful editing tools called Adjustment Layers that allows for easy editing of your images. As with most Photoshop tools, there are several ways to achieve the same result.

When you use Photoshop adjustment layers (as with other layer types), you can make changes, save the file as a Photoshop document (PSD) and undo or change the edits many years later. Since no pixels are destroyed or changed, your original image stays intact.

Let's take a look at the basics of using Photoshop Adjustment Layers.

Accessing Photoshop Adjustment Layers

There are two ways to access Photoshop Adjustment Layers.

1. Via the Layer menu: choose Layer->New Adjustment Layer, and choose one of the many adjustment types (which are expanded upon below).


2. Via the Layers panel: click on the half-black/half-white circle at the bottom of the Layers panel, and choose the adjustment type you want to work with.


Adjustment Layer Types

1. Brightness and Contrast

Brightness and Contrast allow you to make simple adjustments to the brightness and contrast levels within your photo. When you adjust brightness, the overall lightness (or darkness) of each pixel in your frame is changed. To increase a photo’s tonal values and increase the highlights, slide the Brightness to the right. To decrease a photo’s tonal values and increase the shadows, slide the Brightness to the left.

Contrast, however, adjusts the difference between the brightness of the elements in your image.  Thus, if you increase brightness you make every pixel lighter, whereas if you increase contrast you make the light areas lighter and the dark areas darker.


2. Levels

The levels tool adjusts the tonal range and color balance of your image. It does this by adjusting the intensity levels of the shadows, mid-tones, and highlights in your image. Levels Presets can be saved and then easily applied to further images.

Of note, if you use the Image menu to open the levels tool (Image->Adjustments->Levels) a separate layer will not be created and the changes will be committed directly (destructively) to your image layer. Thus, I recommend using the Adjustment Layers menu (as shown above)  to access this very useful tool.


3. Curves

While the Levels adjustment allows you to adjust all the tones proportionally in your image, the Curves adjustment lets you choose the section of the tonal scale you want to change. On the Curves graph, the upper-right area represents the highlights, while the lower-left area represents the shadows.

Use either of these adjustments (levels or curves) to correct your tone when your image’s contrast is off (either too low or high).

The Levels Adjustment works well if you need to apply a global adjustment to your tone. To apply more selective adjustments, you are better off using Curves. This includes adjustments to just a small section of the tonal range or if you only want to adjust light or dark tones.
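Conceptually, a curve is just a tone-mapping function. Here is a minimal sketch, assuming linear interpolation between hypothetical control points (Photoshop actually fits a smooth spline through the points you drag):

```python
def apply_curve(tone, points):
    """Map an input tone (0-255) to an output tone by linearly
    interpolating between (input, output) control points."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= tone <= x1:
            return y0 + (y1 - y0) * (tone - x0) / (x1 - x0)
    return float(tone)  # outside the defined range, leave unchanged

# A gentle S-curve: shadows pulled down, highlights pushed up,
# midtones anchored -- i.e., more midtone contrast.
s_curve = [(0, 0), (64, 48), (128, 128), (192, 208), (255, 255)]
```

Applying this curve darkens a shadow tone of 64 to 48 while leaving the midpoint 128 untouched, which is exactly the kind of selective change Levels alone cannot make.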


4. Exposure

When you think of exposing an image properly, you are concerned with capturing the ideal brightness, which will give you details in both the highlights and shadows. In Photoshop Adjustment Layers, the Exposure Adjustment has three sliders that adjust Exposure, Offset and Gamma.

Use the Exposure slider to adjust the highlights of the image, the Offset slider to shift the shadows and midtones, and the Gamma slider to fine-tune the midtones.


5. Vibrance

Use the Vibrance Adjustment Layer to boost the duller colors in your image. The great thing about increasing vibrance is that it focuses on the less-saturated areas and does not affect colors that are already saturated.

Vibrance adjusts only the duller colors in an image


Look at the difference in the greens between this image and the one above. Saturation adjusts all the colors (and tonal range) in an image.

6. Hue/Saturation

Hue/Saturation allows you to change the overall color hue of your image, as well as how saturated the colors are.

You can change the hue (color) of your entire image by keeping “Master” selected in the dropdown (this is set by default). Alternatively, you can pinpoint the color you would like to change the hue of. You can choose from Reds, Yellows, Greens, Cyan, Blues or Magentas.

In addition to adjusting the obvious hue and color saturation of your image, this Photoshop Adjustment Layer allows you to adjust the lightness of your entire image as well as work with specified colors. Keep in mind that changing the overall saturation of an image affects your tonal range.

Use the Hue Adjustment to get creative

7. Color Balance

The Color Balance Adjustment layer is used to change the overall mixture of colors in an image and works well for color correction.


Color Balance adjusted for the mid-tones to include more red

You first need to select either Shadows, Midtones or Highlights, to choose the tonal range you want to change.

Check the Preserve Luminosity box to preserve your luminosity values (brightness or darkness) and maintain the tonal balance as you change the color in your image. Move your slider toward the color you want to increase and away from the color you wish to decrease.

8. Black and White

As the name implies, the Black and White adjustment layer allows you to easily convert your images to grayscale or apply an overall color tint.

There are many ways to achieve black and white image processing. The Black and White Photoshop Adjustment Layer is one of the better ones. It allows you to lighten or darken specific color ranges to enhance your black and white conversion. Example: If you want the blues of your color image to stand out more when converted to black and white, simply toggle that slider. You can add more or less contrast by making particular colors lighter or darker.


1. When you choose the Black & White Adjustment Layer, you get a default black and white conversion. 2. You can tweak the image based on selective colors; in this example, the blues and yellows were adjusted. 3. You can apply a tint (of any color) over the entire image by ticking the Tint box and selecting the color you wish to overlay.

Important Note: While most of these adjustments are available under the Image menu (Image->Adjustments), using them from there does not work the same. The main difference is that these are applied directly to the image (destructively) as opposed to when done under Adjustment Layers. When done under Adjustment Layers, you can turn the adjustment on and off by selecting and deselecting the “eye” in the layers panel.

Conclusion

Photoshop Adjustment Layers are a great group of tools that allow you to smartly edit your image in a non-destructive way. Your original pixels are preserved, so you are able to come back and change your edits years later. Thus, they give you the power to undo easier and work more efficiently.

Photoshop Adjustment Layers group together the most common editing tasks, along with a few others to help you bring your images to life.

In Part 2, we will explore some other tools in the Adjustment suite.

Share with us in the comments your favorite adjustment tool and how you use them.

The post Photoshop Adjustment Layers Explained and How to Use Them (Part 1) appeared first on Digital Photography School. It was authored by Nisha Ramroop.



Video: Google’s Super Resolution algorithm explained in three minutes

30 May

Space constraints in the thin bodies of modern smartphones mean camera engineers are limited in terms of the size of image sensors they can use in their designs. Manufacturers have therefore been pushing computational imaging methods in order to improve the quality of their devices’ image output.

Google’s Super Resolution algorithm is one such method. It involves shooting a burst of raw photos every time the shutter is pressed and takes advantage of the user’s natural hand-shake, even if it is ever so slight. The pixel-level differences between each of the frames in the burst can be used to merge several images of the burst into an output file with optimized detail at each pixel location.

An illustration that shows how multiple frames are aligned to create the final image.

Google uses Super Resolution in the Night Sight feature and Super-Res zoom of the Pixel 3 devices and has previously published an in-depth article about it on its blog. Our own Rishi Sanyal has also had a close look at the technology and the features it has been implemented in.

A visual representation of the steps used to create the final image from a burst of Raw input images.

Now Google has published the above video that provides a great overview of the technology in just over three minutes.

‘This approach, which includes no explicit demosaicing step, serves to both increase image resolution and boost signal to noise ratio,’ write the Google researchers in the paper the video is based on. ‘Our algorithm is robust to challenging scene conditions: local motion, occlusion, or scene changes. It runs at 100 milliseconds per 12-megapixel RAW input burst frame on mass-produced mobile phones.’

Articles: Digital Photography Review (dpreview.com)

 

Image Resolution Explained – Seeing the Big Picture

28 May

The post Image Resolution Explained – Seeing the Big Picture appeared first on Digital Photography School. It was authored by Herb Paynter.

The very first thing you must understand about photography is that it is totally based on illusion; you choose to believe what you perceive. This concept didn't originate with photography's pixels and dots; it is the very basis for human sight. Your brain chooses to believe something to be true well beyond what your eyes can verify or recognize to be true.

The very word "resolution" speaks to this concept. The resolving power of a lens is its ability to distinguish small elements of detail. The same issue applies to the human eye and its perception of images on a computer screen and the printed page.

Each of these "interpretations" relies on a mechanism to carry out an illusion. The eye's mechanism is rods and cones, cameras use photoreceptors, computer screens use pixels, and printing machines use spots and halftone dots. The degree to which each device succeeds in its illusionary quest depends on the resolution of the mechanism and the resolving power of the device.

Each system requires two elements: a transmitter and a receiver. Just as a magic trick requires both a salesman (the magician) and a customer (the viewer), each "visual" process requires a good presenter and a willing observer. The common phrases "seeing is believing" and "perception is reality" pretty much define the benchmark of success. Now let's get image resolution explained and show you where it is most effectively used.

Image resolution

There comes a finite distance when viewing any image at which your eye can no longer distinguish individual colors. Beyond that point, your brain must sell the idea that detail still exists. The detail you see when viewing an object at close range continues to be perceived long after that object is too far away for your eye to verify it. There are limitations to the resolving power of the human eye, with "normal" defined as 20/20 vision.

In the image reproduction process, delivering an image with excess resolution becomes useless when the result of that extra resolution has no purpose. Thus, the gauge of all visual resolution must ultimately be framed by resolving capabilities of the human eye. Producing more image resolution than the eye can perceive doesn’t increase the detail or improve the definition, it just creates bigger files.

While you feel more confident when you pass massive amounts of pixels on to your printer, your printer doesn’t appreciate the excess. It throws all those extra pixels away. More ain’t better; it’s just more.

Dots, Pixels, Lines, and Spots

Beware of the numbers game that is played by manufacturers in the imaging industry. There is ample misinformation and misused terminology floating around that causes significant confusion about imaging resolution. Allow me to clarify some very foggy air beginning with terminology.

DPI (Dots per inch)

The term DPI is probably the most misconstrued acronym in the digital imaging world, as it is loosely applied to just about every device. DPI, or dots per inch, refers to a printing device's resolution and describes the dots and spots that each technology uses in various combinations to simulate tones. Dots are neither pixels nor halftone dots. We'd all be a bit better off not using this term, as it has little practical application.

PPI (pixels per inch)

The basic structure of every digital image is the pixel. Pixels are the square blocks of tones and colors that you see when images are enlarged on computer screens (see the Eye illustration below). The measure of those pixels (typically in a linear inch) determines an image’s resolution and should always be addressed as PPI, or pixels per inch. This setting is affected by the Image Size dialog box in editing software. The higher the number of pixels in an inch, the higher the image resolution. Scanners, digital cameras, and paint programs all use the PPI terminology.

Of all the resolution terms in the industry, this is one that deserves top billing. While the rest of the terms need to be recognized, rarely will they have to enter the conversation.

When viewed in imaging software, these squares are referred to as pixels and should be defined in values of pixels per inch (PPI). This particular dialog defines the size of the “Eye” picture in this article. Internet images are defined by pixel count and concern the linear measurement of horizontal pixels in the image.

LPI (lines per inch)

LPI refers to the halftone dot structure used by laser printers and the offset printing process to simulate the continuous tones of photographic images. LPI refers to the number of “lines” of halftone dots used by various printing processes. “Lines” is a throwback reference to the days when actual lines were etched in glass plates to interpret photographic tones in early printing processes.

This LPI number is specific to the printing industry. Lower numbers refer to larger, more visible halftone dots (newspapers) while higher numbers refer to much smaller and less visible dots (magazines and artwork). I’ll get into the numbers later.

Spots and SPI (spots per inch)

A spot is a rarely used term that applies to both inkjet and imagesetter processes. With inkjet, it is the measure of the micro-droplets of ink sprayed during printing. SPI, or spots per inch, is a user-selectable resolution setting on some inkjet printers. Higher SPI affects print quality by slowing the speed at which the paper feeds through the printer. The spot "marking" size of platesetters and imagesetters determines the quality of the halftone-dot shapes produced and only applies to high-end lithographers and service bureaus.

Device real-world requirements for optimal resolution

Now we’ll look at each device’s real-world requirements for optimal resolution. How much is too little and how much is too much? The answers require a bit of explanation because there are some variables involved in the projects and the printing devices. First I’ll clarify some misconceptions about digital camera files, then I’ll address three specific printing technologies and give you some concrete examples.

Digital Cameras

The most common reference to camera resolution relates to the camera's image sensor. These sensors contain a grid of cells called photosites, each cell measuring the light value (in lumens) striking it during an exposure. The actual number of cells contained in an image sensor varies depending on the camera model. When the number of horizontal cells gets multiplied by the number of vertical cells on the sensor, the "size" of the sensor is defined. The Nikon D300 sensor, for example, measures 4,288 x 2,848, or 12,212,224 pixels, making it a 12.3 mega (million) pixel camera.

The individual cells in the image sensor are covered by either a red, green, or blue filter called a Bayer array. Each cell records the filtered light, converting the combined values into individual pixel colors.

These pixels can produce any number of different size pictures for various purposes. Each printing process requires a different number of pixels per inch (PPI) to deliver optimal quality prints at a given size. This is because the technology used for each type of printing is different. For example, high-quality inkjet printers spray liquid inks onto paper using very small nozzles (usually 1440 spots per inch).
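The relationship between sensor pixels, megapixels, and print size reduces to simple arithmetic, sketched here with hypothetical helper functions:

```python
def megapixels(width_px, height_px):
    """Sensor resolution in megapixels from its pixel dimensions."""
    return width_px * height_px / 1_000_000

def max_print_inches(width_px, height_px, ppi):
    """Largest print, in inches, a file supports at a given output PPI."""
    return width_px / ppi, height_px / ppi
```

A 4,288 x 2,848 sensor works out to about 12.2 megapixels, which at 300 PPI covers roughly a 14.3 x 9.5-inch print.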

Laser printers

Most laser printers are either 600 or 1200 dpi devices meaning that a solid line printed horizontally will be composed of either 600 or 1200 dots. Type is printed using all these dots while halftone images can be effectively reproduced from 220-300 pixel-per-inch (PPI) images.

Inside these laser printers is a raster image processor (RIP) that generates halftone dots from square pixels. The value of each image pixel gets transposed into a halftone cell. The formula for exchanging this grid of square pixels into a diagonal pattern of variable-size dots goes way beyond explanation in this article, but it’s kind of like magic.

Laser printers simulate gray tones using the halftoning process provided by the printer’s RIP.

Inkjet printers

Inkjet printers use totally different technology to translate color pixels into printed images. Tiny spray nozzles distribute ink to specific parts of the image to deliver their version of the imaging illusion. The resolution (PPI) required to deliver accurate inkjet images differs from laser printers. This is because they do not use the geometric mechanism of halftone cells but instead, spray microscopic amounts of each ink to precise locations as determined by the pixel values.

Inkjet printers require significantly fewer pixels per inch (PPI) than laser printers to carry the illusion. Typically 150-200 PPI is quite sufficient.

Lithographic printing

Offset printing includes newspapers, magazines, and brochures. Each requires a slightly different lines-per-inch (LPI) pattern of dots. Newspapers are typically 85 LPI, magazines are 150 LPI, and high-end brochures and other collateral material require up to 200 LPI resolution.

Each line screen value is produced by a different PPI formula. While all these types of printing can be produced from 300 PPI files, all that resolution is certainly not required and is technically overkill. Even those high-end brochures technically don’t require this much resolution, but the early-adopted myth of 2xLPI persists yet today. The actual requirement for all high-end printing is only 1.4xLPI. Any more resolution simply gets discarded by the platesetter’s RIP.

In this calculation, newspapers (85LPI) need only 120 PPI, magazines require only 212 PPI, and even the best quality print is ideally produced with just 283 PPI.

In case you’re thinking that this is splitting hairs and irrelevant, consider this… using the 1.4 rule totally meets the mathematical requirement and saves a whopping 50% of the file size in storage real estate and transfer time.

I fully expect to hear some pushback about these numbers, but science and math don’t lie. Phobias about resolution are long entrenched, respected, and expected. However, in the end, it really doesn’t matter that much.

No-nos

There are two unforgivable sins in preparing your images for proper resolution. Low-res and up-res.

Low-res

The biggest sin of all is sending files to the printer/publisher with too little resolution.

That is a certain formula for poor results and shows up in the form of soft detail and bitmapped edges caused by normal sharpening.

Every form of print technology requires a minimum of pixels to produce fully-detailed and sharp images. So do not shortchange your project in this respect.

Remember, size your images for the final appearance and assign the PPI at that final size. If you want to see an 8”x10” image appear in print, make sure you address the issue of PPI in the Image Size dialog and before you save the file.

Monitor the Image Size dialog carefully when you make changes. Resample an image while watching the Image Size figure at the top of the dialog. Try to never let it increase. You can get away with a small increase but do so only when necessary.

Up-res

Make it a rule never to increase your image size as it is a sure-fire recipe for disaster. You can’t create detail; you can only destroy it. Whatever size file (pixel count) you begin with is the largest pixel count you should print unless you’re okay with soft images.

Pixels are not rubber, and you cannot stretch them to a larger size without sacrificing the sharpness of the image. Your digital camera most likely provides you with ample original pixels to print most projects, try to stay within that original ratio.

You can increase the image size, but you can’t increase its detail. Every time you enlarge an image, you distort the pixels. So if you want to print sharp images, don’t enlarge them!

The major advantage to maintaining higher resolution files for an archive is that if an image ever needs to be cropped or enlarged, that extra resolution will undoubtedly come in handy.

It remains standard operating procedure in the printing industry to send all files to the printer with 300 PPI resolution. Cloud services, backup systems, and storage media sales folks certainly want you to continue the 300 PPI trend and rent more parking space on their sites.

Final thought

Make it your goal to make the best of this visual illusion called photography. Your camera, your computer, and your printer provide all the tools you need to perform your magic with great success. Enjoy.

 

The post Image Resolution Explained – Seeing the Big Picture appeared first on Digital Photography School. It was authored by Herb Paynter.



Full Frame VS Crop Sensor VS Micro Four Thirds: Camera Sensors Explained

05 Dec

The post Full Frame VS Crop Sensor VS Micro Four Thirds: Camera Sensors Explained appeared first on Digital Photography School. It was authored by Kunal Malhotra.


‘DSLR Camera’, ‘Full-Frame’, ‘Crop Sensor’: three terms that are prevalent in virtually every discussion involving photography. The two terms used to classify the sensor size of a DSLR camera are ‘Full-Frame’ and ‘Crop-Sensor.’ A Full-Frame camera has a sensor size equivalent to the 35mm film format, whereas a Crop-Sensor camera has a sensor smaller than a full-frame sensor (the 35mm film format).

Micro-Four-Thirds (4/3) is a relatively new format (and term). First introduced around 2008, this sensor is slightly smaller and more compact. However, owing to a variety of factors, this format is now considered almost equal to, if not better than, the Crop-Sensor format.

Apart from the physical size difference, there are several other points of difference between a full-frame sensor, a crop-sensor, and a micro-four-thirds sensor. Let’s take a look at a comparison between them under the following characteristics, to get an accurate understanding of their differences.

Crop Factor

As mentioned above, a full-frame camera has a 35mm sensor based on the old film-format concept. Whereas, a crop-sensor (also called APS-C) has a crop factor of 1.5x (Nikon) or 1.6x (Canon). Micro-Four-Thirds are even smaller sensors having a crop factor of 2x.

This crop factor also directly affects our field of view. Simply put, an APS-C sensor would show us a cropped (tighter) view of the same frame as compared to a full-frame sensor, and a Micro-Four-Thirds sensor would show an even tighter (more cropped) output of the same frame.

LEFT: Photo taken using a Full-Frame camera. CENTER: Photo taken using a Crop-Sensor camera. RIGHT: Photo taken using a Micro-Four-Thirds camera.

Focal Length

The field of view obtained with different sensors is also directly associated with crop factor. The focal length marked on any given lens is based on the standard 35mm film format. Whenever we use a crop-sensor camera, its sensor crops out the edges of the frame, which effectively increases the focal length (strictly speaking, it narrows the field of view; the lens's actual focal length never changes). This is not the case with a full-frame sensor, as no cropping of the full-frame field of view is involved.

For example, in the Nikon eco-system, a crop-sensor camera such as the D5600 has a ‘multiplier factor’ of 1.5x. Thus, if I mount a 35mm f/1.8 lens on my Nikon D5600, it would multiply the focal length by 1.5x, thus effectively giving me a focal length output of around 52.5mm. If you mount the same lens on a full-frame Nikon body such as the D850, it gives an output of 35mm.

Similarly, if you mount a 35mm lens on a Micro-Four-Thirds sensor, which has a crop factor of 2x, it effectively doubles the focal length obtained to around 70mm.
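The examples above reduce to one multiplication, sketched here in Python:

```python
def effective_focal_length(lens_mm, crop_factor):
    """Full-frame-equivalent field of view for a lens on a given sensor.
    Typical crop factors: 1.0 full-frame, ~1.5 Nikon APS-C,
    ~1.6 Canon APS-C, 2.0 Micro Four Thirds."""
    return lens_mm * crop_factor
```

A 35mm lens thus behaves like a 52.5mm lens on a Nikon APS-C body and like a 70mm lens on Micro Four Thirds.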


LEFT: Photo taken at 35mm on a full-frame camera. CENTER: Photo taken at 35mm on a crop-sensor camera. RIGHT: Photo taken at 35mm on a Micro Four Thirds camera.
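This 'multiplier' arithmetic is easy to sketch in code. Here is a minimal Python illustration using the crop factors quoted above (the function and dictionary names are just for illustration):

```python
# Full-frame-equivalent focal length = marked focal length x crop factor.
CROP_FACTORS = {
    "full_frame": 1.0,         # 35mm format, no crop
    "aps_c_nikon": 1.5,        # Nikon DX
    "aps_c_canon": 1.6,        # Canon APS-C
    "micro_four_thirds": 2.0,  # Olympus / Panasonic
}

def equivalent_focal_length(focal_length_mm, sensor):
    """Return the full-frame-equivalent focal length for a given sensor."""
    return focal_length_mm * CROP_FACTORS[sensor]

# A 35mm lens behaves like:
print(equivalent_focal_length(35, "full_frame"))         # 35.0
print(equivalent_focal_length(35, "aps_c_nikon"))        # 52.5
print(equivalent_focal_length(35, "micro_four_thirds"))  # 70.0
```

The same arithmetic works for any lens: a 50mm lens on a Nikon DX body frames like a 75mm lens would on full frame.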

Depth of Field

The f-stop marked on a lens is a simple ratio and does not change with sensor size, but the depth of field it produces does. So, as with focal length, a 'multiplier effect' is applied to the f-stop when comparing depth of field across sensors. As we know, the f-stop or aperture is the single most important factor affecting depth of field.

Thus, a Micro Four Thirds camera gives us a deeper (less shallow) depth of field at equivalent framing than a full-frame camera. For example, an image shot at f/1.8 on a Micro Four Thirds camera shows roughly the same depth of field as an image shot at f/3.6 on a full-frame camera, while f/1.8 on a crop-sensor camera is roughly equivalent to f/2.7 on full frame. This assumes the effective focal length and other shooting conditions are the same.
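As a back-of-the-envelope sketch of that equivalence rule of thumb (assuming equivalent framing and subject distance; the helper name is hypothetical):

```python
# Depth-of-field equivalence rule of thumb:
# full-frame-equivalent f-stop = marked f-stop x crop factor.
def dof_equivalent_f_stop(f_number, crop_factor):
    """Return the full-frame f-stop that gives similar depth of field."""
    return f_number * crop_factor

# f/1.8 on Micro Four Thirds (2x crop) renders depth of field
# like f/3.6 does on full frame:
print(dof_equivalent_f_stop(1.8, 2.0))            # 3.6
# ...and f/1.8 on a Nikon APS-C body (1.5x crop) is roughly
# like f/2.7 on full frame:
print(round(dof_equivalent_f_stop(1.8, 1.5), 2))  # 2.7
```

Note this is only a comparison shorthand; the lens still transmits light according to its marked f-stop.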

Low Light Performance

Generally, full-frame cameras provide not only better low-light and high-ISO performance but also better dynamic range. Combined, these factors produce a much better image than most crop-sensor cameras can achieve.

Full-frame cameras capture the most light and will almost always outperform an APS-C or Micro Four Thirds body under low-light conditions. Micro Four Thirds sensors struggle in low light when the ISO needs to be cranked up above, say, 2000.

For these reasons, despite full-frame camera kits being expensive, bulky and heavy to carry around, they are still industry-standard and the preferred cameras for virtually all professional photography work.

Conclusion

Thus, while full-frame DSLRs remain the industry standard even today, we cannot ignore the undeniable advantages of Micro Four Thirds cameras. Micro Four Thirds bodies, such as the Olympus E-P5 and the Panasonic GH5, are affordable and easy to carry around, giving a much larger group of people (hobbyists and enthusiasts rather than professionals) access to DSLR-like shooting at a fraction of the price.

Ultimately, factors such as your budget and intended use will determine whether you choose a full-frame, crop-sensor, or Micro Four Thirds camera.

Read more info on sensors here.

The post Full Frame VS Crop Sensor VS Micro Four Thirds: Camera Sensors Explained appeared first on Digital Photography School. It was authored by Kunal Malhotra.


Digital Photography School

 
Comments Off on Full Frame VS Crop Sensor VS Micro Four Thirds: Camera Sensors Explained

Posted in Photography

 

A Few Common Photography Abbreviations Explained

26 Jun

For a visual medium, photography is a bit of an alphabet soup when it comes to abbreviations. AWB, DOF, RGB… Even for the seasoned photographer, photographic abbreviations can be a confusing encounter. Here are a few of the most common photography abbreviations to help you tell your TIFFs from your TTLs.

A

Aperture Priority, commonly abbreviated to A or Av (for Aperture Value), is a setting on your camera that allows you to adjust the aperture value (otherwise known as the f-number or f-stop) while the camera automatically selects a shutter speed to produce a correctly exposed image.

As you adjust the aperture for different photographic effects, the camera’s internal light meter measures the lighting conditions of the scene and adjusts the shutter speed accordingly.

Read more here: Getting off Auto – Manual, Aperture and Shutter Priority modes explained

AF

AF is an abbreviation for autofocus. The AF feature automatically adjusts the camera lens to focus on a subject, creating a sharp image.

There are several types of AF focus modes. Single focus, known as AF-S (Nikon) or One-Shot AF (Canon), locks focus on a subject and won't re-focus while you keep the shutter button half-pressed. Continuous or tracking focus, AF-C (Nikon) or AI Servo (Canon), on the other hand, continuously readjusts the focus while the shutter button is half-pressed, maintaining focus on moving subjects. Some cameras also have a mode, AF-A (Nikon) or AI Focus AF (Canon), that switches between the two automatically.

Read more here: 5 Beginner Tips for More Autofocus Success

Auto

Auto is short for automatic and is sometimes signified by a small green rectangle on the camera’s shooting mode selector wheel. In this mode, the camera calculates and adjusts all camera settings for correct exposure, taking into account shutter speed, aperture, focus, white balance, ISO and light metering automatically.

Some cameras have automatic modes programmed to specialize in taking photographs of a particular subject. For example, action or sports mode prioritizes a higher ISO value and faster shutter speeds. It is represented by a running figure on your dial if your camera offers such modes.


Auto mode is sometimes signified by a small green rectangle on the camera’s shooting mode selector wheel.

AWB

Light differs under different conditions. AWB, or Automatic White Balance, works in-camera to measure the white balance (WB) of a scene and remove any color cast that may affect the photograph. In short, it tries to analyze and color-correct your scene automatically. It works fairly well in most cases but can be fooled.

Note: if you shoot in RAW format you can easily tweak the White Balance later in post-production.

Read more: Auto White Balance: Yay or Nay?

B

B stands for Bulb, a mode designed for long exposures such as night scenes and light trails. In Bulb mode, when you press the shutter button, the shutter remains open until the button is pressed again (or until it is released, depending on your camera).

This mode is usually used in conjunction with a tripod and a remote shutter release and is necessary to achieve exposures longer than 30 seconds (the maximum exposure time on most cameras).

Read more: How to Use Bulb Mode for Long Exposure Photography

CMYK

CMYK stands for cyan, magenta, yellow and key. Black is referred to as K, shorthand for the key plate, the printing plate that carries the fine detail of an image in black ink. CMYK is the color space used for most color reproduction printing (magazines, posters, business cards, etc.). This four-color mode combines the inks in set amounts to create a color print. It is a subtractive process: each additional ink absorbs more light to create colors.

Because RGB (the color space in which your camera records an image) provides a larger range of colors than most printers can reproduce, a printed image will often look different from the image you see when you press "print". Converting an image to CMYK in Photoshop or Illustrator before printing produces an on-screen preview much closer to the printed product, allowing you to print an image accurately.

DOF

Depth of Field, or DOF, is the zone of focus in a photograph. Depth of field is affected by the aperture: a large aperture creates a shallow depth of field, in which only a small portion of the image is in focus, while a small aperture creates a large depth of field, with more of the image in focus. Depth of field is also affected by the lens focal length and the distance from the subject to the camera.

Read more: Seeing in Depth of Field: A Simple Understanding of Aperture


A small aperture creates a large depth of field with more in focus

DPI

DPI, or dots per inch, is often used interchangeably with PPI, or pixels per inch. Technically, DPI measures the number of ink dots a printer can lay down along one inch, while PPI measures the number of pixels along one inch of a screen. Printers and screens with higher DPI or PPI values produce clearer, more detailed output.

You need to know the DPI of your printer or lab to correctly size your images for printing. Read more: How to Choose Your Lightroom Export Settings for Printing
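The sizing arithmetic itself is simple: pixels needed equals print dimension in inches times DPI. A quick sketch, assuming 300 DPI as a common lab standard (the helper name is hypothetical):

```python
# Pixels needed for a print = inches x DPI, per dimension.
def pixels_needed(width_in, height_in, dpi=300):
    """Return the (width, height) in pixels required for a clean print."""
    return (round(width_in * dpi), round(height_in * dpi))

# An 8x10-inch print at 300 DPI calls for a 2400 x 3000 pixel image:
print(pixels_needed(8, 10))  # (2400, 3000)
```

Run the check against your own lab's stated DPI before resizing or cropping.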

DSLR

DSLR stands for Digital Single-Lens Reflex camera. A DSLR camera has a mirror that reflects the light coming in from the lens and directs it through a prism or set of mirrors to the viewfinder. This arrangement allows you to see what you are shooting by looking through the viewfinder. When the shutter button is depressed, the mirror flips up and allows the light coming through the lens to reach the camera sensor.

Canon 5D Mark IV full-frame DSLR camera – Image by dPS writer Mark Hughes. 

Read more: The dPS Ultimate Guide to Photography Terms – a Glossary of Common Words and Phrases

F-stop or f-number

The f-stop or f-number is a term that indicates the size of the aperture opening on your lens. Every aperture is expressed as an f-stop or f-number, like f/8 or f/2.8.

Read more: How to Take Control of Aperture and Create Stronger Photos

IS

IS stands for Image Stabilization. This technology goes by several names: Nikon calls it Vibration Reduction (VR), Pentax Shake Reduction (SR), and Tamron Vibration Compensation (VC). Image stabilization is a feature in your lens (not all lenses have it!) that enables you to photograph sharper images when shooting handheld at lower shutter speeds, in dark conditions, or at longer focal lengths.

Note: Some cameras have the stabilization inside the camera body. Read your user manual to be sure.

ISO

ISO takes its name from the International Organization for Standardization. In film photography, ISO (formerly ASA) indicated how sensitive a roll of film was to light. In digital photography, ISO measures the relative sensitivity of the camera sensor. This value can be adjusted in-camera.

The higher the number, the more sensitive the film or sensor is to light. However, the greater the sensitivity, the grainier the image will be (in digital photography the grain is called noise).

Editor’s Note: Before you jump up and down and add a comment below about the fact that the sensitivity of the camera sensor does not actually change, let’s agree to keep it simple for the purpose of this article and these definitions. No, it isn’t that simple, but people new to photography need to take baby steps in understanding these terms, so please accept that we’ve simplified it here.


The graininess in this image is caused by a high ISO value.

JPEG

JPEG (sometimes shortened to JPG) is an image file format. It stands for “Joint Photographic Experts Group” – the name of the group that created the format. It’s one of the most common image formats saved by digital cameras, the other being RAW.

JPEG files are lossy which means that images in this file format are compressed. Lossy formats are smaller and easier to handle, but they suffer from a loss of quality.

Read more: RAW Versus JPEG – Which one is right for you and why?

M

M or Manual Mode is a shooting mode on your camera that when activated, means that you have complete control over every setting on your camera. This includes the aperture, shutter speed, ISO, white balance, metering mode, and more.

Note: Manual Mode and Manual Focus are NOT the same thing and are not exclusive of one another. Meaning you can shoot in Manual Mode using Autofocus, or in an Automatic mode using Manual Focus.

Read more: Simplifying Manual Mode to Help You Take Control of Your Images

M4/3

M4/3 is short for Micro Four Thirds, also known as MFT. Developed by Olympus and Panasonic in 2008, M4/3 is a mirrorless interchangeable-lens standard for digital cameras and lenses. Being mirrorless means the camera does not have an optical viewfinder like conventional SLR/DSLR cameras, but an electronic viewfinder (EVF) instead. The system is simpler and lighter, and allows for smaller cameras than DSLRs.

Read more: The 19 Most Popular Compact System and Mirrorless Cameras with Our Readers

The Olympus OM-D EM-10 is a micro four-thirds camera which means it has a smaller sensor size but is every bit as capable as most other cameras on the market.

P

P stands for Program Mode. This shooting mode has the camera adjust aperture and shutter speed automatically, while allowing you to adjust other settings like ISO, flash, white balance and focusing functions.

Read more: Your Guide to Understanding Program Mode on Your Camera

RGB

Based on the human perception of colors, RGB stands for Red, Green, and Blue. RGB is an additive color space designed for viewing imagery on digital displays (see CMYK above).

Read more: Adobe RGB Versus sRGB – Which Color Space Should You Be Using and Why

S

Shutter Priority Mode (marked S on many cameras, or Tv for Time Value) is a setting that allows you to select the shutter speed while the camera automatically adjusts the aperture for proper exposure. As you adjust the shutter speed, the camera's internal light meter measures the lighting conditions of the scene you're shooting and adjusts the aperture accordingly.

This mode is best used for shooting fast moving objects or when you want to blur or freeze a moving subject.

Panning a moving target is a good time to use Shutter Priority Mode.

Read more: Getting Your Priorities Straight – A Guide to Selecting Your Shot Priority

SLR

SLR, or "single-lens reflex," refers to a film camera built around the single-lens reflex design (see DSLR).

TIFF

Short for Tagged Image File Format, TIFF is a file format for digital images that does not lose color and detail in the way that lossy compression formats such as JPEG files do. This type of file format is described as lossless.

TTL

TTL stands for Through the Lens and refers to an automatic flash metering system. The flash fires a short burst prior to the actual exposure, the camera reads the amount of light coming through the lens, and sets the power of the flash according to the selected aperture. This mode is most often used with the flash on the camera.

Read more: How to Understand the Difference Between TTL Versus Manual Flash Modes

TTL versus manual flash – image by dPS writer Kunal Malhotra.

USM

USM stands for Ultra Sonic Motor, a type of autofocus motor in lenses trademarked by Canon. Equivalent systems include Nikon’s SWM (Silent Wave Motor), Sigma’s HSM (Hyper-Sonic Motor) and Olympus’ SWD (Supersonic Wave Drive Motor). They are designed to have the lens’s autofocus work as silently as possible.

WB

WB stands for White Balance, the act of balancing the color cast found in different lighting conditions for an accurate image (see AWB). White Balance can be set in-camera and adjusted in post-processing if you have shot in RAW format.

Read more: How to Use White Balance as a Creative Tool

Conclusion

There you have it. Of course, there are plenty more photography abbreviations where those came from, but knowing these basics will get you on the right track to navigating the alphabet soup that is photographic lingo! Be sure to add any extra abbreviations you'd like to see in the comments below.

The post A Few Common Photography Abbreviations Explained appeared first on Digital Photography School.



3 Misunderstood But Important Buttons on Your Camera Explained

15 Jun

Today's modern DSLR cameras have so many functions, buttons, and menus that learning to use them properly can be confusing and overwhelming. In this article, you'll learn about three commonly misunderstood but extremely important buttons on your camera: what each one does, and when to use it.

#1 – The Depth of Field Preview Button

This one is not often used but is really handy once you know what it's for: the depth of field preview button. Let's have a look.


#2 – The Exposure Compensation Button

Next up is the Exposure Compensation button or dial. I use this one a lot with my Fuji X-T1 and X100F cameras when I'm shooting in Aperture Priority mode, which is most of the time. See where to find it on your camera and how to apply it here.

#3 – Auto Exposure Lock (AEL)

Finally, the last button you should learn about is the AEL, or Auto Exposure Lock, button. It's very handy when you want to lock your exposure (or your focus, or both) and take multiple images of the same scene with different compositions.


Can you confidently say you are familiar with and comfortable using all these buttons on your camera? If not, make it a habit to learn one new thing about your camera every day. Get to know all the buttons and dials. If you can’t figure it out, consult your camera user manual. Or search for your camera and model number on YouTube to find some good tutorials specific to your setup.

Know your camera inside and out. Then, and only then can you decide if it’s time to upgrade or not. But that’s another topic for another day!

The post 3 Misunderstood But Important Buttons on Your Camera Explained appeared first on Digital Photography School.

