
Archive for February, 2018

6 Tips for How to Photograph Waterfalls

22 Feb

Waterfalls are some of the most beautiful natural features you will ever get the chance to photograph and are a very popular subject for landscape photographers. Photographing waterfalls provides a great way to get outdoors and explore nature.

There is something magical about the patterns and sounds of flowing water that really heighten your senses and make you feel at one with nature. But although waterfalls look great, you may be wondering: how do you actually photograph them? Here are six tips to help you on your way.

1 – Get the right equipment

You will be better equipped to photograph waterfalls if you have the right equipment. A wide-angle lens is essential to broaden the angle of view and ensure you are able to photograph the whole waterfall. You will also be able to get up close to the falls rather than photographing them from a distance.

Once you have found a great waterfall and have the right equipment to capture it, you are ready to take some photographs.

2 – Experiment with different shutter speeds

So now that you have the gear, how do you take photos that capture the authenticity and beauty of the scene?

When photographing waterfalls, finding the ideal shutter speed involves a lot of experimenting. This step is all about trial and error, which is part of the fun. Try taking shots with different shutter speeds and check out the results to see the differences.


I would recommend taking pictures with both fast and slow shutter speeds, ranging from 1/500th of a second to a few seconds, and seeing which style of image you prefer.
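If you like working through a bracket methodically, the one-stop steps between 1/500th of a second and two seconds are easy to enumerate, since each stop is simply a doubling of the shutter time. Here is a quick sketch of that arithmetic; the helper function is my own illustration, not anything from a camera's menus or API:

```python
def shutter_bracket(fastest: float, slowest: float) -> list[float]:
    """Shutter speeds in one-stop steps, doubling from `fastest` up to `slowest` (seconds)."""
    speeds = []
    t = fastest
    while t <= slowest:
        speeds.append(round(t, 4))
        t *= 2  # doubling the time = one stop more light (and more motion blur)
    return speeds

# 1/500 s up to 2 s gives ten one-stop settings to try
bracket = shutter_bracket(1 / 500, 2.0)
```

Shoot one frame at each of those settings and you will have the full range, from frozen droplets to silky blur, to compare side by side.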

3 – Freeze motion

How you shoot waterfalls effectively depends on the look and feel of the image you are trying to achieve. If you want to capture the water in a static way, you will need to choose a fast shutter speed to freeze the motion of the water. This isolates the water in motion and gives a very different result to using an extended shutter speed.

See the difference between the three images below and how the change in shutter speed affects the water. (Images courtesy of dPS Managing Editor, Darlene Hildebrandt)

ISO 100, f/4, 1/640th of a second, no ND filter.

ISO 100, f/22, 0.3 seconds with ND filter.

ISO 100, f/22, 1.3 seconds with ND filter.

4 – Blur motion

Using a slow shutter speed will help you to capture the water’s movement. You will find that the longer the shutter is open, the smoother the water will be. Be careful not to use a shutter speed that is too slow if the water is very fast flowing as the water may become one large white mass without any definition.

Generally, you will obtain better results by using an extremely slow shutter speed of over a second. However, this will not be possible if you are hand holding the camera due to excessive camera shake, which brings us to the next tip.

5 – Use a tripod

Investing in a tripod will help to keep the camera more stable and enhance your chances of getting good images. The main advantage of using a tripod is that you are more likely to capture images of waterfalls that are sharper as the camera is less prone to movement during slower exposures.

Using a tripod will allow you to use slower shutter speeds to give you a smoother look and feel to your waterfall images. Images captured using long shutter speeds tend to look more dramatic and the silky water looks more appealing and pleasing to the eye.

If you do not have a tripod, you could set your camera on a stone or some other object to capture part or all of the waterfall.

6 – Use a polarizing filter

One of the best ways to add some color to your images is to use a polarizing filter. It is a great way to deepen colors by increasing their saturation. But be aware that a polarizer also cuts the amount of light entering the camera, costing you up to two stops of exposure.
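That two-stop loss translates directly into shutter speed: each stop the filter cuts means doubling the exposure time to compensate. A minimal sketch of the calculation (the helper is hypothetical, purely for illustration):

```python
def compensated_shutter(base_shutter: float, filter_stops: float) -> float:
    """Shutter time (seconds) needed after adding a filter that cuts `filter_stops` of light."""
    return base_shutter * (2 ** filter_stops)

# A 2-stop polarizer turns a 1/4 s exposure into a full second
new_time = compensated_shutter(0.25, 2)
```

In other words, the polarizer's light loss is not a drawback for waterfalls; it buys you exactly the longer shutter speeds that produce silky water.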

Polarizers also help to eliminate glare and reflections from the surface of the water and can be used to increase contrast. This is especially true when shooting during the day in bright conditions.

When adding a polarizer, the water you capture should become blurred, depending on how fast it is flowing. The advantage to using a polarizer is that you can increase the exposure time and slow the shutter speed, as the amount of light going through the lens is decreased. This allows you to create images with motion and silky-smooth water action.

Your turn

With these practical tips, it’s time for you to get out there and start photographing your next waterfall!

The post 6 Tips for How to Photograph Waterfalls by Jeremy Flint appeared first on Digital Photography School.



 

Fujifilm interview: ‘We want the X-H1 to be friendly for DSLR users’

21 Feb

Fujifilm’s new X-H1 sits above the X-T2 in the company’s X-series APS-C lineup. As well as offering several enhancements in its core stills photography feature set, the X-H1 also brings high-end 4K video capture with up to 200Mbps capture and 5-axis in-body stabilization.

At the X-H1’s launch in Los Angeles last week, we sat down with the camera’s product manager, Jun Watanabe, to get a detailed look at the new camera. The following interview has been edited for clarity and flow.


Jun Watanabe is the Manager of Product Planning in the Sales & Marketing group of the Optical Device & Electronic Imaging Products Division at Fujifilm.

Fujifilm has stated previously that IBIS would not be possible in X-series cameras because of the small imaging circle of some XF lenses. What changed?

We have spent the past two or three years developing a system where using both hardware and software, we can cover [the necessary] imaging circle. The most important thing is precision. Because a sensor with IBIS is a floating device, it has to be perfectly centered and perfectly flat. We had already achieved a sensor flatness tolerance down to an order of microns, but the challenge was to maintain this precision with IBIS.

A laser measurement device is used during the process of manufacturing the image stabilization unit, and the assembly process also includes inspection and adjustment of each individual camera. For that reason, a micron order level of sensor parallelism is realized even while IBIS is activated.

A chart showing CIPA figures for the image stabilization benefit of all compatible XF lenses when used with the X-H1. As you can see, the least benefit comes with the 10-24mm wide-angle zoom. Users of the vast majority of XF lenses should see 5 stops of stabilization benefit.
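As a rough rule of thumb, each stop of stabilization doubles the slowest shutter speed you can hand-hold. Starting from the classic 1/effective-focal-length guideline (my assumption here, not part of the CIPA methodology or a Fujifilm specification), five stops works out like this:

```python
def slowest_handheld(focal_length_mm: float, crop_factor: float, ibis_stops: float) -> float:
    """Approximate slowest hand-holdable shutter (seconds), extended by stabilization stops."""
    base = 1 / (focal_length_mm * crop_factor)  # 1/effective-focal-length rule of thumb
    return base * (2 ** ibis_stops)

# 35mm lens on the APS-C X-H1 (1.5x crop) with 5 stops of IBIS:
# from roughly 1/50 s unaided to roughly 0.6 s stabilized
limit = slowest_handheld(35, 1.5, 5)
```

That is the practical meaning of a "5 stops" claim: shutter speeds around half a second become hand-holdable where 1/50 s was the old limit.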

Are there some lenses that will deliver better stabilization than others, as a result of having a larger imaging circle?

Yes. The most effective is the 35mm F1.4. But every XF lens without OIS will benefit from five stops of stabilization.

When you were developing the X-H1, how important was the requirement to add high-end video features?

Many videographers gave us input. A lot of them said they needed in-body stabilization, and F-Log in-camera recording. Those were the top requests from video users.

Compared to the X-T2, the X-H1 is a larger, more DSLR-styled camera which inherits a lot of styling cues from the medium-format GFX 50S. It is also 25% thicker, and better sealed against the elements.

What kind of feedback have you had from videographers since the X-H1 was announced?

Pretty good. We’ve heard from videographers that they really like the 200Mbps internal recording and 12 stops of dynamic range with the Eterna film simulation. They’ve told us that this combination is the best solution for quick, high-quality video capture.

We received a lot of feedback after we launched the X-T2, from videographers and DPs who said that our film simulation modes in video were unique, but too still photography oriented, with the narrow dynamic range. They wanted a real cinema look. On the product planning side we wanted to create a more cinematic look, so we studied one of our cine film emulsions – ‘Eterna’. That was the starting point.

Velvia is tuned to give you colors as you remembered them. More vivid blue skies, for example. Eterna is tuned in the opposite direction, for moderate saturation, with more cyan and green bias. With Eterna, combined with the X-H1’s dynamic range settings, we have achieved a 12 stop dynamic range.

How did you decide on what video features to include in the camera? Some expected features – like zebra – are missing.

Honestly, we couldn’t add zebra because of hardware constraints. The processor cannot support it. It requires too much processing power. At this time, we’ve achieved the best possible performance for the processor.

The X-H1 (on the left) features a substantially deeper handgrip than the X-T2, which we’re told was a major feature request from existing X-series customers. It also sports a top-plate mounted LCD, which should make it more familiar to photographers coming from using an enthusiast DSLR.

Is 8-bit capture enough for F-Log recording?

There are 10-bit cameras on the market, but we recommend using Eterna to short-cut the recording process. We think 8-bit is enough for good quality.

Do you think the X-H1 will be bought mostly by stills photographers, or videographers?

We are targeting both. We have greatly upgraded the video performance [compared to the X-T2] but we have upgraded the stills performance too, especially autofocus in low light, and subject tracking. We also added flicker reduction and dynamic range priority, and so on. We are targeting both kinds of professional users.

When it comes to autofocus, minimum low light AF response has been improved from 0.5EV to -1EV. We’ve also introduced a new phase-detection autofocus algorithm and parallel data processing. The X-H1 has the same processor as the X-T2 but the algorithms are new. A single autofocus point in the X-T2 was divided into 5 zones. In the X-H1, this has been increased to 20 zones.

Data from each zone is processed in three ways, for horizontal detail, vertical detail, and fine, natural detail like foliage or a bird’s feathers. This processing happens simultaneously, rather than in series, which is a big advantage over the X-T2. We’ve also achieved phase-detection performance down to F11, which means that phase-detection autofocus will be possible with our 100-400mm lens in combination with a 2X teleconverter, with a much higher hit-rate compared to the X-T2.
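The F11 figure follows from simple teleconverter arithmetic: a converter multiplies both the focal length and the effective f-number by its factor. A small sketch (the helper function is my own, for illustration only):

```python
def with_teleconverter(focal_mm: float, f_number: float, tc: float) -> tuple[float, float]:
    """Effective focal length and f-number after mounting a teleconverter."""
    return focal_mm * tc, f_number * tc

# 100-400mm at 400mm f/5.6 with a 2x TC: 800mm at f/11.2,
# i.e. the nominal f/11 stop at which the X-H1 can still phase-detect
focal, f = with_teleconverter(400, 5.6, 2)
```

So supporting phase detection down to F11 is precisely what makes the 100-400mm plus 2X teleconverter combination usable.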

During shooting, the predictive AF algorithm now generates information from captured images in a sequence, for more reliable subject tracking while zooming.

Now that you have a powerful 4K-capable video camera with IBIS, how will this change how you develop lenses, in the future?

For stills lenses, our approach will stay the same. But we’ve also announced two cinema lenses. These both work with IBIS and the MKX 18-55mm zoom will deliver 5 stops of correction. This is a unique selling point.

The X-H1 is considerably larger than its predecessors. Is there a point when the size advantage of APS-C compared to full-frame gets lost?

Professionals are generally more accepting of larger cameras, and [compared to DSLRs] the X-H1 isn’t that big. And we have had requests from some of our professional users for a bigger camera, especially those photographers that use our longer lenses. A bigger grip and more solid body were both requested.

Here’s that deeper handgrip, in action.

When the camera gets bigger, does it make some aspects of design easier? Like heat management?

Yes, the increased camera volume gives us some advantages when it comes to heat and cooling systems. In fact the X-H1’s 4K recording time is 50% longer than the X-T2, thanks to a new cooling system and two large copper heat sinks.

How much technology from the GFX 50S has made it into the X-H1?

Some of the operation and operability improvements have made their way into this camera. We hope that some DSLR users will come over to the X-series, thanks to things like the top LCD, twin control dials and so on. We wanted the X-H1 to be ‘friendly’ to photographers who are used to DSLRs.


Editor’s note:

I always enjoy talking to engineers, even with the caveat that some of what they say occasionally goes completely over my head. I was very surprised, for instance, after hearing Mr. Watanabe detail all of the clever ways in which the X-H1 processes AF information, to be told that the new camera has the same processor as the X-T2.

Quite how Fujifilm has managed to eke such increased efficiency from essentially the same amount of computing power is beyond my intellect, but if the claimed increase in performance holds up in our testing, the company deserves a lot of credit. And given Fujifilm’s excellent track record of updating older models, it’s not impossible to imagine that the X-T2 might yet benefit from some of these advances.

Apparently there were internal discussions about including a dual, or even a completely new processor in the X-H1, but this would have added to development time, as well as cost. It’s possible too that some of the heat-management benefits of the X-H1’s larger internal volume compared to the X-T2 might have been nullified.

‘Silent control’ in movie shooting allows you to adjust exposure settings by touching the rear LCD – avoiding the noise and vibration of clicky buttons and dials making its way into your footage.

And in these days of 4K video capture, heat matters. The X-H1 isn’t a perfect video camera by any means, but it’s the most convincing X-series model yet. It should compare well against most of its competitors, barring only the more specialized Panasonic GH5/S. In-camera 5-axis stabilization is a big part of that (involving 10,000 calculations per second, if you can believe it), but features like 12EV of video dynamic range (Eterna + DR400%), internal F-log recording and a maximum quality of 200 Mbps are sure to attract the attention of professional, as well as casual videographers.

Even for people with little or no interest in video, the X-H1’s enhanced feature set might still be enough to justify the extra cost over the X-T2. And possibly also its ergonomics. According to Mr. Watanabe, one of the most requested features from Fujifilm’s X-series customers was a bigger grip. Just about everything on the X-H1 is bigger. Obviously this means a larger camera overall, but Fujifilm is hoping this will make the X-H1 appeal to more traditional DSLR users.

Will the X-H1 prove a hit? I hope so. It’s an impressive camera, and a bold move by Fujifilm. I can’t see the company creating a dedicated video camera any time soon (and Mr. Watanabe would not be drawn on this question when I asked him) but however it gets there, one thing is clear: Fujifilm really wants to be taken seriously by filmmakers, as well as traditional stills photographers.

Articles: Digital Photography Review (dpreview.com)

 

 

Tips for Shooting Landscape Photography Towards the Sun

21 Feb

“Avoid photographing towards the sun” is one of the most common tips you’ll hear for landscape photography. In fact, it’s a tip that I’ve shared myself.

While it’s not without reason that it’s a well-known tip, it might not be as relevant today as it was several years ago. Today’s sensors and post-processing tools are much more forgiving, and what was once a bad idea can now be an opportunity.

In this article, I’ll show you how including the sun in the frame can enhance the atmosphere and add an extra dimension to your images, and share my best tips for doing so.

Why you should include the sun in your images

I’m sure that many of you are ready to jump straight into the comment section right now and tell me how much of a bad idea it is to shoot towards the sun. But give me a minute to explain a few reasons why it’s something you might want to consider doing with your landscape photography.

The greatest benefit of including the sun in the frame is that it adds depth to the image. Take the image above as an example. Remove the sun and the image becomes flat and much less interesting. With the sun included, the image comes to life and draws you into it.

Compositionally it can also be beneficial. Of course, this depends on where you place the sun. In the example above, the bright sun serves as a focal point. Naturally, the viewer’s eye is guided along the cliffs and up towards the bright area.

Keep in mind that our eyes are naturally attracted to the brighter parts of the image.

Another benefit of shooting towards the sun is that you often get beautiful shadows striking towards you. This serves as additional leading lines and benefits the composition.

Tips for including the sun in your images

Now, there’s one thing I need to make clear: including the sun in an image won’t always be beneficial. There are certain conditions or methods you should take advantage of for this to work. Here are some tips.

The time of day matters

While there are exceptions, the best images come when the sun is low on the horizon. The sun then creates a soft glow and gives a nicely balanced light.

During midday when the sun is positioned higher in the sky, the light is harsh and less pleasing to the eyes. Generally, this is something you want to avoid.

Consider the sun’s placement within the frame

I’ll start by saying this: there’s no single correct spot to place the sun within your image. Sometimes it’s beneficial to place it in the center, while other times it’s better to place it to the side.

This is where trial and error, and experience come into play.

In the image above, I chose to place the sun at the very edge of the frame. Partly obscured by the clouds, it doesn’t demand too much attention; instead, you’re drawn to the beautiful light hitting the landscape.

If you are familiar with semi-advanced post-processing techniques, you might be aware of a processing style called light bleed. This is a technique that involves heavy dodging and enhancing/creating a light source that strikes through the image. However, this is an effect you’re able to get in-camera as well by placing the sun at the corner or edge of your frame.

Other times, you want to place the sun in the center of the image. In the image above, placing the sun in the center adds a light source that your eyes naturally go toward. Had I instead placed the sun to the side, this image would be less balanced.

Obscure the sun

In my opinion, one of the most efficient ways of including the sun in your image is by partly obscuring it. Combining that with a narrow aperture, you get a nice sun-star or sunburst.

Use a Graduated ND Filter

Since the sun is so much brighter than the surrounding landscape, it can be hard to capture a well-exposed image when including it in the frame. By using a Graduated ND Filter you’re able to darken the sky in your image – meaning that you can capture a well-balanced image even with the sun in the frame.

Unfortunately, a Graduated ND Filter is not always ideal. Since the transition between darkened and transparent parts of the filter is a straight line, it can create some unwanted effects if you’re photographing a scene where something is projecting above the horizon.

Graduated ND Filters work best when the horizon is flat, as in the image below:

… Or bracket multiple exposures

Another more flexible method of capturing well-balanced images with the sun included is to bracket multiple exposures and blend them in a photo editor. This is the better choice when the sun is at the highest position in the sky, as the contrast is even greater.

For the image below, I captured three images: one exposed for the landscape, one exposed for the sky, and one even darker to balance out the brightest parts.
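Since each EV is a doubling or halving of the light, a bracket like that can be described as shutter times at EV offsets from the base exposure. Here is a small sketch of the idea; the specific base speed and offsets are my illustration, not the exact settings used for the image:

```python
def bracket_shutters(base_shutter: float, ev_offsets: list[int]) -> list[float]:
    """Shutter times (seconds) for each EV offset from a base exposure."""
    return [base_shutter * (2 ** ev) for ev in ev_offsets]

# Base of 1/15 s for the landscape, then -2 EV for the sky
# and -4 EV for the brightest area around the sun
times = bracket_shutters(1 / 15, [0, -2, -4])
```

Many cameras will shoot a sequence like this automatically via auto exposure bracketing; the frames are then blended in a photo editor.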

Your turn

Hopefully, I’ve been able to convince you that shooting towards the sun isn’t a complete no-no anymore. Have you captured any images that are shot towards the sun for your landscape photography? I would love to see them in a comment below!

The post Tips for Shooting Landscape Photography Towards the Sun by Christian Hoiberg appeared first on Digital Photography School.



 

Lensrentals tears down the Sony a7R III in search of better weather sealing

21 Feb


Our good friend Roger Cicala over at Lensrentals finally got around to tearing down the Sony a7R III, to see if Sony was being honest when it claimed the newest a7R was much better weather sealed than its predecessor. The results? Well, it’s a “good news, bad news” situation. Yes, Sony was being truthful… but it screwed up in one major place.

You can see the full teardown over on the Lensrentals blog—Roger tears the thing all the way down, even giving us a great look at the IBIS system and how far the sensor can travel—but the TL;DR version goes something like this:

Sony weather sealed most of this camera very well, much better than its predecessor. BUT, for some reason, Sony left the bottom of this camera extremely vulnerable to water. You can see just how vulnerable in the gallery above. Or, if you prefer words, here’s Roger’s conclusion:

Sony spoke truly. Except for the bottom, this camera has thorough and extensive weather sealing, as good as any camera I’ve seen. (Before you Pentax guys start, I have not taken apart a Pentax so it may be completely sealed in a super glue matrix for all I know.)

That being said, the bottom of the camera is not protected worth a damn. If you’re out in a sprinkle or shower, this probably doesn’t matter; water hits the top first. But if you’re in severe weather, near surf, or might set your camera down where someone might spill something, you need to be aware of that.

To read the full conclusion, scroll through the entire teardown, and see just how many rubber gaskets and foam pieces Sony added to the a7R III to keep it safe from inclement weather, head over to the Lensrentals blog.


 

This strange gadget literally shocks you into taking ‘better’ photos

21 Feb

A new project called Prosthetic Photographer involves a very real gadget designed to zap humans into taking better images. The system was created by artist and designer Peter Buczkowski, and it works with both DSLR and mirrorless cameras. Using artificial intelligence, the device constantly scans for ‘ideal’ scenes and uses mild electric shocks to force/train the photographer to capture them.

“The Prosthetic Photographer enables anybody to unwillingly take beautiful pictures,” Buczkowski explains on the project’s website. The gadget is a way for an AI to train a human, though the AI itself was first trained using a dataset containing 17,000 images, and those images were captured and rated by humans.

Using what it learned about quality photos, the Prosthetic Photographer AI identifies scenes worth capturing and trains the human behind the camera to recognize them. To do this, the AI triggers a small electric shock delivered through electrodes on the handgrip, which forces the photographer’s finger to press a button and capture said ideal scene.

As demonstrated in the video at the top of this post, users can adjust the shock strength using knobs on the back of the device. “This system is part of a new aesthetic, based on computer-generated decisions that were taught by previous human skill,” Buczkowski explains on his site. “The conscious skill of photography becomes obsolete this way.”

The resulting images reflect the AI’s own aesthetic tastes, which are based on the images used to train the system. Of course, some of the scenes captured by the human being ‘trained’ are… less than striking.


 

Drone may have caused helicopter crash in South Carolina

21 Feb

Officials are investigating whether a recent helicopter crash near Charleston, South Carolina, was caused by a civilian drone operated nearby. The accident, which happened last Wednesday, involved a Robinson Helicopter Co. R22 helicopter carrying an instructor pilot and student.

The two reported that a small UAV flew directly into their path, forcing the instructor to take evasive action. That evasive action, unfortunately, caused the helicopter’s tail to hit a tree, sending the helicopter into a crash landing, according to Bloomberg. Sources speaking to the publication report that the helicopter’s tail was severely damaged; fortunately, neither person was injured.

A National Transportation Safety Board spokesman confirmed to Bloomberg that it is looking into initial reports claiming a drone contributed to the crash. Assuming that’s true, this would be the first time that a drone has caused an aircraft crash in the US. The FAA hasn’t commented on the possibility of a drone’s involvement.

Reports of drones being operated illegally, near-misses with aircraft, and even possible collisions are increasing. In recent days, a video surfaced of a drone being operated directly above a commercial passenger jet in Las Vegas. Following that, more recent reports claim a drone struck a tour helicopter in Hawaii. Canadian officials also recently released a report detailing a collision between a drone and a small plane.

Though the drone model hasn’t been stated (and may not be known), Chinese drone maker DJI has preemptively released a statement on the matter, saying:

DJI is trying to learn more about this incident and stands ready to assist investigators. While we cannot comment on what may have happened here, DJI is the industry leader in developing educational and technological solutions to help drone pilots steer clear of traditional aircraft.

Last year, DJI introduced a system called AeroScope that helps law enforcement and airport officials identify drones being operated in restricted airspace.


 

dPS Writer’s Favourite Lens: Canon 100mm Macro

21 Feb

The Canon 100mm macro lens was on my Want List for such a long time, next to the Canon 10-22mm Ultra Wide-Angle. Oddly, once I did get it, I never used it, and it sat gathering dust in the cupboard for a couple of years. Now it is my go-to lens for doing still life, food and of course, macro photography.

Why is it my favorite lens?

Sharpness, image quality, color, and versatility – it has it all!

I know when using this lens it is going to pick up absolutely every detail, and when it is sharp it is crystal clear. Unfortunately, due to the combined weight of the lens (625g) on my Canon 7D MK II, I find it difficult to handhold and get sharp shots. So I use it on my tripod to guarantee the focus is bang on.

Merits of the Canon 100mm macro lens

This lens has a richness to the colors that I appreciate; it gives the best color reproduction of any of my lenses. Also, when you are shooting wide open at f/2.8, the soft background blur is quite delicious as well.

Finally, the versatility of this lens, given it is a macro lens, is impressive. I use it for macro, food photography, flower photography, and other still life subjects. It is also a favorite lens for portrait photographers due to the factors that make it my personal favorite.

It’s quiet, it’s fast and it’s a lovely lens to use. Once I mastered the art of fine focusing with a really tiny depth of field and was able to consistently get sharp shots, the quality of the images impressed me more and more.

How I use it

1. Food Photography

Working with natural light in my home studio means the light is not always abundant. Or you may need to filter it quite heavily so you don’t blow out the highlights on some whipped cream or icing. Working in slightly less than ideal light conditions is where I find this lens really comes into its own.

With a 67mm filter diameter, it has a lot of surface area to bring in the available light. The maximum f/2.8 aperture captures all the light possible. While I might have to increase ISO a small amount, it is not enough to affect the quality of the image.

With such high image quality, capturing the finest small details really adds character to food shots taken with this lens. Water droplets on fruit or the tiny hairs on a raspberry become things of wonder, brought into view by the capabilities of this lens.

2. Flower Photography

Photographing flowers is what finally forced me to get my Canon 100mm lens out of storage and start using it. I had become interested in still life photography and was using flowers as the subject to base my compositions around.

Flowers offer so many opportunities to be creative with this lens: you can shoot the whole flower, move in to shoot just a portion of it, or really get into the macro side of things.

The lovely colour and soft bokeh suit flower photography very well, and I enjoy using it a great deal. It is a lot of fun to experiment with areas of selective focus or just using depth of field in unexpected ways.

3. Macro photography

There is a whole world of things too small for our eyes to see naturally that suddenly become revealed when we shoot with a macro lens. It is fascinating to uncover tiny details in everyday objects.

Playing with abstracts of textures or just exploring the things we cannot normally see are possible with the 100mm macro lens. The ordinary becomes extraordinary when you can get up close and personal. When my camera is mounted on my tripod, I know that I can get sharp focus with a very narrow depth of field on a very small subject.

 

4. Other options

I am not a portrait photographer, but I do have cats, and they are fun to shoot with this lens as it picks up so much detail. I personally struggle to successfully handhold my 7D Mark II with this lens and get sharp images, so I don’t often shoot with it off my tripod.


Specifications

The Canon EF 100mm f/2.8L Macro IS USM lens – full specifications on the Canon site – 625g, minimum focus distance of 300mm, Hybrid Image Stabilization for handheld macro shooting.

Pros

  • Sharpness
  • Shallow depth of field
  • Smooth bokeh
  • Colour rendering
  • Hybrid Image Stabilization
  • EF and EF-S compatible
  • 1:1 magnification
  • Comes with a lens hood and carry bag

Cons

  • Heavy and can be difficult to handhold, requiring a tripod
  • Expensive
  • 300mm minimum focus distance

Conclusion

Overall for me, the pros of shooting with this lens far outweigh the cons. Have you used the Canon 100mm macro lens or one similar? Please share in the comments below if you enjoy it as much as I do.


The post dPS Writer’s Favourite Lens: Canon 100mm Macro by Stacey Hill appeared first on Digital Photography School.


Digital Photography School

 
Comments Off on dPS Writer’s Favourite Lens: Canon 100mm Macro

Posted in Photography

 

DPReview on TWiT: tech trends in smartphone cameras

20 Feb

As part of our regular appearances on the TWiT Network (named after its flagship show, This Week in Tech) show ‘The New Screen Savers’, our Science Editor Rishi Sanyal joined host Leo Laporte and co-host Megan Morrone to talk about how smartphone cameras are revolutionizing photography. Watch the segment above, then catch the full episode here.

Rishi has also expounded upon some of the topics covered in the segment below, with detailed examples that clarify some of the points covered. Have a read after the fold once you’ve watched the segment.

You can watch The New Screen Savers live every Saturday at 3pm Pacific Time (23:00 UTC), on demand through our articles, the TWiT website, or YouTube, as well as through most podcasting apps.


So who wins? iPhone X or Pixel 2?

Not so fast. Neither.

Each has its strengths, which we hope to tell you about in our video segment above and in our examples below. Google and Apple take different approaches, and each has its pros and cons, but there are common overlapping practices and themes as well. And that’s before we begin discussing video, where the iPhone’s 4K/60p HEVC video borders on professional quality while Google’s stabilization may make you want to chuck your gimbal.

Smartphones have to deal with the fact that their cameras, and therefore sensors, are tiny. And since we all (now) know that, generally speaking, it’s the amount of light you capture that determines image quality, smartphones have a serious disadvantage to deal with: they don’t capture enough light. But that’s where computational photography comes in. By combining machine learning, computer vision, and computer graphics with traditional optical processes, computational photography aims to enhance what is achievable with traditional methods.

Intelligent exposure and processing? Press. Here.

One of the defining characteristics of smartphone photography is the idea that you can get a great image with one button press, and nothing more. No exposure decision, no tapping on the screen to set your exposure, no exposure compensation, and no post-processing. Just take a look at what the Google Pixel 2 XL did with this huge dynamic range sunrise at Banff National Park in Canada:

Sunrise at Banff, with Mt. Rundle in the background. Shot on Pixel 2 with one button press. I also shot this with my Sony a7R II full-frame camera, but that required a 4-stop reverse graduated neutral density (‘Daryl Benson’) filter, and a dynamic range compensation mode (DRO Lv5) to get a usable image. While the resulting image from the Sony was head-and-shoulders above this one at 100%, I got this image from the Pixel 2 by just pointing and shooting.

Apple’s iPhones try to achieve similar results by combining multiple exposures if the scene has enough contrast to warrant it. But iPhones can’t achieve these results (yet) since they don’t average as many ‘samples’ as the Google Pixel 2. Sometimes Apple’s longer exposures can blur subjects, and iPhones tend to overexpose and blow highlights for the sake of exposing the subject properly. Apple is also still pretty reluctant to enable HDR in ‘Auto HDR’ mode.

The Pixel 2 was able to achieve the image above by first determining the correct focal plane exposure required to not blow large bright (non-specular) areas (an approach known as ETTR or ‘expose-to-the-right’). When you press the shutter button, the Pixel 2 goes back in time 9 frames, aligning and averaging them to give you a final image with quality similar to what you might expect from a sensor with 9x as much surface area.

How does it do that? It’s constantly keeping the last 9 frames it shot in memory, so when you press the shutter it can grab them, break each into many square ’tiles’, align them all, and then average them. Breaking each image into small tiles allows for alignment despite photographer or subject movement by ignoring moving elements, discarding blurred elements in some shots, or re-aligning subjects that have moved from frame to frame. Averaging simulates the effects of shooting with a larger sensor by ‘evening out’ noise.

That’s what allows the Pixel 2 to capture such a wide dynamic range scene: expose for the bright regions, while reducing noise in static elements of the scene by image averaging, while not blurring moving (water) elements of the scene by making intelligent decisions about what to do with elements that shift from frame to frame. Sure, moving elements have more noise to them (since they couldn’t have as many of the 9 frames dedicated to them for averaging), but overall, do you see anything but a pleasing image?
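The align-and-average step can be sketched in a few lines. This is a simplified illustration, not Google’s actual HDR+ pipeline: it assumes the 9 frames are already aligned and static (real HDR+ does the per-tile alignment and rejection described above), and all scene values here are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical static scene: a smooth luminance gradient as "ground truth"
truth = np.tile(np.linspace(50, 200, 64), (64, 1))

# Simulate a 9-frame burst, each frame corrupted by sensor noise
frames = [truth + rng.normal(0, 10, truth.shape) for _ in range(9)]

# Averaging N aligned frames cuts noise by roughly sqrt(N) -- here ~3x --
# which mimics a sensor with about 9x the light-gathering area
stacked = np.mean(frames, axis=0)

single_err = np.std(frames[0] - truth)   # noise of one frame, ~10
stacked_err = np.std(stacked - truth)    # noise after averaging, ~3.3
```

The per-tile trick is what lets this averaging survive motion: tiles whose content shifted between frames are re-aligned or dropped from the average rather than ghosting.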

Autofocus

Who focuses better? Google Pixel 2, hands down. Its dual pixel AF uses nearly the entire sensor for autofocus (binning the high-resolution sensor into a low-resolution mode to decrease noise), while also using HDR+ and its 9-frame image averaging to further decrease noise and have a usable signal to make AF calculations from.

Google Pixel 2 can focus lightning fast even in indoor artificial light, which allowed me to snap this candid before it was over in a split second. The iPhone X captured a far less interesting moment seconds later when it finally achieved focus, missing the candid moment.

And despite the split pixels in the Pixel 2’s sensor ‘seeing’ left and right perspectives with less than 1mm of stereo disparity, an impressive depth map can be built, rendering an optically accurate lens blur. This isn’t just a matter of masking the foreground and blurring the background; it’s an actual progressive blur based on depth.

That’s what allowed me to nail this candid image the instant after my wife and child whirled around to face the camera. Nearly all my iPhone X images of this scene were either out-of-focus or captured a less interesting, non-candid moment because of the shutter lag required to focus. The iPhone X only uses approximately 3% of its pixels for its ‘Dual PDAF’ autofocus, as opposed to the Pixel 2’s use of its entire sensor combined with multi-frame noise reduction, not just for image capture but also for focus.

Portrait Lighting

While we’ve been praising the Pixel phones, Apple is leading smartphone photography in a number of ways. First and foremost: color accuracy. Apple displays are all calibrated and profiled to display accurate colors, so no matter what Apple or color-managed device (or print) you’re viewing, colors look the same. Android devices are still the Wild West in this regard, but Google is trying to solve this via a proper color management system (CMS) under-the-hood. It’ll be some time before all devices catch up, and even Google itself is struggling with its current display and CMS implementation.

But let’s talk about Portrait Lighting. Look at the iPhone X ‘Contour Lighting’ shot below, left, vs. what the natural lighting looked like at the right (shot on a Google Pixel 2 with no special lighting features). While the Pixel 2 image is more natural, the iPhone X image is far more interesting, as if I’d lit my subject with a light on the spot.

Apple iPhone X, ‘Contour Lighting’ (left) | Google Pixel 2 (right)

Apple builds a 3D map of a face using trained algorithms, then allows you to re-light your subject using modes such as ‘natural’, ‘studio’ and ‘contour’ lighting. The latter highlights points of the face like the nose, cheeks and chin that would’ve caught the light from an external light source aimed at the subject. This gives the image a dimensionality you could normally only achieve using external lighting solutions or a lot of post-processing.

Currently, the Pixel 2 has no such feature, so we get the flat lighting the scene actually had on the right. But, as you can imagine, it won’t be long before we see other phones and software packages taking advantage of—and even improving on—these computational approaches.

HDR and wide-gamut photography

And then we have HDR. Not the HDR you’re used to thinking about, which creates flat images from large dynamic range scenes. No, we’re talking about the ability of HDR displays—like bright, contrasty OLEDs—to display the wide range of tones and colors cameras can capture these days, rather than sacrificing global contrast just to increase and preserve local contrast, as traditional camera JPEGs do.

iPhone X is the first device ever to support the HDR display of HDR photos. That is: it can capture a wide dynamic range and color gamut but then also display them without clipping tones and colors on its class-leading OLED display, all in an effort to get closer to reproducing the range of tones and colors we see in the real world.

iPhone X is the first device ever to support HDR display of HDR photos

Have a look below at a Portrait Mode image I shot of my daughter that utilizes colors and luminances in the P3 color space. P3 is the color space Hollywood now uses for most of its movies (it’s similar to, though shifted from, Adobe RGB). You’ll only see the extra colors if you have a P3-capable display and a color-managed OS/browser (macOS + Google Chrome, or the newest iPads and iPhones). On a P3 display, switch between ‘P3’ and ‘sRGB’ to see the colors you’re missing with sRGB-only capture.

Or, on any display, hover over ‘Colors in P3 out-of-gamut of sRGB’ to see (in grey) what you’re missing with a sRGB-only capture/display workflow.

iPhone X Portrait Mode: image in P3 color space / image in sRGB color space / colors in P3 out-of-gamut of sRGB highlighted in grey

Apple is not only taking advantage of the extra colors of the P3 color space, it’s also encoding its images in the ‘High Efficiency Image Format’ (HEIF), which is an advanced format aimed to replace JPEG that is more efficient and also allows for 10-bit color encoding (to avoid banding while allowing for more colors) and HDR encoding to allow the display of a larger range of tones on HDR displays.
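The banding benefit of 10-bit encoding is easy to quantify: a smooth gradient quantized to 8 bits per channel has roughly four times the maximum rounding error of the same gradient at 10 bits. A small sketch (illustrative only, not HEIF’s actual encoder):

```python
import numpy as np

# A smooth tonal ramp, sampled finely
ramp = np.linspace(0.0, 1.0, 4096)

# Quantize to 8-bit (256 levels) and 10-bit (1024 levels) per channel
q8 = np.round(ramp * 255) / 255
q10 = np.round(ramp * 1023) / 1023

# Worst-case rounding error -- the source of visible banding in gradients
err8 = np.max(np.abs(ramp - q8))    # ~1/510
err10 = np.max(np.abs(ramp - q10))  # ~1/2046, 4x finer
```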

But will smartphones replace traditional cameras?

For many, yes, absolutely. You’ve seen the autofocus speeds of the Pixel 2, assisted by not only dual pixel AF but also laser AF. You’ve seen the results of HDR+ image stacking, which will only get better with time. We’ve seen dual lens units that give you the focal lengths of a camera body and two primes, and we’ve seen the ability to selectively blur backgrounds and isolate subjects like the pros do.

Below is a shot from the Pixel 2 vs. a shot from a $4,000 full-frame body and 55mm F1.8 lens combo—which is which?

Full Frame or Pixel 2? Pixel 2 or Full Frame?

Yes, the trained—myself included—can pick out which is the smartphone image. But when is the smartphone image good enough?

Smartphone cameras are not only catching up with traditional cameras, they’re actually exceeding them in many ways. Take for example…

Creative control…

The image below exemplifies an interesting use of computational blur. The camera has chosen to keep much of the subject—like the front speaker cone, which has significant depth to it—in focus, while blurring the rest of the scene significantly. In fact, if you look at the upper right front of the speaker cabinet, you’ll see a good portion of it in focus. Beyond a certain point, the cabinet transitions abruptly, yet smoothly, into significant blur.

The camera and software have chosen to keep a significant depth-of-focus around the focal plane before significantly blurring objects farther from it. That’s the beauty of computational approaches: while F1.2 lenses can usually keep only one eye in focus—much less the nose or the ear—computational approaches allow you to choose how much you wish to keep in focus, while blurring the rest of the scene to a degree that traditional optics couldn’t achieve without losing much of your subject.

B&W speakers at sunrise. Take a look at the depth-of-focus vs. depth-of-field in this image. If you look closely, the entire speaker cone and a large front portion of the black cabinet is in focus. There is then a sudden, yet gradual blur to very shallow depth-of-field. That’s the beauty of computational approaches: one can choose extended (say, F5.6 equivalent) depth-of-focus near the focus plane, but then gradually transition to far shallower – say F2.0 – depth-of-field outside of the focus plane. This allows one to keep much of the subject in focus, but achieve the subject isolation of a much faster lens.
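One way to picture this ‘extended depth-of-focus, then fast falloff’ behavior is as a mapping from scene depth to blur radius. The function below is purely illustrative; the parameter names and values are my own assumptions, not Google’s or Apple’s tuning.

```python
import numpy as np

def blur_radius(depth, focus=1.0, dof=0.3, px_per_m=20.0, max_r=12.0):
    """Map scene depth (meters) to a synthetic blur radius (pixels).

    Everything within +/- `dof` of the focus plane stays sharp (the
    extended depth-of-focus); beyond that, blur ramps up linearly at
    `px_per_m` pixels per meter, capped at `max_r` -- emulating a far
    shallower depth-of-field than a conventional lens would pair with
    such a deep zone of sharpness.
    """
    d = np.abs(np.asarray(depth, dtype=float) - focus)
    return np.clip((d - dof) * px_per_m, 0.0, max_r)

# Subject at 1m stays sharp across its own depth; the background melts away
depths = np.array([1.0, 1.2, 1.8, 4.0])
radii = blur_radius(depths)   # roughly [0, 0, 10, 12] pixels
```

A real renderer would then apply a spatially varying blur using these per-pixel radii, driven by the depth map from the dual pixels.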

Surprise and delight…

Digital assistants. Love them or hate them, they will be a part of your future, and they’re another way in which smartphone photography augments and exceeds traditional photography approaches. My smartphone is always on me, and when I have my full-frame Sony a7R III with me, I often transfer JPEGs from it to my smartphone. Those images (and 720p video proxies) automatically upload to my Google Photos account. From there any image or video that has my or my daughter’s face in it automatically gets shared with my wife without my so much as lifting a finger.

Better yet? Often I get a notification that Google Assistant has pulled a cute animated GIF from my movie it thinks is interesting. And more often than not, the animations are adorable:

Splash splash! in Xcaret, Quintana Roo, Mexico. Animated GIF auto-generated from a movie shot on the Pixel 2.

Machine learning allowed Google Assistant to automatically guess that this clip from a much longer video was an interesting moment I might wish to revisit and preserve. And it was right. Just as it was right in picking the moment below, where my daughter is clapping in response to her cousin clapping at successfully feeding her… after which my wife claps as well.

Claps all around!

Google Assistant is impressive in its ability to pick out meaningful moments from photos and videos. Apple takes a similar approach in compiling ‘Memories’.

But animated GIFs aren’t the only way Google Assistant helps me curate and find the important moments in my life. It also auto-curates videos that pull together photos and clips from my videos—be it from my smartphone or media I’ve imported from my camera—into emotionally moving ‘Auto Awesome’ compilations:

At any time I can hand-select the photos and videos, down to the portions of each video, I want in a compilation—using an editing interface far simpler than Final Cut Pro or Adobe Premiere. I can even edit the auto-compilations Google Assistant generates, choosing my favorite photos, clips and music. And did you notice that the video clips and photos are cut down to the beat in the music?

This is a perfect example of where smartphone photography exceeds traditional cameras, especially for us time-starved souls who hardly have the time to download our assets to a hard drive (not to mention back up said assets). And it’s a reminder that traditional cameras that don’t play well with automated services like Google Photos and Apple Photos will be left behind by simpler services that surprise and delight the majority of us.

The future is bright

This is just the beginning. The computational approaches Apple, Google, Samsung and many others are taking are revolutionizing what we can expect from devices we have in our pockets, devices we always have on us.

Are they going to defy physics and replace traditional cameras tomorrow? Not necessarily, not yet, but for many purposes and people, they will offer pros that are well-worth the cons. In some cases they offer more than we’ve come to expect of traditional cameras, which will have to continue to innovate—perhaps taking advantage of the very computational techniques smartphones and other innovative computational devices are leveraging—to stay ahead of the curve.

But as techniques like HDR+ and Portrait Mode and Portrait Lighting have shown us, we can’t just look at past technologies to predict what’s to come. Computational photography will make things you’ve never imagined a reality. And that’s incredibly exciting.

Hungry for more? We’ve updated our standard studio scene to allow you to compare the Pixel 2 and iPhone X against each other and other cameras in Daylight and Low Light, as well as updated our galleries. Follow the links below:

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on DPReview on TWiT: tech trends in smartphone cameras

Posted in Uncategorized

 

Samsung unveils massive 30TB solid state drive, the world’s largest SSD

20 Feb
Photo: Samsung

Samsung has reached another solid state storage milestone with its newly announced Serial Attached SCSI PM1643 30TB SSD. The drive, which was developed for enterprise use, has double the capacity of the 15.36TB SSD Samsung introduced in early 2016. The company built the new drive from 1TB NAND flash packages, each a stack of sixteen 512Gb V-NAND chips, enabling it to offer a 30TB capacity in a 2.5-inch form factor.

“With our launch of the 30.72TB SSD,” Samsung’s Jaesoo Han explained, “we are once again shattering the enterprise storage capacity barrier, and in the process, opening up new horizons for ultra-high capacity storage systems worldwide.”

In addition to hitting a record capacity, Samsung explains that its PM1643 is the first SSD to feature Through Silicon Via (TSV)-applied DRAM, which totals 40GB in this model. The company also managed to include an endurance level that supports writing 30.72TB of data to the drive every day for five years (the warranty period) without failure, an error correction code (ECC) algorithm for reliability, software offering sudden power failure and metadata protection, and sequential read/write speeds up to 2,100MB/s and 1,700MB/s.
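The endurance and performance figures are easy to sanity-check. One full drive write per day over the five-year warranty works out to roughly 56 petabytes of rated writes, and at the quoted 1,700MB/s sequential write speed, filling the drive once takes about five hours (decimal units assumed throughout):

```python
capacity_tb = 30.72
dwpd = 1                      # full drive writes per day
warranty_days = 5 * 365

# Total rated writes over the warranty period
total_writes_pb = capacity_tb * dwpd * warranty_days / 1000   # ~56 PB

# Time to fill the drive once at the quoted sequential write speed
write_speed_mb_s = 1700
hours_per_fill = (capacity_tb * 1e6 / write_speed_mb_s) / 3600  # ~5 hours
```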

Photo: Samsung

Samsung plans to offer other versions of this drive with capacities ranging from 800GB to 15.36TB. As for the 30.72TB model, the South Korean company explains that it started producing “initial quantities” of the drive last month, with lineup expansion planned for later in 2018.

The drive price isn’t listed, but we’re less excited about this specific drive (since it’s an enterprise drive) and more excited about the tech trickling down into consumer-focused higher capacity SSDs that photographers and videographers can use for backups.

Read the full press release below for more details about these drives.

Samsung Electronics Begins Mass Production of Industry’s Largest Capacity SSD – 30.72TB – for Next-Generation Enterprise Systems

New ‘PM1643’ is built on latest 512Gb V-NAND to offer the most advanced storage, featuring industry-first 1TB NAND flash package, 40GB of DRAM, new controller and custom software

Korea on February 20, 2018 – Samsung Electronics, the world leader in advanced memory technology, today announced that it has begun mass producing the industry’s largest capacity Serial Attached SCSI (SAS) solid state drive (SSD) – the PM1643 – for use in next-generation enterprise storage systems. Leveraging Samsung’s latest V-NAND technology with 64-layer, 3-bit 512-gigabit (Gb) chips, the 30.72 terabyte (TB) drive delivers twice the capacity and performance of the previous 15.36TB high-capacity lineup introduced in March 2016.

This breakthrough was made possible by combining 32 of the new 1TB NAND flash packages, each comprised of 16 stacked layers of 512Gb V-NAND chips. These super-dense 1TB packages allow for approximately 5,700 5-gigabyte (GB), full HD movie files to be stored within a mere 2.5-inch storage device.

In addition to the doubled capacity, performance levels have risen significantly and are nearly twice that of Samsung’s previous generation high-capacity SAS SSD. Based on a 12Gb/s SAS interface, the new PM1643 drive features random read and write speeds of up to 400,000 IOPS and 50,000 IOPS, and sequential read and write speeds of up to 2,100MB/s and 1,700 MB/s, respectively. These represent approximately four times the random read performance and three times the sequential read performance of a typical 2.5-inch SATA SSD*.

“With our launch of the 30.72TB SSD, we are once again shattering the enterprise storage capacity barrier, and in the process, opening up new horizons for ultra-high capacity storage systems worldwide,” said Jaesoo Han, executive vice president, Memory Sales & Marketing Team at Samsung Electronics. “Samsung will continue to move aggressively in meeting the shifting demand toward SSDs over 10TB and at the same time, accelerating adoption of our trail-blazing storage solutions in a new age of enterprise systems.”

Samsung reached the new capacity and performance enhancements through several technology progressions in the design of its controller, DRAM packaging and associated software. Included in these advancements is a highly efficient controller architecture that integrates nine controllers from the previous high-capacity SSD lineup into a single package, enabling a greater amount of space within the SSD to be used for storage. The PM1643 drive also applies Through Silicon Via (TSV) technology to interconnect 8Gb DDR4 chips, creating 10 4GB TSV DRAM packages, totaling 40GB of DRAM. This marks the first time that TSV-applied DRAM has been used in an SSD.

Complementing the SSD’s hardware ingenuity is enhanced software that supports metadata protection as well as data retention and recovery from sudden power failures, and an error correction code (ECC) algorithm to ensure high reliability and minimal storage maintenance. Furthermore, the SSD provides a robust endurance level of one full drive write per day (DWPD), which translates into writing 30.72TB of data every day over the five-year warranty period without failure. The PM1643 also offers a mean time between failures (MTBF) of two million hours.

Samsung started manufacturing initial quantities of the 30.72TB SSDs in January and plans to expand the lineup later this year – with 15.36TB, 7.68TB, 3.84TB, 1.92TB, 960GB and 800GB versions – to further drive the growth of all-flash-arrays and accelerate the transition from hard disk drives (HDDs) to SSDs in the enterprise market. The wide range of models and much improved performance will be pivotal in meeting the growing storage needs in a host of market segments, including the government, financial services, healthcare, education, oil & gas, pharmaceutical, social media, business services, retail and communications sectors.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Samsung unveils massive 30TB solid state drive, the world’s largest SSD

Posted in Uncategorized

 

Easy Color Grading With LUTs and Luminar 2018

20 Feb

Focusing on color can help you communicate style and emotion. This approach is often referred to as color grading.

Color grading versus color correction

You may have wondered how this differs from color correction, which is more of a technical adjustment. A tungsten bulb, for example, will produce a color shift in your images that’s warmer than what you’re accustomed to seeing with your eyes. Often you want to adjust that hue, cooling it off a bit so that it appears more natural. That’s a correction.

Color grading, on the other hand, leans toward the artistic. You may want to add or enhance orange tones and teals to create a mood similar to what one would experience in the movies. Exact reality isn’t the goal. It’s more about a creative look that elicits a feeling.

Here’s a simple example. Compare these two portraits. The first picture seems perfectly fine. The rendered colors are similar to what we would perceive if standing there during capture.

Color Corrected Portrait - color grading in Luminar 2018

A reasonably color correct portrait.

The second image is color graded to communicate a style, a look. And even though it isn’t natural by everyday lighting standards, it’s interesting – and probably more engaging than the “correct” color version.

Color Graded Portrait - color grading using LUTs

This version was color graded in Luminar 2018 using Chrono-Steel LUT by Lutify.me.

All image editors are equipped to correct color. But some are better than others at providing the means to manipulate it stylistically. Luminar 2018 is one of those creative applications.

The Power of LUTs

Lookup Tables (LUTs) sound like a technical adjustment. And indeed there is plenty of color science at work under the hood. They are used to precisely shift colors from one spot to another. But those shifts can be stored in a container, such as a “.cube” file, that can be used to color grade an image.

So even though LUTs are precise color science, their recipes can be wonderfully artistic.
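Conceptually, a 3D LUT is just a cube of replacement colors indexed by the input RGB value. The sketch below builds an identity LUT, nudges its blue channel, and applies it with nearest-neighbour lookup. It is a toy model of what a “.cube” file encodes; real implementations parse the file and interpolate trilinearly between lattice points, and the “teal push” grade here is my own invention, not one of Luminar’s.

```python
import numpy as np

def make_identity_lut(size=17):
    """Identity 3D LUT: every RGB value maps to itself."""
    g = np.linspace(0.0, 1.0, size)
    b, gg, r = np.meshgrid(g, g, g, indexing="ij")
    # Stored as (B, G, R, 3), echoing .cube files where red varies fastest
    return np.stack([r, gg, b], axis=-1)

def apply_lut(rgb, lut):
    """Nearest-neighbour lookup (real tools interpolate trilinearly)."""
    size = lut.shape[0]
    idx = np.clip(np.rint(np.asarray(rgb) * (size - 1)).astype(int), 0, size - 1)
    return lut[idx[..., 2], idx[..., 1], idx[..., 0]]

lut = make_identity_lut()

# A made-up "teal push" grade: lift the blue channel by 20%
teal_push = lut.copy()
teal_push[..., 2] = np.clip(lut[..., 2] * 1.2, 0.0, 1.0)

graded = apply_lut(np.array([0.5, 0.5, 0.5]), teal_push)  # -> [0.5, 0.5, 0.6]
```

Because the grade lives entirely in the table, any look—film emulation, teal-and-orange, black and white—can be shipped as one small file and applied identically in any LUT-aware editor.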

Las vegas comparison - Easy Color Grading With LUTs and Luminar 2018

A side by side comparison of this Las Vegas scene shows how color grading can breathe life into an image.

The original version of this Las Vegas scene was serviceable, but certainly not exciting. Nor did it convey the majesty of the building. By color grading with a teal and orange LUT, suddenly the scene comes to life.

Does it look exactly like that in reality? No. But does the image feel like Las Vegas? Definitely more than the original.

Applying LUTs in Luminar 2018

Your gateway to this type of color grading in Luminar 2018 is the LUT Mapping filter. You can add this adjustment to your workspace by clicking the Filters button and choosing LUT Mapping from the Professional category.

Adding LUT Mapping - Easy Color Grading With LUTs and Luminar 2018

LUT Mapping is available via the Filters menu in Luminar 2018.

Once the filter has been added to the workspace, click on the popup menu inside the panel to reveal the built-in LUTs (such as Tritone and Kodachrome), or to access LUT files that you may have already added to your computer via Load Custom LUT File.

Before After Color Grading

LUT Mapping Filter - Easy Color Grading With LUTs and Luminar 2018

Luminar comes with built-in LUTs, or you can add your own.

Once you select a LUT, the image is color graded according to the LUT’s recipe. You can fine-tune the result using the Amount, Contrast, and Saturation sliders. A good companion filter for LUT-based color grading is HSL, which provides adjustments for hue, saturation, and luminance.

HSL Filter - Easy Color Grading With LUTs and Luminar 2018

Tips for Effective Color Grading with LUTs

Creating a separate adjustment layer for your color grading provides lots of flexibility. The base layer is used for basic adjustments via the Develop filter and the other tools that you need to establish a good range of tones. The adjustment layer (Layers > Add New Adjustment Layer) contains the LUT Mapping, HSL, and other creative filters. You can then use the blend modes and the opacity slider for precise control over the grading.

Custom Preset - Easy Color Grading With LUTs and Luminar 2018

Saving your LUT as a custom preset provides you with a preview thumbnail as well.

Another handy technique is to save your LUT color grading as a custom preset. Luminar makes this easy. Once you achieve a look that you want to use again, save it as a custom preset. Use the “Save Filters Preset” button in the lower right corner of Luminar. This provides the added benefit of a preview thumbnail for the LUT and its accompanying adjustments. You can create custom presets for all of your favorite LUTs. That’s a real time saver.

LUTs are also terrific for film emulation. There are LUTs for Kodachrome, Polaroid, and B&W film looks. This is a high-quality way to build your own Instagram-like filters, with a pinch of your own creativity added.

Downloading and Organizing More LUT Files

Skylum maintains a LUT downloads page that you can access through Luminar. Click on “Download New LUT Files” in the LUT Mapping popup menu. This will take you to the Skylum LUT catalog.

Download New LUTs - Easy Color Grading With LUTs and Luminar 2018

Once you download a new collection of LUTs, store them in a place that you will remember, such as a LUTs folder in Pictures or Documents. You’ll have to navigate there when you use the “Load Custom LUT File” command in Luminar. The application doesn’t store LUTs for you, so you have to remember where they are.

Bonus tip! Store your custom LUTs in Dropbox so you can access them from any computer.

Save Your Work

If you’re using Luminar 2018 as a standalone app (as opposed to a plug-in or editing extension), then save your favorite color gradings as a Luminar file. This allows you to return to the image and its settings at a future date to continue your work, or to change the color grading to another style.

Make it Look Easy

Your viewers may not realize the techniques that you used to create the enticing color schemes in your images. What they will notice are your style and creativity. Using LUTs can contribute greatly to that pursuit.

Disclaimer: Skylum (formerly Macphun) is a paid partner of dPS.

The post Easy Color Grading With LUTs and Luminar 2018 by Derrick Story appeared first on Digital Photography School.


Digital Photography School

 
Comments Off on Easy Color Grading With LUTs and Luminar 2018

Posted in Photography