
Posts Tagged ‘Feature’

The Google Pixel 4 Will Feature Two Cameras Plus Enhanced Night Sight

19 Oct

The post The Google Pixel 4 Will Feature Two Cameras Plus Enhanced Night Sight appeared first on Digital Photography School. It was authored by Jaymes Dempsey.

 

The Google Pixel 4 Will Feature Two Cameras Plus Enhanced Night Sight

Earlier this week Google announced the long-awaited Pixel 4, which promises to take smartphone photography to a whole new level.

This comes in the wake of Apple’s iPhone 11 Pro announcement last month, which saw the debut of a triple-camera setup and features such as Night Mode.

In other words, the Pixel 4 is a competitor in an intense fight to create the best cameras, the best lenses, and the best camera software.

So what does the Google Pixel 4 offer?

Let’s take a closer look:

First, the Google Pixel 4 features a dual-camera setup, offering the usual wide-angle lens alongside a new 2X telephoto option. This isn’t unique (Apple has regularly included “telephoto” lenses going all the way back to the iPhone 7 Plus), but it is a nice addition for those who need a bit more reach. You can use the 2X lens for tighter portraits, and it’s also useful for street photography, where you often need to photograph subjects from a distance.

Interestingly, Google has decided to keep the wide-angle camera at 12 megapixels, but has packed in a 16-megapixel sensor for the telephoto camera. While plenty of photographers will be excited by this jump in resolution, it remains to be seen whether such tiny pixels will result in significant noise.

The dual-camera setup should also improve Google’s Portrait Mode, and Google has promised more natural background blur and very precise edges (e.g., when dealing with hair). Truthfully, I’m skeptical; I’ve yet to see a Portrait Mode photo that looks perfect on any smartphone camera. But I’ll wait until I see the results from the Pixel 4 before judging.

One cool new feature that will debut in the Pixel 4 is Live HDR. When you go to capture an HDR photo, you’ll be able to see a live HDR preview on your smartphone screen; this should give you a sense of what you can expect from the HDR+ effect.

Finally, if you enjoy doing astrophotography, you’re in luck: The Pixel 4 offers an improved Night Sight mode, in which you can take stunning photos of the night sky. It works by taking a series of long exposures, before blending them together to create a beautiful final photo. Note that you’ll need a tripod or other method of stabilization to get sharp astrophotography shots.
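Multi-frame night modes like this typically build on frame stacking: averaging several aligned exposures suppresses random sensor noise. Google hasn’t published its exact pipeline, so the following is only a minimal sketch of the general idea in Python with numpy; the function name and the simulated scene are illustrative, not Google’s code:

```python
import numpy as np

def stack_frames(frames):
    """Average a burst of aligned exposures.

    Averaging N frames cuts random sensor noise by roughly sqrt(N),
    which is the core idea behind multi-frame night modes.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a dim, flat scene captured 16 times with additive sensor noise.
rng = np.random.default_rng(0)
scene = np.full((4, 4), 10.0)
frames = [scene + rng.normal(0.0, 5.0, scene.shape) for _ in range(16)]
result = stack_frames(frames)
```

The averaged result sits much closer to the true scene than any single noisy frame, which is why these modes still need a tripod: the frames must stay aligned while they accumulate.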

Overall, the Google Pixel 4 offers some impressive new features, even if none of them feel totally groundbreaking. Until now, the Pixel lineup has dominated low-light shooting, and the enhanced Night Sight suggests that Google plans to keep building on this success.

The Google Pixel 4 is currently available for preorder starting at $799 USD and will hit the shelves on October 24.

You can check out CNET’s first look video to get a better sense of the Google Pixel 4.


Are you interested in the Google Pixel 4? Let us know in the comments!



Digital Photography School

 

Posted in Photography

 

iPhone 11’s coolest photo feature is the hardest one to find

05 Oct
Cinerama leaning back – a natural result of pointing my camera upwards to capture the whole building.

Anyone who has stood at ground level and taken a photo of a building across the street has likely seen the effects of perspective distortion – you tilt your camera back to bring the whole building into frame, causing the straight lines of the building to appear to be ‘leaning back.’ Tilt-shift lenses are designed for exactly this problem, but they’re expensive, specialist optics.

More often, this effect will be corrected in software, but doing so usually requires the user to stretch the top of the image and crop to avoid the blank spaces this creates at the bottom of the frame. Apple is tackling this problem with a unique approach in the iPhone 11: by capturing more data outside of the frame.
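The correction itself is a projective warp: map the trapezoid that the leaning building occupies back to a rectangle. Apple’s actual pipeline isn’t public, but the geometry can be sketched with a direct linear transform in plain numpy (the point coordinates below are made up for illustration):

```python
import numpy as np

def homography(src, dst):
    """Solve for the 3x3 homography H that maps each src point to the
    corresponding dst point (direct linear transform, 4 point pairs)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of this 8x9 system.
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, x, y):
    """Apply a homography to a single 2D point."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# A 100-unit-square facade photographed 'leaning back': its top edge
# appears pinched inward. Map the trapezoid back to the square.
trapezoid = [(20, 0), (80, 0), (100, 100), (0, 100)]
square = [(0, 0), (100, 0), (100, 100), (0, 100)]
H = homography(trapezoid, square)
```

Warping an image through `H` straightens the verticals, but without pixels beyond the frame edges the corners of the result are empty; that gap is exactly what the iPhone fills with the wider camera’s data.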

I don’t know, I just like boring photos I guess?

For whatever reason, I’m drawn to the types of photos where perspective distortion is painfully obvious – signs, sides of buildings, etc. – but I’m horrible at lining them up correctly. Usually, I find out going through my images later that I wasn’t squared up to my subject even though I thought I was. Horizons are slightly askew, or I was leaning back slightly. Apple, it seems, has heard my cries.

When you’re shooting with the standard camera (with a focal length equivalent to about 26mm), the iPhone 11 will also capture image data from the ultra-wide (13mm equiv.) camera – a feature that is referred to in the settings menu as “Photos Capture Outside the Frame.” If you’re shooting on the telephoto camera of the 11 Pro, it’ll capture additional information from the standard camera.

That extra information is saved alongside your photo. When you edit that image in the native camera app, you’ll be able to use the extra data as you rotate and manipulate your image – a big help when you’re trying to fix crooked lines in a photo.

As you make image adjustments, you’ll see the extra data captured by the ultra-wide lens. This additional image information is available for 30 days.

The phone can use that information to automatically re-crop photos too. In the camera settings menu there’s an option to “Auto Apply Adjustments.” You’ll know that auto adjustments have been applied to an image when it shows a blue “Auto” icon above your captured photo. We’ve noticed this feature being employed when the phone detects a human subject cut off at the edge of the frame.

And even for many photos that aren’t automatically adjusted, the stock camera app will suggest tweaks when brought into edit. For example, take that image of the building that’s leaning back – if you edit it in the iPhone’s camera app and engage the crop tool, it will automatically correct for perspective distortion and use the extra image data it saved to fill in the areas at the edges of the frame that would otherwise need to be cropped out.

Bringing the image into the iPhone’s native editing app, then pressing the ‘crop’ option will take you to this view. The yellow ‘auto’ icon appears at the top of the image if there’s a suggested crop, as there is in this example.
The same adjustments can be applied in Photoshop, but without that extra image information at the sides of the frame you’ll need to crop in to avoid including blank space in your final image.
The iPhone goes beyond these limitations with that extra image data. In addition to correcting perspective, you can creatively re-crop your image to preserve details at the edge of the frame – and even include objects that were well outside of the frame in your initial standard image.

I don’t think many people will discover this feature, and that’s a shame. It’s not just helpful for correcting distortion and fixing crooked horizons; it’s also useful if you simply want to re-crop an image after the fact. However, it will only be discovered by those who enable the ‘capture outside the frame’ feature and attempt to crop an image, which I imagine is a fraction of the many people who use the camera day in and day out.

Regardless of how widely used this feature will be, what Apple is doing is clever. Photoshop’s Content-Aware Fill feature does something similar – it will fill in missing data when rotating or stretching an image – but instead of using data from a wider lens, it’s filling in those empty spaces based on educated guesses. Apple’s approach is just one more way in which smartphone manufacturers are using data to their advantage – to the advantage of boring photo fans everywhere.

Articles: Digital Photography Review (dpreview.com)

 

Posted in Uncategorized

 

Instagram releases its ‘Restrict’ shadowbanning feature for all users

05 Oct

Instagram has fully released the ‘Restrict’ shadowban feature it first introduced as a test in July. The tool enables an Instagram user to restrict other accounts from posting content on and sending messages to their own account. As Instagram first explained this summer, Restrict is intended to limit the reach of bullies without fully blocking them, an action that may make the bullying worse.

The philosophy behind shadowbans on Instagram is simple: many users, particularly teens, face bullying from peers they know in real life, such as classmates. Blocking a bully on Instagram may cause that bully to increase their torment of the user in real life, which is why many users avoid blocking them.

In addition, and more broadly speaking, blocking an account that is posting abusive content may simply drive the bully to create a new account after the first one is blocked. For these reasons, blocking is not always the ideal way to prevent problematic comments and messages from being directed at an account.

Restrict is a solid alternative, enabling Instagram users to instead limit an unwanted account in a way that doesn’t alert the bully. Comments published by a restricted account are hidden by default and any private messages sent from the restricted account will be automatically sent to the recipient’s Message Request inbox. These restricted DMs can be read, but the sender won’t be alerted to the fact that their message was viewed.

Restrict is now available to all Instagram users.


 

Posted in Uncategorized

 

Useful Ways You Can Use the Olympus Live Composite Feature for Long Exposures

16 Sep

The post Useful Ways You Can Use the Olympus Live Composite Feature for Long Exposures appeared first on Digital Photography School. It was authored by Mark C Hughes.

Olympus cameras include several unusual features for long-exposure photography. Aside from classic bulb-style shooting, there are two dedicated long-exposure modes: Live Time and Live Composite. Although related, these two functions treat long exposures quite differently and can produce very interesting results. Both are fascinating tools for photographers who like to experiment, and both use your camera’s computational features to build an image in a new way. Although we will briefly discuss Live Time, this article primarily focuses on the Olympus Live Composite feature.

The Live Composite feature on Olympus cameras lets you mix light sources in a long exposure.

Live Time

Live Time is like the bulb function on old film cameras, which held the shutter open for as long as the release was depressed, but with a twist: on modern cameras, you open the shutter with one press and close it with a second. As with any bulb function, you hold the shutter open for as long as you want, without setting a time on the camera.

In the film days, you would just guess how long you wanted (or use a light meter and a stopwatch). With most digital cameras, there is a function to allow you to hold the shutter open for an extended time. However, for many makes of cameras, you won’t see the image until the camera has closed the shutter, taken a noise reduction image, and then processed the image.

Live Time in Olympus cameras is a little different. It allows you to see the image on your display developing during the process while the shutter remains open. As the exposure lengthens, you see the image form as more and more light gets added to the entire image. The image gets brighter on the back panel, and it is really cool to see the image created live.

This process allows you to decide when you have held the shutter open long enough. Like the old bulb settings, you decide how long the image progresses. You press to start the exposure and press to stop.

In a long exposure, light sources simply keep getting brighter.

Fundamentally, Live Time is just a manually extended exposure time that allows you to watch the image develop as you take it. It is still a pretty cool feature.

Problems with long exposure photography

The trouble with Live Time (and any long exposure, with any camera for that matter) is that bright areas get brighter faster than dark areas.

Everything in the frame is treated the same, so dim, infrequent events in a low-light scene will be overwhelmed by any point sources of light: by the time the dark areas are properly exposed, the lit areas have usually accumulated far too much light.

A still image of a fire provides lots of detail and freezes the action of the flames.

Long exposures tend to smear the light as the light sources persist.

Enter Live Composite

Live Composite is a feature unique to Olympus cameras; no other camera maker currently offers it.

Live Composite is similar to Live Time in that you are taking a longer exposure, but it is additive only for new light, not existing light. Live Composite takes a base image and then adds to it only light that was not present in that base image. Light sources visible in the original reference image do not get brighter; only new lights, or lights that move within the frame, appear in the final image.
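In post-processing terms this behaves like a ‘lighten’ blend: each pixel keeps the brightest value seen so far, so static lights never accumulate. A minimal emulation in Python with numpy (the sample arrays are made up for illustration; Olympus does this in-camera):

```python
import numpy as np

def live_composite(base, frames):
    """'Lighten'-style compositing: keep the base exposure, and let each
    new frame contribute only pixels brighter than what is already
    there, so static light sources never accumulate."""
    composite = base.astype(np.float64).copy()
    for frame in frames:
        composite = np.maximum(composite, frame.astype(np.float64))
    return composite

base = np.array([[50.0, 200.0], [50.0, 50.0]])    # 200 = a streetlight
flash = np.array([[255.0, 100.0], [50.0, 50.0]])  # a lightning strike
result = live_composite(base, [flash])
```

The streetlight pixel stays at its base value while the lightning pixel jumps to full brightness, which is exactly why a lit background doesn’t blow out over a long Live Composite session.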

A Live Composite of a campfire shows that the entire image does not keep getting brighter.

The mechanics of Live Composite

Using Live Composite is a two-step process. First, you take a base (reference) exposure, which forms the bottom layer of the composite. The camera then takes additional exposures at intervals, adding only new light in the field of view.

This allows you to take a static image of a colorful background under low light conditions and add only new light sources.

Just like Live Time, you get to watch the image develop right before your eyes.

After the base image is taken, only new light sources (such as a lightning bolt) show up.

How to use it

Turning on Live Composite is a bit hidden on your Olympus camera. It is a type of manual-mode setting, so that is where you find it.

However, before you use Live Composite, you need to decide a few key parameters for your base composite image: the initial shutter duration, ISO, and aperture.

You set the shutter duration in the menu (outside the function itself) before setting the camera to shoot. However, you set the ISO and aperture as you go.


Turning on Live Composite varies a little between cameras (the EM1X does it slightly differently), but for most Olympus cameras, you simply set the mode selector dial to Manual (M) and adjust the shutter time past the 60-second setting. At that point you cycle through Bulb, LiveTime, and then LiveComp; LiveComp is the one you want for Live Composite. On the EM1X, you set the mode selector dial to B (bulb) and then turn to LiveComp. Everything else is the same.

Now you set your ISO and aperture. These, combined with the shutter duration you set in the menu for cycling the images, determine your base composite image. For instance, if you set the shutter timing to 4 seconds, you might plan on an aperture of f/4 and ISO 800; those values will be used for the base reference image.

To activate Live Composite, set up your composition, focus your lens, and then press the shutter for the reference image. The composite is now ready to start.

Next, when you press the shutter button again, the image-creating process begins. The camera opens the shutter, compares each interval’s exposure to the base image, and adds any new light to the composite at the end of each cycle. The image changes and grows on your display as Live Composite progresses.

It is very cool to watch as your image develops.

Does Live Composite mean you can take images that you couldn’t before?

Lightning storms work incredibly well with Live Composite, especially if there are a lot of nearby light sources (such as streetlights).

Yes and no. You could take the images separately and combine them in post as a composite, but you could not have captured this as a single image. There are also certain types of images that are far easier to take with Live Composite than to achieve in a single exposure.

Live Composite also forces you to change your approach to certain images. As part of that change, it may actually take longer to take some images (because you need to create a base/reference every time), but you get the benefit of seeing if it is doing what you want.

What kind of image works well with Live Composite?

Several specific types of images can get the full benefit of Live Composite. These include star trails, lightning flashes, fireworks, night photography with bright lights present, and light painting.

You can take all of these in other ways, but using live composite allows you to see if the image is turning out how you want.

Most of these images require manual focus and manual exposure settings. All require some trial and error, and pretty much all benefit from a tripod. In theory you might manage without one, but in practice the need to keep the camera steady really limits those cases.

Star Trails

In astrophotography, taking an image of stars can be particularly daunting. This is because the earth is rotating and the stars are relatively dim. You need a fast enough shutter speed to freeze the motion of the stars, but you also need to leave the shutter open long enough for the star to appear in your image. If you leave the shutter open too long, you will see a streak or smear instead of a star. Leave it open even longer and the stars leave long, circular trails; in the northern hemisphere, these star trails appear to rotate around the North Star.
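A common rule of thumb for the ‘fast enough’ end of this trade-off is the so-called 500 rule (an approximation, not from this article): divide 500 by the full-frame-equivalent focal length to get the longest exposure, in seconds, before stars start to visibly trail.

```python
def max_exposure_500(focal_length_mm, crop_factor=1.0):
    """Approximate longest exposure (seconds) before stars visibly
    trail, per the '500 rule': 500 / full-frame-equivalent focal length."""
    return 500.0 / (focal_length_mm * crop_factor)

# A 25mm lens on a Micro Four Thirds body (2x crop) acts like 50mm.
limit = max_exposure_500(25, crop_factor=2.0)
```

Anything much longer than this limit begins to record trails, which is exactly what you want for star-trail images; Live Composite then lets those trails build without blowing out the rest of the frame.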

Star trails occur when you leave the shutter open for an extended period in an astrophotography shot; the stars create trails. This image was taken with the shutter open for 27 minutes.

With conventional digital cameras (or film cameras, for that matter), working at night can be a challenge. The shutter duration required to create star trails is long, and you can’t see what your image looks like until the entire exposure is complete. If you have made an error in focus or composition, you won’t see it until the process is finished. There are ways to combine star trails in post-processing, but Live Composite lets you do it in a single exposure.

With Live Composite, you can see the image develop. With star trails in particular, this lets you quickly decide whether you want a different setup or a different point of interest so that the trails work with your composition.

You can also have star trails show up over an illuminated object in the foreground.

Lightning

Another significant challenge for photography is capturing images of lightning, particularly in areas where there are light sources. As anyone who has attempted to take lightning images knows all too well, this is a difficult type of photography.

Lightning strike captured using the Olympus Live Composite feature.

The main difficulties of capturing lightning images are fourfold. First, lightning is difficult to schedule, so you have to wait for a storm to photograph. Second, depending on your position relative to the storm, you need a vantage point with a reasonably clear view (you need to be able to see the lightning from a distance) and a perspective that forms a reasonable composition. Common vantage points are across a field, across a valley, or from a high-rise building.

Next, you need to hope the lightning is not blocked or shrouded by rain (a common companion to lightning), which will interfere with your sightlines. Lightning often occurs at the leading and trailing edges of storms, but if you are at the wrong end, the lightning will simply light up the sky.

Finally, taking images at night always presents a focusing problem: in the dark, you can’t see what you are focusing on, and the light from the lightning hasn’t lit up your subject yet.

If you have the right conditions, you can take the base image and wait for the lightning strike and the image to develop. You just wait until lightning strikes in the field of view.

For a detailed guide on photographing lightning, see this Ultimate Guide to Photographing Lightning.

Fireworks

Fireworks are an interesting subject for Live Composite. It isn’t actually faster to take images this way, but I think it produces better ones. Fireworks require you to manually focus where you think the bursts will be, set the time, aperture, and ISO for a darker exposure than your camera will want, and then wait.

Fireworks also work well, although it is a two-step process for every image.

Without Live Composite, you simply open the shutter and wait. The image gets brighter, and the duration is based upon a little trial and error.

With Live Composite, you take the reference image and then wait. When the fireworks start, you hold the shutter and watch the screen. You press the shutter when you have the image you want.

Unfortunately, you need to take a new reference image each time, so you end up with additional steps. However, the results are at least as good as (and often better than) simply guessing an exposure time.

Night photography with bright lights and light painting

Car tail lights and headlights appear, but the street lights don’t get overly bright.

Night photography featuring bright lights, such as carnivals or street performers using fire at night, can turn out really well with Live Composite. So can images where lights are moving, but you don’t want the background to get brighter.

You can also use Live Composite for light painting. This is particularly useful if you have someone helping you when you are taking a light painting image.

Light painting is a technique for shooting under low-light conditions with a long exposure, lighting up the subject with controlled use of flashes or light sources. The neat thing about using Live Composite for light painting is that you can have existing lights in the frame while you paint, because only new light gets added. It also means that dark objects won’t show up, and the bright surfaces behind them remain illuminated.

Live Composite allows you to do light painting with light sources present in the image (not the greatest light painting image!).

Conclusion

Live Composite is a unique feature of Olympus cameras that allows you to make composite images in-camera that previously could only be created from two separate images and a lot of post-processing. It is another useful tool for your photography kit.

The Olympus Live Composite feature is a unique tool that lets you get creative with low-light images.

Have you used the Olympus Live Composite feature before? What are your thoughts? Share with us in the comments!

 



 

Posted in Photography

 

Google Photos adds Instagram Stories-style Memories feature, now offers canvas prints

13 Sep

Google Photos is expanding its feature set and has launched Memories, a slideshow feature that works in a similar way to Instagram Stories. Memories is designed to highlight special events, such as birthdays, trips, and holidays, and lets you revisit those moments without having to sift through stacks of duplicate images.

Photos and videos from previous years will be pinned to the top of your gallery for you to browse. Google uses machine learning to curate your Memories and pick the best shots out of many similar ones. Certain people or time periods can be blocked in case you’d rather not be reminded of them, and you can also deactivate the feature completely.

Memories can also be shared, both with the people who appear in them and with others. Google says that in the coming months it will make this process even easier: shared photos will be added to an ongoing private conversation, which should make it easier to keep track of the images you have shared with each other.

In addition, you can now search for text that appears in photographs or screenshots via the standard search function. This could be useful for those who store recipes or other text documents in image format in Google Photos.

U.S. users can now also order both standard photo prints and canvas prints directly from the app. Individual photo prints ordered through Google Photos are available for same-day pickup at over 11,000 CVS Pharmacy and Walmart locations. Canvas prints come in 8x8in, 12x14in, and 16x20in formats, with prices starting at $19.99; the app suggests the best photos to print, and canvas prints are delivered straight to your home.


 

Posted in Uncategorized

 

Facebook expands Face Recognition photo scanning, makes feature opt-in for new users

06 Sep

Facebook will no longer scan uploaded images for users’ faces by default, according to The Verge. The change will apply to new users who receive the Face Recognition setting as Facebook rolls it out globally over the next several weeks. The Face Recognition feature, which was first introduced in late 2017, will not be turned on unless the user chooses to enable it.

The facial recognition feature works by scanning images for users’ faces and alerting them about these images even if they’re not tagged in them. Users who receive one of these alerts can choose to tag themselves in the image, ignore it, or report the image when applicable.

In an update on the technology following the outcome of its federal appeal in August, Facebook has revealed that the facial recognition feature is rolling out to all users, but that they’ll need to manually enable it if they want the platform to scan other users’ images for their face. A notice in the user’s News Feed will alert that user when the feature becomes available on their account.

Users will be able to find the Face Recognition feature in their account’s Settings menu. Facebook users who currently have Face Recognition on their accounts can find instructions on disabling it here.


 

Posted in Uncategorized

 

Interview: Colin Goudie, feature film editor

24 Aug
In the edit suite for The Show, a new film written by and starring Alan Moore

Colin Goudie is a feature film editor with a career spanning over 35 years, editing everything from 16mm film to Digital 65mm, and has cut films in big studios, hotel rooms and even tin shacks. Most recently, he’s known for his work on Rogue One: A Star Wars Story and Monsters, though he doesn’t limit himself to big budget productions and can often be found editing lower budget documentaries and dramas too.

Colin talked with DPReview about editing movies back in the film days, and the transition to a fully digital workflow.


How did you start out as an editor?

I got a placement at Nene College of Art and did a foundation course for two years where I discovered 35mm stills photography. I graduated from Nene and got a placement at Bournemouth College of Art School of film, photography and TV production.

Instead of just concentrating on stills photography, in year one I would actually go to the second and third year students and ask them if they wanted a unit stills photographer and, of course, everybody always wanted a free unit stills photographer. I did that, and of course the thing about a film set is there’s always too much work to be done and not enough people to do it, so people would start asking you to hold this boom mic, could you alter those barn doors on that light, so you start to learn the process.

Of course, if you don’t mess up people give you more and more responsibility. By the time I went into my second year people were asking me to edit their films, so I jumped ahead by a year. Then I left film school and I managed to get a job at the BBC in their trainee assistant film editor course.

Do you remember the first time you saw non linear editing and what was your reaction?

I left the BBC after 10 years editing on film and tape, and the very first job I was offered was a tiny documentary series for the BBC about volunteering. They were 15-minute films, and the director of one of the episodes had recommended me to the series producer. The series producer phoned to talk to me; we got on well over the phone and he said I’d been highly recommended, but the film was cutting on Lightworks, and was I OK with that? I said yes.


I hung the phone up and then I picked it up right away and called directory enquiries and said, “Can you put me through to a company called Lightworks?” and they looked up the number and put me through. I said, “Hi, can you tell me what a Lightworks is and how can I learn to use one?” Lightworks ran a training course that was two days, and I went and spent a lot of money to do the course and then on the following week I started on the series.

I do remember that the other person – there were only two of us on the course – when the instructor talked about using the mouse, said she had never used a computer mouse in her life. I had because of my days at the BBC, because you had to do some stuff with the mouse and a bit of DOS work. Also, because I used an Atari ST as a gamer.

It’s really interesting: everybody tells you that playing computer games is bad. But actually it turned out not to be, because I understood about loading floppy disks, doing backups, using a mouse, what a DOS monitor was and how to type in code. All those things I learnt in gaming were really useful in an editing environment, and I did see other people struggle who had never done that.

I loved Lightworks straight off because for me it was finally a combination of the dexterity of film editing, the fact that I could cut in a single frame, the fact that I could drop it in two-thirds of the way through a film or a third of the way through the film. It meant I didn’t have to re-conform my tape like I had on the U-matic based editing system. It also meant that I could keep all my previous cuts of the film.

A young Colin editing 16mm. It was a very delicate process.

Editors who have worked with film often talk about the physicality of it. Do you miss it?

No, mainly because Lightworks felt physical when you were using that controller. It was lovely to have your pictures at better quality than they had been on U-matic in the early days of offline. Of course, you did come a cropper because you had to digitize at quite a low picture quality, so sometimes you would miss things.

I remember editing a show once, a World War II drama, and there was an extremely wide shot of the outside of St. Pancras station with hundreds of cast members walking through shot, and a shot like that eats up your data rate so things look very blocky.

It was only when they conformed the film and the sound crew were dubbing it – laying in the footsteps, looking at the film frame by frame – that somebody said, “Do you know there’s somebody walking through shot with a Sainsbury’s shopping bag?” Which for a WWII drama was a bit of an error. Nobody has ever seen it; I never saw it on the rushes because I only saw the digitized picture.

It was a learning curve. It really made me learn to look for those kinds of things even more on a heavily digitized picture. It also made me fight vociferously, on every production from that day forward, for more memory so I could digitize my rushes at the highest possible quality.

I learnt to do all my wide shots at a higher quality digitization rate than the close-ups, whereas up until that time it was always quicker and easier to do everything at the lowest possible rate, which is what everybody did because memory was so expensive in those days. A 9 gigabyte drive was £2,400. If you do the maths on that it was £90,000 in memory alone just for an edit, so you cut your cloth accordingly but those are all learning things. It was the early days of nonlinear.

Colin and Gareth Edwards on Rogue One.

You’ve edited in some quite odd places. Normally, people picture director and editor sitting in an air-conditioned room with coffee and croissants coming in, but that’s not always been your experience has it?

Certainly among my peers I seem to have edited in more weird locations than many. I have done my fair bit of editing in suites, but I’ve also done a lot of location editing. I have edited in Soho, at Pinewood and Shepperton, at Skywalker Ranch and the Lucasfilm Presidio in San Francisco, and also on the rim of a volcano – but it wasn’t active.

I like location editing for the access it gives me to the director and sometimes to the cast. It’s very useful if you befriend the actors on location and they become your mates, because when you go up to them and say, “Could you just record this line of dialogue for me so I can lay it into the edit and try out an idea with some new dialogue?” it’s much easier if you know them.

Certainly among my peers I seem to have edited in more weird locations than many.

Obviously, having access to your director when on location is great, because if I look in the afternoon at the rushes they shot that morning and spot something I need a pickup on, I can pop down to set straight away while they’ve still got the cast on that set. I can even show them a little rough edit that I might have done and say, “This is why I need this shot,” so everyone understands why you need it, and it’ll get bashed out for you really quickly – there’s no delay in that process.

Don’t you find yourself out on a limb in those situations?

The drawback of editing on location is lack of technical backup, so if something goes wrong and you’re on the other side of the world with a laptop and suddenly your card reader doesn’t talk to your drive or your camera media, then you’re really stuffed. I have made numerous phone calls to people in the UK pleading for some help down a dodgy phone line to get me out of a scrape or send me a new driver down the internet.

My joke, whenever I talked to the production manager, was that I don’t care about the quality of my hotel room so long as I’ve got a table, a chair, a stable mains electricity supply and the internet – and you wouldn’t believe how many times you can’t get all four of those things. That can really affect your workflow.

If you don’t have a stable electricity supply it’s impossible to run your drives because they just keep dropping out the whole time, so suddenly the Producers are asking, “Why didn’t you cut anything today?” and it’s, “I don’t have a mains supply to run the external hard drives on.”

You get used to taking portable drives that you can clone material onto and work off in the short term, until you can get to some sort of electricity supply and recharge your laptop.

Colin editing Monsters on location.

You’ve also shared your edit suite with some non-human occupants, haven’t you?

I’ve had an edit suite where a scorpion came in underneath the door.

I also worked on a BBC series which was edited inside London Zoo, in a large Portacabin. It was a real team experience with four other freelance editors, a bit like the old days at the BBC. The great thing about that was that the zoo keepers sometimes brought a few of the animals into the office. We had Coatis and even a Lynx come round; the Coatis ate our lunch (they loved yogurt) and the Lynx pawed the carpet. We all adopted them.

We did have a problem with some other wildlife at the Zoo. One day I came into my suite, turned on my Avid and then booted up all the others. When I got back to mine all the media was offline, and I checked – the same had happened in every other edit too. On investigation, the Technical Manager found that a rat had chewed through the fiber optic cable that ran between the edit suites and the main building where the drives were kept.

I’ve had an edit suite where a scorpion came in underneath the door.

Cheap plastic cable cost us days in edit time until it could be replaced by the armored variety. Cutting costs there actually didn’t work out too well.

If you could re-cut one film that you haven’t made what would it be?

I’d inter-cut Dunkirk with Darkest Hour and make one movie, because I think there’s a way of doing that. When I was a kid growing up, the epic films were ones like Lawrence of Arabia, which had incredible battle scenes and also really intelligent political dialogue scenes; these days it seems that you have to have one or the other.

Dunkirk is a spectacular looking action movie, but I don’t understand what’s going on plot-wise in terms of the history of Dunkirk. I mean, I know because I grew up watching World War II movies, and I’ve talked to my dad who fought in the war, but for a modern audience, in terms of teaching you about Dunkirk, it’s incomprehensible.

It doesn’t have any plot; it just has incredible action scenes. Darkest Hour is a really brilliantly made Churchillian biopic which gives you all the political background but has no scale – it’s almost all people in rooms talking.

I think that if you took the rushes for those two films it would be fascinating to see if you could have a crack at making one 3-hour Lawrence of Arabia style epic which told the story of Dunkirk and the story of Churchill – effectively the modern Lawrence of Arabia. You’d have to have the rushes, and you’d have to have carte blanche to do what you wanted, but I think it would be amazing to do.

X Wings and kilts, not often seen together.

Thinking about Star Wars, what’s your favorite film and why?

Empire Strikes Back, definitely.

I saw Star Wars (Episode IV) when it came out. I enjoyed it, but it wasn’t my favorite movie of all time. I think one of the reasons why is that it took six months to get from America to the UK, and by the time we actually sat in the cinema to watch it they’d shown so many clips of it on TV you kind of knew the story.

When Empire Strikes Back came out it was all shrouded in secrecy; there were no clips. Up until that movie I had not seen The Godfather Part II, so basically I’d never seen a good sequel. Every sequel that I’d ever seen was not as good as the original film; Jaws 2 was not as good as Jaws, and it was effectively the same film.

With Empire Strikes Back I sat down to watch what I thought was going to be Star Wars part II, and I saw a film that took things in a totally different direction. It introduced new characters, had Yoda – who has got to be one of the greatest cinematic characters of all time – and was not flagged up in the first movie at all.

Every sequel that I’d ever seen was not as good as the original film; Jaws 2 was not as good as Jaws, and it was effectively the same film.

Then there was the twist with what they did with the Luke Skywalker and Darth Vader characters, and the fact that it ended on a cliffhanger, which in those days no movie did. Now every movie does it. I just remember walking out of that movie theater on opening day – I saw it at 10:30 in the morning at the Odeon Leicester Square in 70mm – and all I wanted to do was go back in and see that movie again, which I couldn’t do because it was sold out.

I just thought it was incredible. The score had some of the tracks from the original movie, but the Imperial March was new for Empire Strikes Back – it’s not in the first movie, and it is one of the most defining pieces of music in cinematic history. So to get all those pieces was incredible, and the visual effects (VFX) were on a new level.

I understood how they did the VFX in the original Star Wars film, spaceships against black, because I really studied and knew about 2001 and how they did that. But, when they had sequences of snow speeders going across landscapes against snow in Empire… my little brain was going, “How did they do that?”

Colin editing at the BBC on tape.

Moving from 16mm to tape, and now on to digital, what’s been the biggest challenge?

I think the biggest technical difference during my career as an editor has been the introduction of videotape, and now digital, over film, because when we shot on film the average shooting ratio was 10 to 1. I remember when I made my University film I shot on a ratio of 1.25 to 1. I had 40 minutes of film stock to make a 25-minute finished drama.

When you learn that discipline, 10 to 1 seems like luxury. The skill set, the training that most directors had in the first part of my career, was that everybody had come up through film and learnt to shoot on a 10 to 1 shooting ratio, so they got not just the minimum amount of coverage needed, but sufficient coverage – not excessive coverage.

When I made my University film I shot on a ratio of 1.25 to 1. I had 40 minutes of film stock to make a 25-minute finished drama.

What happened with videotape was that things started to become a bit more ‘shoot everything that moves and we’ll sort it out in the edit’, and that’s a tendency that is even more prevalent with digital today. Traditionally capture was only in real time; now you don’t even have to wait that long, so shooting ratios have exploded. That is almost always to the detriment of what happens in the edit, because it means that the editor now has more footage to look at than there are hours in the day to look at it.

Unless you are on a very long schedule, you need the director to have gone through the material and come in with at least some notation – some honing down and clarification of what they’ve shot – so that you only need to look at the minimum amount of material to do the edit.

Quite often that’s what I’ll do, and when I’ve assembled the film I will then talk to them and say, “What else have you got that I’ve not seen?” We can then go back and look at it with a view to improving some of the sequences, because you just don’t have time to watch everything on a standard schedule.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Interview: Colin Goudie, feature film editor

Posted in Uncategorized

 

Canon announces the RF 85mm F1.2 L USM lens, the first RF lens to feature its BR optics

09 May

Canon has released the RF 85mm F1.2 L USM lens, a medium telephoto lens that’s the first RF lens – and only the company’s second lens – to feature Canon’s Blue Spectrum Refractive (BR) Optics. It will sell with a recommended price of $2,699. There’s no news of the DS ‘Defocus Smoothing’ variant Canon has said is in development.

The lens features a new optical design compared to the EF version of the 85mm F1.2 II USM lens that includes one aspherical element, one ultra-low dispersion (UD) glass element and the aforementioned BR optics, which is placed between the concave and convex lenses and is designed to eliminate longitudinal chromatic aberration (typically seen as purple and green fringing in front of and behind the focal plane). In total, it contains 13 elements in 9 groups. Canon has also included its Air Sphere Coating (ASC), which helps to minimize lens flare and ghosting.

The RF 85mm F1.2 L USM lens has an aperture range of F1.2 through F16, a minimum focus distance of 85cm (2.79ft), an 82mm front filter thread and is weather-resistant with a dedicated fluorine coating. Like other RF lenses, the RF 85mm F1.2 features a customizable control ring at the front of the lens that can be used to adjust exposure compensation, aperture, ISO or shutter speed.

Below are three high-resolution sample photos provided by Canon:


The lens measures 10.3cm (4.1″) in diameter by 11.7cm (4.6″) long, and it weighs 1.2kg (2.6lbs). Compared to its EF counterpart, it’s wider, longer and heavier.

The Canon RF 85mm F1.2 L USM lens is currently available to pre-order (Adorama, B&H) for $2,699 and is set to ship in June 2019.


Canon RF 85mm F1.2 L USM Specifications

Principal specifications
Lens type Prime lens
Max Format size 35mm FF
Focal length 85 mm
Image stabilization No
Lens mount Canon RF
Aperture
Maximum aperture F1.2
Aperture ring No
Number of diaphragm blades 9
Aperture notes Circular aperture blades
Optics
Elements 13
Groups 9
Focus
Minimum focus 0.85 m (33.46″)
Autofocus Yes
Motor type Ring-type ultrasonic
Full time manual Yes
Focus method Internal
Distance scale No
DoF scale No
Physical
Weight 1195 g (2.63 lb)
Diameter 103 mm (4.06″)
Length 117 mm (4.62″)
Sealing Yes
Power zoom No
Zoom lock No
Filter thread 82 mm
Hood product code ET-89
Tripod collar No

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Canon announces the RF 85mm F1.2 L USM lens, the first RF lens to feature its BR optics

Posted in Uncategorized

 

How to Use the New Enhance Details Feature in Lightroom

05 May

The post How to Use the New Enhance Details Feature in Lightroom appeared first on Digital Photography School. It was authored by Simon Ringsmuth.

One benefit of subscribing to Adobe Creative Cloud is that the software you use is updated regularly throughout the year. Some of these updates might not add much to your workflow, while others result in dramatic improvements to how you edit your images.

In February 2019, Adobe rolled out a powerful new option in Lightroom called Enhance Details. You may not have noticed since there’s nothing new in the interface that even indicates the feature is available.

However, this can dramatically increase the quality of your RAW files, particularly if you shoot with Fuji cameras, and it is certainly worth investigating to see if it could benefit you.

In order to understand what Enhance Details does, it’s important to know how RAW files work. When you shoot in RAW you aren’t storing finished images on your memory card or computer like when you shoot in JPEG. Instead you are storing the unprocessed sensor data from which your editing software creates an image when it’s exported from Lightroom, Capture One, or any other image-editing program.

What’s weird to wrap your head around, though, is the notion that when you browse through your image library in Lightroom you aren’t looking at the RAW files at all. You’re seeing previews that the software has generated, which give you a good idea of what the RAW files will look like when they are exported.

This is why RAW files look slightly different when you open them in different software. Capture One, Lightroom, Luminar…they all use different methods to interpret the data in a RAW file. This results in previews (what you see when you edit an image or browse your image library) that look different, as well as different final exported images.

This isn’t a RAW file. It’s a JPG file generated from RAW data, as interpreted by Lightroom.

Understanding RAW Files

So what does all this have to do with Enhance Details? It all goes back to how your RAW files are interpreted in Lightroom. Digital cameras collect Red, Blue, and Green data on their image sensors using an array of pixels that correspond to each color. When Lightroom loads a RAW file, it looks at the color data for each pixel and guesses what the resulting image should look like. This is what you see when you look at your images before exporting them.

This also means that Lightroom has to essentially fill in the details throughout each image, since you don’t see individual Red, Blue, and Green pixels when you zoom in on an image. You see pixels of all colors that Lightroom has created based on what it thinks they should look like, given the Red, Blue, and Green color data in the RAW file.

Unfortunately, this means that some elements of the scene that you photographed, particularly the very fine details, get lost in the transition from RAW file to Lightroom.

Different camera sensors contain different types of RGB patterns. When saving RAW images, all of the color information for each pixel is stored without the camera deciding how to interpret the data as an actual image.
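To make that interpolation step concrete, here is a minimal sketch of classic bilinear demosaicing in Python. This is not Adobe’s algorithm – Enhance Details replaces exactly this kind of simple neighbour averaging with an AI model – and the function names and RGGB layout are assumptions for illustration:

```python
import numpy as np

def _box3(a):
    """Sum of the 3x3 neighbourhood around each pixel (zero-padded edges)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def bilinear_demosaic_rggb(mosaic):
    """Interpolate a full RGB image from an RGGB Bayer mosaic (H x W array)."""
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))
    r_mask = np.zeros((h, w), bool)
    r_mask[0::2, 0::2] = True      # red photosites: even rows, even columns
    b_mask = np.zeros((h, w), bool)
    b_mask[1::2, 1::2] = True      # blue photosites: odd rows, odd columns
    g_mask = ~(r_mask | b_mask)    # green occupies the remaining half
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        channel = np.where(mask, mosaic, 0.0)
        # Average over the 3x3 neighbourhood, weighted by how many
        # neighbours actually carry this colour.
        rgb[..., c] = _box3(channel) / np.maximum(_box3(mask.astype(float)), 1e-9)
    return rgb
```

Each photosite records only one of the three colours, so the other two channels at every pixel are guessed from neighbours – which is why fine detail can smear, and why a smarter reconstruction has room to improve things.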

Enhance Details is a way for you to recover some of the finer aspects of your images that get lost along the way when interpreting RAW files.

It works by using Adobe’s artificial intelligence technology, called Sensei, to fill in some of the missing gaps when pixels are rendered from RAW data.

The results can be quite impressive, depending on the type of image you are working with. It can also mitigate some of the issues that Fuji users have traditionally had when rendering RAW data from Fuji’s X-Trans sensors, which tends to produce wavy, worm-like artifacts and an overall loss of sharpness.

Bringing out the details

To use Enhance Details, select an image in your Lightroom Library and choose Photo -> Enhance Details.

This brings up a Preview window which lets you see what will happen after the Enhance Details procedure finishes.

It shows a zoomed-in view of the photo you are working with, and you can click and drag around to see what different parts of the image will look like after the operation is complete.

When you click on the image preview it reverts to its un-enhanced state, allowing you to compare the original and Enhanced versions with a single click. There are no parameters to configure, sliders to adjust, or options to customize, which I find refreshing. It’s a take-it-or-leave-it approach, at least in its current state, which makes it a little less of a hassle from an end-user perspective.

When you are satisfied that you want to undergo the Enhance operation, click Enhance and wait for Lightroom to finish the operation.

When it’s done you will still have the original RAW file, but in addition you will now have a new Adobe DNG file that contains the Enhanced image. This file is, as you might expect, the same image as the original but with several additional megabytes of new data where Adobe has attempted to improve things.

Original on the left, Enhanced on the right.

More details, larger files

One important point to note in this process relates to file size and storage space. When I converted several RAW files that were originally about 22 megabytes, the resulting Enhanced DNG files were about five times larger. Since each new file easily takes up well over 100 megabytes you might want to be somewhat selective in choosing the images you want to Enhance. Either that, or start looking into more storage solutions!
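As a rough illustration of that arithmetic – these are my own back-of-the-envelope figures based on the ~22 MB originals and roughly 5x larger DNGs mentioned above, not Adobe’s numbers – the storage cost of enhancing a batch adds up quickly:

```python
# Back-of-the-envelope storage estimate for Enhance Details: you keep the
# original RAW plus an Enhanced DNG roughly `dng_multiplier` times its size.
def enhanced_storage_mb(num_photos, raw_mb=22, dng_multiplier=5):
    """Total megabytes used by the originals plus their Enhanced DNGs."""
    return num_photos * raw_mb * (1 + dng_multiplier)

print(enhanced_storage_mb(100))  # enhancing 100 photos -> 13200 MB, about 13 GB
```

Thirteen gigabytes for a hundred frames is exactly why being selective about which images you Enhance matters.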

So what’s different about the enhanced RAW pictures other than massive file sizes? It varies depending on the scene you photographed, the camera and lens you used, and other parameters. If you shoot Nikon, Canon, or Sony, you might not see that much of an improvement since Adobe already does a pretty good job interpreting those RAW files. However, if you use Fuji you might notice significant improvements. The image below is the original RAW file, shot with an X100F, that I edited in Lightroom.

Original Fuji RAW image. It seems fine, until you zoom in for a closer look.

At first glance, and sized down for on-screen resolution, it looks fine. But upon closer inspection you can see some significant issues particularly among the leaves and ground.

Some of the issues are now apparent, and they can’t be corrected simply by adjusting sliders in Lightroom.

When I first saw this up close, I thought there was something wrong with my computer! Either that or I had a broken camera. The edges of the leaves, particularly where the sun is shining through in the top-right corner, have a wavy, worm-like appearance that’s rather strange and almost a little disconcerting. This is due to how Lightroom renders Fuji RAW files, and it can be corrected quite easily using Enhance Details.

Original on the left, Enhanced on the right.

Notice the way the edges of the leaves are much smoother in the right-hand image. The gold light coming through the dark leaves is also crisper.

This isn’t just an issue of adjusting the Sharpening slider in Lightroom. Instead, it’s an entirely new RAW file built from the ground-up using Adobe’s artificial intelligence algorithms.

The new image really is enhanced – as the name of the process implies. While it might not be entirely obvious when viewed on a computer screen, there is a clear difference when files are shown at full resolution or as large prints.

Enhanced image. You can’t see a noticeable difference on a small screen, but when viewed full-size the details are much improved.

Your results may vary

While the process works wonders for Fuji RAW files, it’s somewhat hit-or-miss for major names like Nikon and Canon. For instance, below is a RAW file from a Nikon D7100 as rendered by Lightroom.

Original image, shot from the Columbia Center skyscraper in downtown Seattle.

The Seattle skyline looks crisp and clear, with no noticeable issues in the finer details even when zoomed in to 100%. When processed through the Enhance Details feature the improvements are discernible, but you really have to look for them. It’s a marginal improvement, nowhere near the fixes to Fuji RAW files.

Original image on the left. Enhanced on the right. If you look at the roofline of the building in the middle, you can see a more accurate rendering in the Enhanced image…barely. The Enhanced version doesn’t have oddly-colored pixels where Lightroom didn’t quite get the original RAW file rendered properly.

Conclusion

In my opinion, Enhance Details isn’t worth the file size tradeoff on Nikon and Canon cameras; Lightroom already does a good job of rendering their files. However, I encourage you to try it out and see for yourself. The amount of improvement depends greatly on a variety of factors, including your camera, lens, and the subject of the photograph.

You might find that you prefer the Enhanced versions as a general rule, or you might only use this feature now and then. Either way, it’s nice to know it’s there.

Enhanced image, without a lot of truly noticeable improvements even enlarged to full size.

I like to think of Enhance Details as a useful tool to have in your back pocket for those times when you really need it, not something to use on an everyday basis.

The really exciting part is where this technology might end up in the future. Right now the process is done one photo at a time and takes several seconds even on newer computers. I can easily see a time when it’s applied as easily as a filter or adjustment slider, with dramatic improvements to every image.

Until that happens, it’s fun to see technologies like this take shape and mature. As photographers, we live in an incredible time, with tools like this that were unthinkable only a few years ago.

It’s amazing to ponder what the future might hold, and think about the tools we will have at our disposal to let our creative freedom loose.

Have you used this feature? Let us know your thoughts in the comments below.

 



Digital Photography School

 
Comments Off on How to Use the New Enhance Details Feature in Lightroom

Posted in Photography

 

Shutterstock AR feature lets customers preview stock images as wall artwork

18 Apr

Shutterstock has announced the launch of its first augmented reality feature. The new tool ‘View in Room’ has been added to the company’s iOS app; customers can use it to preview stock images as virtual artwork on their office or home walls before deciding whether to make the purchase.

The ‘View in Room’ feature can be used with any of the millions of images available on Shutterstock, according to the company, which powers the tool with its own computer vision technology and the iOS ARKit framework. The feature began as a ‘Hack to the Future’ employee hackathon project.

According to Shutterstock, a growing number of its customers are purchasing images to use as artwork or decor. The augmented reality feature enables them to preview exactly what the final product would look like on their wall, eliminating the need to visualize it using less precise methods.

The Shutterstock iOS app can be downloaded from the App Store.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Shutterstock AR feature lets customers preview stock images as wall artwork

Posted in Uncategorized