
Posts Tagged ‘scene’

Nikon Z5 added to studio scene comparison tool, gallery updated with Raw conversions

17 Sep

Updated sample gallery


The Nikon Z5 is a compelling entry-level full-frame mirrorless camera sporting a 24MP non-BSI CMOS sensor. We recently got our hands on Adobe Camera Raw support and have updated our sample gallery with a variety of Raw conversions alongside their out-of-camera JPEG counterparts, for your viewing pleasure.

In addition, we’ve run the Z5 through our studio test scene and added it to our comparison widget. Take a look below and see how it stacks up against its 24MP peers. And keep your eyes peeled for our full review, coming soon.


Studio scene

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Nikon Z5 added to studio scene comparison tool, gallery updated with Raw conversions

Posted in Uncategorized

 

Adobe releases Premiere Pro update, including scene detection and improved HDR

17 Sep

Adobe has announced an update for Adobe Premiere Pro and released a new beta for After Effects. In Premiere Pro, which is now at version 14.4, Adobe has added Scene Edit Detection, HDR for broadcasters, exporting with proxies and more. The beta update for After Effects includes a new 3D Gizmo and new camera navigation tools. Both the new Premiere Pro release and the beta update for After Effects include improved performance.

The new Scene Edit Detection feature, powered by Adobe Sensei artificial intelligence, allows you to add edit points to any footage as you import it into Premiere Pro. When Scene Edit Detection is used, Premiere Pro analyzes the imported video, detects the original edit points and adds cuts or markers at those points. You can learn more about the feature in the video below and by clicking here.
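Adobe hasn't published how Sensei finds edit points, but the core idea behind most cut detection is simple: flag a frame wherever it differs sharply from the one before. A minimal sketch of that idea in Python, using tiny synthetic grayscale "frames" (flat pixel lists) rather than real video:

```python
def mean_abs_diff(frame_a, frame_b):
    """Average per-pixel brightness change between two frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def detect_cuts(frames, threshold=60.0):
    """Return indices where a new shot appears to begin."""
    return [i for i in range(1, len(frames))
            if mean_abs_diff(frames[i - 1], frames[i]) > threshold]

# Two 4-pixel "shots": a dark shot (frames 0-2) cutting to a bright one (3-4).
frames = [[10, 12, 11, 9], [11, 12, 10, 10], [10, 11, 11, 10],
          [200, 210, 205, 198], [201, 209, 204, 199]]
print(detect_cuts(frames))  # [3] -- a cut is detected at frame index 3
```

A production detector would compare color histograms or learned features rather than raw pixels, and adapt the threshold to the footage, but the flag-the-big-jump structure is the same.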

Premiere Pro 14.4 includes a new Rec. 2100 color space, allowing broadcasters to work with more dynamic HDR content. Additional HDR features include fully color-managed and GPU-accelerated workflows for Apple ProRes and Sony XAVC-I formats, color space overrides and the ability to set scopes for Rec. 2100 HLG. Additional information about HDR for broadcasters can be found here.

The next new feature for Premiere Pro, exporting with proxies, lets users export using proxy media, such as when you want a quick export that doesn't require full-resolution files. There is also a new export feature, Quick Export, currently in a public beta. It offers easier access to popular and frequently used export settings from the menu bar in Premiere Pro. You can see a preview of this feature below.

Premiere Pro and After Effects (beta) have both received improvements to overall performance. In Premiere Pro, third-party audio plugins now scan 10-15 times faster than before, and ProRes multi-cam performance has doubled. In After Effects, channel effects now use GPU acceleration, improving their performance by roughly 1.5 to 3 times.

The new After Effects beta includes a 3D Gizmo, which allows for faster motion graphics work and quicker scene navigation. There are also new camera navigation tools, including support for multiple virtual cameras.

Articles: Digital Photography Review (dpreview.com)

 

 

Canon EOS R5 added to studio test scene

27 Jul

The Canon EOS R5 comes with a newly designed 45MP sensor that we were excited to get in front of our studio test. Here, we’ve put it up against some other high-megapixel heavyweights, but feel free to select the comparison camera of your choice and explore.

Articles: Digital Photography Review (dpreview.com)

 

 

Canon EOS R6 added to studio test scene

22 Jul

As we charge ahead with our full review of the Canon EOS R6, we’ve had a chance to see how it performs in front of our studio test scene. See for yourself how its 20MP sensor stacks up, and let us know what you think in the comments.


Our test scene is designed to simulate a variety of textures, colors and detail types you’ll encounter in the real world. It also has two illumination modes to see the effect of different lighting conditions.

Articles: Digital Photography Review (dpreview.com)

 

 

Computational photography part III: Computational lighting, 3D scene and augmented reality

09 Jun

Editor’s note: This is the third article in a three-part series by guest contributor Vasily Zubarev. The first two parts can be found here:

  • Part I: What is computational photography?
  • Part II: Computational sensors and optics

You can visit Vasily’s website where he also demystifies other complex subjects. If you find this article useful we encourage you to give him a small donation so that he can write about other interesting topics.

The article has been lightly edited for clarity and to reflect a handful of industry updates since it first appeared on the author’s own website.


Computational Lighting

Soon we’ll go so goddamn crazy that we’ll want to control the lighting after the photo was taken too. To change the cloudy weather to sunny, or to change the lights on a model’s face after shooting. Now it seems a bit wild, but let’s talk again in ten years.

We've already invented a dumb device to control the light: the flash. Flashes have come a long way, from the large lamp boxes that helped early cameras work around their technical limitations to the modern LED flashes that spoil our pictures, which is why we mainly use them as flashlights.

Programmable Flash

It's been a long time since all smartphones switched to dual-LED flashes: a combination of orange and blue LEDs whose brightness is adjusted to the color temperature of the shot. In the iPhone, for example, it's called True Tone and is controlled by a small ambient light sensor and a piece of code with a hacky formula.

  • Link: Demystifying iPhone’s Amber Flashlight

Then we started to think about the problem of all flashes — the overexposed faces and foreground. Everyone did it in their own way. iPhone got Slow Sync Flash, which made the camera increase the shutter speed in the dark. Google Pixel and other Android smartphones started using their depth sensors to combine images with and without flash, quickly made one by one. The foreground was taken from the photo with the flash while the background remained lit by ambient illumination.
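The flash/no-flash combination described above can be sketched as a simple per-pixel composite: take the near pixels (according to the depth map) from the flash exposure and the rest from the ambient one. This toy version assumes already-aligned single-channel images stored as flat lists; real pipelines align the frames and feather the mask, and all the numbers below are made up:

```python
def composite(flash_img, ambient_img, depth_map, near_threshold=1.5):
    """Pick the flash pixel where the scene is close, the ambient pixel elsewhere."""
    return [f if d < near_threshold else a
            for f, a, d in zip(flash_img, ambient_img, depth_map)]

flash   = [240, 235, 90, 85]    # well-lit face, dim background
ambient = [60, 55, 120, 118]    # dim face, nicely lit background
depth_m = [0.8, 0.9, 4.0, 5.2]  # metres from the camera

print(composite(flash, ambient, depth_m))  # [240, 235, 120, 118]
```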

The further use of a programmable multi-flash is vague. The only interesting application was found in computer vision, where it was used once in assembly schemes (like for Ikea book shelves) to detect the borders of objects more accurately. See the article below.

  • Link: Non-photorealistic Camera: Depth Edge Detection and Stylized Rendering using Multi-Flash Imaging

Lightstage

Light is fast. That has always made light coding an easy thing to do. We can change the lighting a hundred times per shot and still not get close to its speed. That's how Lightstage was created back in 2005.

  • Video link: Lightstage demo video

The essence of the method is to illuminate the object from every possible angle within each frame of a real 24 fps movie. To get this done, we use 150+ lamps and a high-speed camera that captures hundreds of shots with different lighting conditions per frame.

A similar approach is now used when shooting mixed CGI graphics in movies. It allows you to fully control the lighting of the object in post-production, placing it in scenes with absolutely random lighting. We just grab the shots illuminated from the required angle, tint them a little, done.
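This works because light transport is linear: an image of the object lit by any combination of lamps equals the weighted sum of the images lit by each lamp alone. A minimal sketch of that relighting step, with made-up three-pixel "exposures":

```python
def relight(basis_images, weights):
    """Weighted sum of one-lamp exposures = the image under a new light mix."""
    out = [0.0] * len(basis_images[0])
    for img, w in zip(basis_images, weights):
        for i, px in enumerate(img):
            out[i] += w * px
    return out

# Three "lamps", each lighting a 3-pixel object from a different side.
basis = [[100, 40, 10], [30, 100, 30], [10, 40, 100]]

# Post-production choice: half power on lamp 1, lamp 2 off, lamp 3 full.
print(relight(basis, [0.5, 0.0, 1.0]))  # [60.0, 60.0, 105.0]
```

Real Lightstage pipelines work with hundreds of basis images and HDR data, but the relighting itself is exactly this kind of weighted blend.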

Unfortunately, it’s hard to do it on mobile devices, but probably someone will like the idea and execute it. I’ve seen an app from guys who shot a 3D face model, illuminating it with the phone flashlight from different sides.

Lidar and Time-of-Flight Camera

Lidar is a device that measures the distance to an object. Thanks to the recent self-driving car hype, we can now find a cheap lidar in any dumpster. You've probably seen those rotating thingies on the roofs of some vehicles? Those are lidars.

We still can’t fit a laser lidar into a smartphone, but we can go with its younger brother — time-of-flight camera. The idea is ridiculously simple — a special separate camera with an LED-flash above it. The camera measures how quickly the light reaches the objects and creates a depth map of the scene.
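The arithmetic behind a ToF camera is just the speed of light: halve the round-trip time and multiply. (In practice, ToF sensors usually measure the phase shift of modulated light rather than timing a pulse directly, since centimeter accuracy corresponds to timing on the order of tens of picoseconds.)

```python
C = 299_792_458  # speed of light, m/s

def tof_distance(round_trip_seconds):
    """The light travels out and back, so halve the round trip."""
    return C * round_trip_seconds / 2

# A pulse returning after ~6.67 nanoseconds came from a surface about 1 m away.
print(round(tof_distance(6.67e-9), 3))  # ~1.0
```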

The accuracy of modern ToF cameras is about a centimeter. The latest Samsung and Huawei top models use them to create a bokeh map and for better autofocus in the dark. The latter, by the way, is quite good. I wish every device had one.

Knowing the exact depth of the scene will be useful in the coming era of augmented reality. It will be much more accurate and effortless to scan surfaces with lidar for the initial 3D mapping than to analyze camera images.

Projector Illumination

To finally get serious about computational lighting, we have to switch from regular LED flashes to projectors — devices that can project a 2D picture on a surface. Even a simple monochrome grid will be a good start for smartphones.

The first benefit of the projector is that it can illuminate only the part of the image that needs to be illuminated. No more burnt faces in the foreground. Objects can be recognized and ignored, just like laser headlights of some modern cars don’t blind the oncoming drivers but illuminate pedestrians. Even with the minimum resolution of the projector, such as 100×100 dots, the possibilities are exciting.

Today, you can’t surprise a kid with a car with a controllable light.

The second and more realistic use of the projector is to project an invisible grid on a scene to build a depth map. With a grid like this, you can safely throw away all your neural networks and lidars. All the distances to the objects in the image now can be calculated with the simplest computer vision algorithms. It was done in Microsoft Kinect times (rest in peace), and it was great.
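A Kinect-style system recovers depth by triangulation: the projector and camera sit a known baseline apart, so the apparent displacement (disparity) of each projected dot encodes its distance, with nearer surfaces shifting dots further. A sketch of that formula, with made-up baseline and focal-length numbers:

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Classic triangulation: depth = baseline * focal length / disparity."""
    return baseline_m * focal_px / disparity_px

# Projector and camera 7.5 cm apart, 600 px focal length (hypothetical values).
for d in (90, 45, 15):  # dot displacement in pixels
    print(round(depth_from_disparity(0.075, 600, d), 2))  # 0.5, 1.0, 3.0 metres
```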

Of course, the Dot Projector used for Face ID on the iPhone X and later is worth remembering here. That's our first small step towards projector technology, but quite a noticeable one.

Dot Projector in iPhone X.

Vasily Zubarev is a Berlin-based Python developer and a hobbyist photographer and blogger. To see more of his work, visit his website or follow him on Instagram and Twitter.

Articles: Digital Photography Review (dpreview.com)

 

 

Olympus OM-D E-M1 III added to studio test scene comparison

23 May

We're continuing to test the Olympus OM-D E-M1 III, the latest iteration of the company's sports-focused camera. It inherits a number of features from its big E-M1X sibling, including an 80MP high-res mode. There's more work to be done on our full review of the E-M1 III, but in the meantime you can now compare its studio test scene results with those of its peers.

Articles: Digital Photography Review (dpreview.com)

 

 

Compare leading 1″ sensor compacts with our studio scene comparison tool

27 Sep

The current crop of 1″ sensor compacts offers varied lens ranges and a suite of attractive features fit for an unobtrusive, carry-everywhere camera. We've just added the Canon G7 X Mark III to our studio test scene comparison, making it possible to compare the likes of Sony's latest RX100-series cameras against Canon's latest entries in the advanced compact market. See for yourself how they stack up against each other.


Note: As of September 26th 2019 the skintone targets in our test scene have been removed and replaced temporarily by fresh prints drawn from our archive. Of the four cameras in this widget, only the Canon G7 X Mark III was shot after this change. As such, these targets should provide an accurate way of assessing the G7 X III’s color response, but should not be used to compare it against previously-tested cameras. This is an interim measure, and we’re working on a permanent solution.

Articles: Digital Photography Review (dpreview.com)

 

 

High resolution Sony a7R IV pixel shift images added to studio scene, sample gallery updated

24 Sep


One of the eye-catching features of the Sony a7R IV is its 16-image pixel-shift mode. This shoots four images centered around one position, then shifts the sensor half a pixel sideways and takes another four, then another half pixel, until it's taken 16 images. These 16 images can then be combined into a single 240 megapixel image.
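The resolution gain from the half-pixel shifts can be illustrated by interleaving four shifted frames into a grid with twice the linear resolution. Sony's real merge (and tools like PixelShift2DNG) also has to handle the Bayer pattern, all 16 frames and any subject motion; this toy version ignores all of that and just shows where the extra pixels come from:

```python
def interleave_half_pixel(f00, f10, f01, f11):
    """Interleave four half-pixel-shifted frames into a 2x-resolution grid.

    f00: unshifted frame; f10: shifted half a pixel horizontally;
    f01: half a pixel vertically; f11: both. Frames are 2D lists.
    """
    h, w = len(f00), len(f00[0])
    out = [[0] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            out[2 * y][2 * x]         = f00[y][x]
            out[2 * y][2 * x + 1]     = f10[y][x]
            out[2 * y + 1][2 * x]     = f01[y][x]
            out[2 * y + 1][2 * x + 1] = f11[y][x]
    return out

# With four identical 2x2 frames, each input pixel becomes a 2x2 block.
a = [[1, 2], [3, 4]]
print(interleave_half_pixel(a, a, a, a))
```

Sixteen such frames from the a7R IV's 60.2MP sensor is how the roughly 240MP output comes about.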

We’ve added pixel-shift images to our studio scene at several different ISO settings, along with a couple of real-world examples to our sample gallery showing both the 4-image demosaicing mode as well as the high-res 16-image mode. Just for good measure, we’ve added more standard images to the gallery as well.

Studio Scene


Image Processing

We've processed the images in the studio scene using PixelShift2DNG, because it allows us to use our standard Adobe Camera Raw processing to maximize comparability with other cameras in the scene.

It should be noted that Imaging Edge has a setting called ‘Px Shift Multi Shoot. Correction,’ adjustable in eleven steps between 0 and 1, that smooths some of the stair-stepping and chequerboard errors that can appear in the image. The shots in our test scene effectively have this set to 0.

Before making this decision, we compared this output with the results from Sony's own Imaging Edge software. We've created a rollover that compares the PixelShift2DNG result to the Imaging Edge output with sharpening, noise reduction and Px Shift Correction minimized, and to the default Imaging Edge result.

Rollover comparison: DNG → ACR | Imaging Edge (modified) | Imaging Edge (defaults)

We've uploaded the Imaging Edge-combined 'ARQ' files to the studio scene, but you can download the combined DNGs here:

16-image files merged using PixelShift2DNG
  • ISO 100
  • ISO 6400
  • ISO 51200
  • ISO 102400

Articles: Digital Photography Review (dpreview.com)

 

 

Sony a7R IV added to studio test scene comparison

11 Sep

The 61MP backside-illuminated CMOS sensor on board the Sony a7R IV is the first new chip in an R camera since 2015. Although it’s new to the series, Sony began preparing for its inclusion in a future camera when it released the a7R III, so the a7R IV is able to use the same front-end LSI and Bionz X processor. Now that it’s out in the wild we’ve been able to begin analyzing its performance – starting with our studio test scene. Check out how the a7R IV’s 61 Megapixels perform against its competitors, below.

Articles: Digital Photography Review (dpreview.com)

 

 

3 Easy Tips for Photographing Details in a Scene

21 Aug

The post 3 Easy Tips for Photographing Details in a Scene appeared first on Digital Photography School. It was authored by Lily Sawyer.

Regardless of the type of event I photograph, I ALWAYS photograph details. Why? Because details help tell the story of an event that nothing else can. Details aid the recollection of memories: words and conversations, scents and aromas, spoken and unspoken emotions. Details also help cement these memories in our brains.


Newborns grow so fast and often parents are exhausted beyond belief. Photographing details helps them remember the sweet moments, especially of those first days. The first tuft of hair, the tiniest fingers, milk spots, windy smiles, hospital tags, first baby hats, and mittens. Captured in details, these moments can be cherished more often and for much longer.

Wedding days go by in a total haze for many a couple. They put weeks and hours into planning the decorations and color scheme down to the minutest detail, yet they don't have a moment to fully appreciate them on the wedding day. I carve out time to capture hundreds of detail photos during a wedding. They are just as important as all the other photojournalistic and documentary captures of people and events unfolding on the day.


You don’t need to have an expensive kit to take some good, solid photos. All the photos in this article were intentional snapshots taken during a day trip with me as a tourist. No flashes, just a camera, a 60mm fixed lens and an observant eye looking for details.

Here are 3 tips I find helpful when photographing details:

1. Storytelling

Photographs are no doubt one of the most visually exciting ways to tell a story, so tell it well using photographs of details.

Let’s consider the five elements of a story to help us communicate it effectively: setting, characters, plot, conflict, resolution.

Below is a series of photographs I shot with intent to tell a simple short story (real captures and not staged) with the above elements in mind.

Setting


Conflict


Characters and Plot


Resolution


This is just a simplistic way of showing you how a story can be captured beautifully in details. The setting is in Copenhagen on a soaking wet and cold day. It had been heavily raining for a good while. Droplets have collected on the bridge adorned with love locks. It’s summer (as seen on the date in the newspaper), and the map is sodden. Somewhere nice and cozy to dry off and relax would be welcome. A girl wrapped in a thick blanket browses the menu. Hot cocoas put a big smile on the children’s faces. It’s a happy summer once again.

2. Composition

a. Rule of Thirds

The rule of thirds is probably the most well-known and popular composition technique. It is also my favorite and the one that comes easiest to me. The frame is divided into thirds horizontally and vertically. The points where the lines intersect are the strongest positions, where your main subject or point of interest should be placed.
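For the curious, where those grid intersections fall is simple arithmetic. For a 6000×4000 pixel frame, the four "power points" sit at the crossings of the third lines:

```python
def power_points(width, height):
    """The four intersections of the thirds grid, as (x, y) pixel positions."""
    xs = (width / 3, 2 * width / 3)
    ys = (height / 3, 2 * height / 3)
    return [(round(x), round(y)) for x in xs for y in ys]

print(power_points(6000, 4000))
# [(2000, 1333), (2000, 2667), (4000, 1333), (4000, 2667)]
```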


This is illustrated above left where the yellow building occupies two-thirds of the space and one third on the left is the overcast sky. In the above right image, the two buildings intersect on a third of the frame. The point of the small tower is positioned about a third from the bottom of the image and a third from the left.

Plain snapshots like these look stronger with this rule of thirds composition.


Can you see how these two images above use the rule of thirds composition?

b. Symmetry/centred

Another favorite of mine is symmetry, where the main point of interest in the image is placed at the center of the frame. A central composition accentuates the subject's importance and emphasizes its dominance.


The image above shows the fish at the center and two symmetrical areas on either side of it. More symmetry – the tables and chairs and the windows on either side – strengthens this image further. You can feel the solidity of the structure because of the centered composition.

c. Fill the Frame

As with the above, filling the frame strengthens composition and makes the viewer focus intensely on the subject without unnecessary distractions. The viewer can explore details that otherwise would be lost had the image not filled the frame and cropped distractions. Filling the frame is an effective way of highlighting a point of interest and telling its story from much closer.


d. Depth and foreground interest

Photographs are two-dimensional in nature, and many images show only the subject and background. Including a foreground adds a third dimension to the space. It increases the sense of depth and makes the viewer feel as if they are on the outside looking in. I use this technique for anything and everything: portraits, objects, or action.

A foreground interest also invites the viewer to explore other elements in the image and look deeper into other areas of the frame, not just the first thing that meets their eye. Because you are inviting the viewer's eye to move around the image, you make your image more dynamic.


3. Angles

Change your point of view

We see virtually everything at eye level. We often walk around with our faces looking forward, not upwards, downwards or sideways. So whenever we change our point of view to a bird's eye view, a worm's eye view or a different perspective, we find ourselves seeing interesting new things in common, familiar objects. It's something I always try to remind myself of when shooting: look up, look down, look right, and look left.

In the image below, you can see I’ve also combined this bird’s eye view with the symmetrical composition and the rule of thirds.


Perspective and leading lines

Something as mundane as a bird on a bench can be captured with a touch more interest by moving a few steps sideways and photographing it from a perspective viewpoint. Doing this makes use of the line of the bench to lead the viewer's eye to the bird, the focal point of the image.

I didn’t want to get too close lest I scared it away. It caught my eye because I thought it was a crow. Then I noticed it was wearing some white feathers like a cardigan on its black body. Look out for leading lines, whether they be straight like benches, rails and fences, or curvy like windy paths, a stream of water or patterned tiles on pavements.


Conclusion

I find photographing details exciting, mentally challenging, and thoroughly enjoyable. It keeps me on my toes, especially when I try to tell a story. I feel a sense of achievement when I’m happy with the results. I hope you try it sometime if you haven’t yet.

If you have any tips for photographing details, do share them in the comments below.

 




Digital Photography School

 