RSS
 

Posts Tagged ‘Than’

CineStill’s new developer kits make it easier than ever to creatively control slide film development

20 May

CineStill has introduced a new 3-bath processing system for slide film users that allows photographers a choice of three color and contrast profiles for the same type of film. Users can opt for a straight development to bring out the default characteristics of the film, or choose a tungsten balanced look or one with a warm tone and lower contrast. The Cs6 Creative Slide 3-Bath processing system allows photographers to get three quite different looks from the same emulsion by using a different developing solution.

The developers are part of a new 3-bath system that comprises a first developer that determines the color and contrast profile of the end result, followed by a combined second developer and reversal bath, and then a third bath that contains the bleach and fixer. CineStill claims the chemicals are easy to use and that temperature control is not as critical as in normal E6 processes, so successful home processing is much more achievable.

The three choices for the first developer are D6 DaylightChrome, D9 DynamicChrome and T6 TungstenChrome. The DaylightChrome developer is said to produce a neutral result with slightly enhanced color saturation and a usable dynamic range of 6 stops.

DynamicChrome produces a warm tone with enhanced contrast and color saturation, but at the same time preserves highlight and shadow detail to offer a usable DR of 9 stops and extended exposure latitude. The TungstenChrome developer shifts the film’s color to correct for the use of warm tungsten lights, saving the bother and light loss of using a color-correction filter over the lens.

Photo by Sandy Phimester on Kodak E100 processed with CineStill Cs6 DynamicChrome

The one-liter kits contain enough chemistry to process 16 rolls or 100ft of film and are priced from $39 including all three baths, while the individual developers are available from $12.99. The chemicals are available now from the CineStill website, and from retailers in the USA and Europe from this summer.

Press release

Introducing: Cs6 “Creative Slide” 3-Bath process for color-timing E-6 reversal film at home

Chrome unlocked!

Want to shoot slide film? Want it to be quick and easy to process? Want to still have creative control over how your images look? Introducing the CineStill Cs6 “Creative Slide” 3-Bath Process for simple creative control of your E-6 film.

The reversal process is the purest of analog processes and it’s now more creative and accessible than ever! The colors of slide film are unrivaled and now you can color-time and control dynamic range with alternative 1st developers. For the first time ever, you can change the color profile of your slides. With limited slide film options available today, CineStill is tripling the choices available for slide film, and demystifying slide processing with only 3 baths to appreciate a beautiful photograph. Your slides should be superior to color-corrected negative scans, without sacrificing creative control.

For the past several years CineStill has been developing ways to bridge the gap between mail-order photo labs and instant photography. Whether it be a Monobath for B&W film, a simplified 2-Bath process for color negatives, a Temperature Control System for mixing and heating chemicals, or various partnerships to make daylight processing accessible… There’s no longer a need for a darkroom, professional lab or high-tech equipment to create analog photographs. You can now create beautiful color transparencies at home through one simple process.

With the Cs6 “Creative Slide” 3-Bath Process comes 5 new products

1. D6 “DaylightChrome” Neutral-tone 5500K 1st Developer
Renders approximately 6+ stops of usable dynamic-range* with bright whites and moderately enhanced color saturation, just like conventional E-6 processing. Daylight-balanced for conventional slides in daylight or with electronic flash. Single-use 1+1 dilution develops 8-16 rolls or 100ft of slide film.
Kodak E100 DaylightChrome

2. D9 “DynamicChrome” Warm-tone 1st Developer
Renders approximately 9+ stops of usable dynamic-range*, while maintaining vibrant color-contrast and rich warm tones with preserved highlight and shadow detail (optimized for scanning) for a more cinematic look. Extended exposure latitude increases the usable dynamic-range* of slide film from 6 to 9+ stops! Conventional E-6 processing renders approximately 6 stops of usable dynamic-range*. Perfect for high contrast lighting or backlit subjects in daylight, shade or with electronic flash. Single-use 1+1 dilution develops 8-16 rolls or 100ft of slide film.

3. T6 “TungstenChrome” Cool-tone 3200K 1st Developer
Renders approximately 6+ stops of usable dynamic-range* with cleaner whites, and moderately enhanced color saturation. Shoot in artificial light without sacrificing 2 stops of exposure to color filtering! Kodak’s published technical data sheet recommends exposing E100 at EI 25 with an 80A Filter in Tungsten (3200 K) light. Now you can expose at box speed in low-light or even push to EI 200 or 400, and color-time your slides in processing. With conventional E-6 processing this would require color filtration and a 2-4 stop exposure compensation. Single-use 1+1 dilution develops 8-16 rolls or 100ft of Ektachrome®.

4. Cr6 “Color&Reversal” 2-in-1 Slide Solution
Combines the reversal step with color development. 6 min minimum process time for completion with a flexible temperature range of 80-104°F (27-40°C)**. Reusable solution reverses 16+ rolls of developed slide film.

5. Bf6 “Bleaches&Fixer” 3-in-1 Slide Solution
Combines the bleach and conditioner steps with the fixing step. 6 min minimum process time for completion with a flexible temperature range of 75-104°F (23-40°C)**. Reusable solution clears 24+ rolls of reversal film.

* “Usable dynamic-range” is the amount of full stops of exposure value that renders acceptable detail and color. “Total dynamic-range”, however, is the maximum range containing tonal separation rendering any detail, and is often twice the usable dynamic-range. The usable dynamic-range of conventional slide film is between 6-8 stops (total 14-16 stops). Color negative is between 9-13 stops (total 16-21 stops). Digital sensors are mostly between 7-10 stops (total 12-15 stops).

** Maintaining temperature is not essential beyond pouring in a 1st developer. When a temperature control bath is not available, simply preheat the 1st Developer +2ºF warmer, and the other baths will automatically process-to-completion as they cool down. Only the 1st developer is time and temperature critical because it controls contrast and color.
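For readers who want the dynamic-range footnote made concrete: each stop doubles the brightness range, so n stops correspond to a scene contrast of 2^n:1. A quick illustrative sketch:

```python
# Convert dynamic range in stops to a scene contrast ratio (2**stops : 1).
def stops_to_contrast(stops: float) -> float:
    return 2.0 ** stops

# Figures quoted in the footnotes above:
print(stops_to_contrast(6))   # 64.0    -- conventional slide film, usable
print(stops_to_contrast(9))   # 512.0   -- DynamicChrome, usable
print(stops_to_contrast(14))  # 16384.0 -- slide film, total
```

So DynamicChrome's three extra usable stops represent an eightfold increase in the scene contrast that still renders acceptable detail.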

Cs6 “Creative Slide” 3-Bath Kits for Reversal and E-6 Film

CineStill Cs6 3-Bath Kits will be available from $39. The 1000ml/Quart Kits can process 16+ rolls or 100ft of slide film, and the 3-2-1 Chemical Reuse Kit processes 32+ rolls of film.
Included In Cs6 3-Bath Kits:

  • D9 “DynamicChrome”, D6 “DaylightChrome”, or T6 “TungstenChrome” 1st Developer
  • Cr6 “Color&Reversal” 2-in-1 Slide Solution
  • Bf6 “Bleaches&Fixer” 3-in-1 Slide Solution

The CineStill Cs6 3-Bath Kits and separate components are available for purchase now at CineStillFilm.com, and throughout the U.S. and E.U. markets later this summer.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on CineStill’s new developer kits make it easier than ever to creatively control slide film development

Posted in Uncategorized

 

Video: How to capture abstract macro photographs using little more than paper and lights

07 Apr

With most of the world a few weeks into self-isolation, you’ve probably photographed nearly everything you can in your household, leaving little left to document. Now it’s time to get really creative. To help make the most of a rough situation, Ben from Adaptalux has shared a 15-minute video showing how you can capture abstract macro photographs using little more than a few lights, a few sheets of paper and a little bit of creative thinking.

Throughout the 15-minute video, Ben walks through a number of different setups and variations you can experiment with to capture these abstract macro photographs. The basic tools you’ll need on hand include a camera, a close-up capable lens, a tripod (not strictly necessary, but it will simplify the process), at least one light source, several sheets of white paper, paper clips (or bobby pins) and a flat surface that’s either transparent or translucent.

While Adaptalux lights are unsurprisingly mentioned in the video, any lights should do, and if you have a few gels sitting around, you can get extra creative by mixing up colors.

If you prefer a non-visual explainer, Adaptalux has also published an accompanying blog post on its website that details the process. You can find more tutorials on Adaptalux’s YouTube channel, where it offers up a complete playlist dedicated to ‘Macro Photography Tutorials.’

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Video: How to capture abstract macro photographs using little more than paper and lights

Posted in Uncategorized

 

Intel announces first mobile CPUs capable of more than 5GHz clock speeds

03 Apr

For photographers, one of the most important components of their computer is the processor (CPU). When considering how software such as Adobe Lightroom performs, maximum single and multi-core CPU performance is critical. This makes Intel’s announcement today that it is releasing the world’s fastest mobile processor particularly exciting for creatives on the go.

The 10th generation Intel Core H-series lineup introduces half a dozen mobile processors, including four that can surpass 5 GHz single-core frequency in Turbo performance mode. These chips are built using Intel’s 14nm Comet Lake architecture, rather than the 10nm process that Intel teased at CES earlier this year. The top of the line processor, the Intel Core i9-10980HK, has a base clock speed of 2.4 GHz and can reach 5.3 GHz at its maximum. This processor, along with the 5.1 GHz i7-10875H, delivers 16 threads across 8 cores and includes 16MB of Intel Smart Cache.

Another pair of new i7 processors, the 10850H and 10750H, can reach 5.1 and 5.0 GHz respectively. These processors are both 6-core CPUs with a dozen threads. Rounding out the new lineup are the Intel Core i5-10400H and i5-10300H; these four-core CPUs have eight threads, with maximum speeds of 4.6 and 4.5 GHz respectively.
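For quick reference, the figures quoted above can be gathered into a small lookup table. Note that these values are transcribed from the text of this article, not pulled from Intel's official spec database:

```python
# 10th-gen Intel Core H-series figures as quoted in the article above.
# Format: name -> (cores, threads, max turbo GHz). Transcribed values, not ARK data.
lineup = {
    "i9-10980HK": (8, 16, 5.3),
    "i7-10875H":  (8, 16, 5.1),
    "i7-10850H":  (6, 12, 5.1),
    "i7-10750H":  (6, 12, 5.0),
    "i5-10400H":  (4, 8, 4.6),
    "i5-10300H":  (4, 8, 4.5),
}

# Pick out the four parts that reach or exceed 5 GHz turbo.
over_5ghz = [name for name, (_, _, ghz) in lineup.items() if ghz >= 5.0]
print(over_5ghz)
```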

You can view a comparison of the six Intel 10th generation mobile processors in the chart below:

Image credit: Intel Corporation.

What do all these numbers mean for creatives? On the photography side of things, Photoshop and other photography applications heavily utilize your computer’s CPU relative to the GPU. Software such as Photoshop is getting better at using a computer’s GPU to accelerate certain tasks, but the CPU is particularly important. Further, the maximum frequency of CPU chips is more important than the number of cores for most photo editing tasks. All else equal, a faster CPU results in better performance when importing, processing and editing image files.

Thus, the new 10th generation Intel i9 processors are very powerful options for CPU-intensive applications such as Adobe Photoshop and Lightroom. Lightroom, for example, is optimized to utilize multiple cores for handling tasks, so Intel’s eight-core chips are exciting. The quicker your computer’s CPU can work through tasks, the less time you must spend waiting.

For video editors, Intel has published specific performance gain numbers. When compared to a similar Intel chip from three years ago, the top-of-the-line i9-10980HK can render and export 4K resolution video up to twice as fast. The i7-10750H fares well too, exporting 4K video up to 70 percent faster compared to its predecessor from three years ago. It will be interesting to see how the new chips perform in the real world when rendering 4K and even 8K video.

This image shows the wafer of Intel’s 10th generation H-series processors. Image credit: Intel Corporation

Of the Intel Core i9-10980HK, Intel states that it features ‘unparalleled performance across the board with up to 5.3 GHz Turbo, eight cores, 16 threads and 16MB of Intel Smart Cache. The unlocked 10th Gen Intel Core i9-10980HK processor powers the ultimate laptops for gamers and creators, allowing further customization, optimization and tuning of the CPU’s performance.’

Additional features of the Intel 10th generation chips include Intel’s proprietary Speed Optimizer one-click overclocking feature, Thermal Velocity Boost and Adaptix Dynamic Tuning. For a full breakdown of all the key features in the new Intel chips, you can download a PDF briefing by clicking here.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Intel announces first mobile CPUs capable of more than 5GHz clock speeds

Posted in Uncategorized

 

6 Reasons Why Dedicated Cameras are Better than Smartphones for Photography

09 Mar

The post 6 Reasons Why Dedicated Cameras are Better than Smartphones for Photography appeared first on Digital Photography School. It was authored by Simon Ringsmuth.


Modern smartphone cameras are amazing! They have facilitated an explosion in photography that shows no signs of stopping. Mobile phone cameras, apps, editing, and sharing have given people access to creative outlets that were unthinkable a mere 15 years ago. If the best camera is the one you have with you, then 9 times out of 10, the best camera is right in your pocket! Despite the advances in smartphone cameras, there are still a few things smartphone cameras lack. So, in this article, we’ll explore why dedicated cameras are better than smartphones for photography. In other words, there are some photos you just can’t get with a smartphone.


Nikon D7100, 85mm, f/2.8, 1/1000 second, ISO 100

1. Software vs. Physics

The first of the six reasons why dedicated cameras are better than smartphones for photography is software vs. physics.

I don’t want to sound like an old man yelling at clouds, decrying all modern technologies that might make my life better. Smartphone cameras and computational photography are incredible! They can use software and artificial intelligence to capture incredible images of night skies and portrait-style images with blurry backgrounds.

But digital trickery and software manipulations are no match for a mastery of light and physics, and this is where dedicated cameras still have an edge.

Most smartphones have a main lens that approximates roughly a 28mm field of view on a full-frame camera. Some add a second, wider lens, usually around 15mm equivalent. It’s also not uncommon for higher-end phones to include a telephoto lens as well, roughly equivalent to a 50mm lens.

Nearly all smartphone cameras are stuck at a single aperture value as well, which gives you limited control over a key element of exposure. While there is much that can be done in software to overcome the inherent limitations of these lenses and focal lengths, sometimes you just need a separate camera to get the shot.
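A quick way to see where figures like "28mm equivalent" come from: the full-frame equivalent focal length is simply the actual focal length multiplied by the sensor's crop factor. The 4.25mm and 6.6x numbers below are illustrative of a typical phone main camera, not any specific model:

```python
# Full-frame equivalent focal length = actual focal length * crop factor.
# 4.25 mm and a 6.6x crop factor are illustrative phone-camera values,
# not figures for any specific handset.
def equivalent_focal(actual_mm: float, crop_factor: float) -> float:
    return actual_mm * crop_factor

print(round(equivalent_focal(4.25, 6.6)))  # 28 -- a ~28mm-equivalent field of view
```

The tiny physical focal length and sensor are exactly why phone makers must lean on software for effects that larger cameras achieve optically.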

2. Foreground Blur

Nikon D7100, 50mm, f/1.8, 1/2000 second, ISO 100

Any smartphone can take pictures of flowers. This particular image shows a backlit flower whose petals are glowing with sunlight streaking in from above and behind, and a mobile phone could capture that just fine. However, there is one key element of this image that’s impossible on a smartphone – the foreground blur.

Smartphones have come a long way with so-called portrait-style photography. Portrait mode involves software combined with depth data that allows a smartphone to blur the background.

But not the foreground.

This is one of the things smartphone cameras lack. Try it for yourself!

Take a portrait-style photo with your smartphone but include objects in the foreground that you would like to blur. The background will get blurry, but the foreground will remain in focus.

Blurring both the foreground and background is a time-honored technique to add a sense of depth and perspective to your photos. Perhaps one day the software and AI techniques used on mobile phones will be able to replicate this. But, for now, if you’re using a smartphone, you’re stuck with just background blur.
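The optics behind foreground and background blur are plain depth-of-field geometry: only a thin slice of the scene around the focus distance renders sharp, and anything in front of that slice blurs just as the background does. A rough sketch using the standard thin-lens approximations (the 1.5m subject distance and 0.02mm circle of confusion are assumptions for illustration, not figures from the example shot):

```python
# Near and far limits of acceptable sharpness (thin-lens approximation).
# f: focal length (mm), N: f-number, c: circle of confusion (mm),
# s: subject distance (mm)
def dof_limits(f, N, c, s):
    H = f * f / (N * c) + f                     # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    far = s * (H - f) / (H - s) if s < H else float("inf")
    return near, far

# 50mm f/1.8 on APS-C (c ~= 0.02 mm), subject assumed at 1.5 m:
near, far = dof_limits(50, 1.8, 0.02, 1500)
print(round(near), round(far))  # only a few cm around the subject are sharp;
                                # foreground objects closer than `near` blur
```

A dedicated camera blurs the foreground because physics puts it outside this sharp slice; portrait-mode software only synthesizes blur behind the detected subject.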

3. Telephoto Zoom

Nikon D500, 200mm, f/8, 1/400 second, ISO 900

While smartphone cameras have had pinch-to-zoom capabilities for over a decade, it amounts to little more than just cropping your pictures. Modern smartphones do a better job of interpolating data between pixels and adjusting exposure values on the fly, but at the end of the day, you’re still just cropping.

In the process, you lose a lot of detail. And even then, you just can’t zoom in very far. It’s definitely one of the things smartphone cameras lack, despite some recent advances.
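In code terms, digital zoom is nothing more than a center crop followed by an upscale, which is why no detail is gained. A minimal sketch:

```python
# "Pinch-to-zoom" is essentially a center crop followed by upscaling:
# no new detail is created, existing pixels are just duplicated/interpolated.
def digital_zoom(img, factor):
    h, w = len(img), len(img[0])
    ch, cw = h // factor, w // factor              # crop dimensions
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = [row[x0:x0 + cw] for row in img[y0:y0 + ch]]
    # nearest-neighbour upscale back to the original size
    return [[row[x // factor] for x in range(w)]
            for row in (crop[y // factor] for y in range(h))]

img = [[r * 8 + c for c in range(8)] for r in range(8)]  # toy 8x8 "sensor"
zoomed = digital_zoom(img, 2)
print(len(zoomed), len(zoomed[0]))  # 8 8 -- same output size, but built from
                                    # only a quarter of the original pixels
```

A real optical zoom, by contrast, puts more light from the subject onto every one of the sensor's pixels.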

One classic example of this is a picture of the moon.

Smartphone lenses, and the laws of physics, make pictures like this impossible. You have probably noticed if you have ever tried to do a pinch-and-zoom photo of our nearest celestial neighbor.

You’ll need a dedicated camera if you want to get crisp, detailed photos of faraway objects. And this is just another reason dedicated cameras are better than smartphones.

Nikon D500, 200mm, f/2.8, 1/3000 second, ISO 100

Smartphones aren’t great for most long-distance shooting scenarios, such as this picture of a horse in the pasture.

While pinch-and-zoom can make it seem like you’re getting closer, you won’t get a tack-sharp, high-resolution image suitable for printing and framing.

Like everything tech-related, this is getting better and will improve with time. Some phones now are using stacked periscope-style lenses combined with software and AI processing to mimic 10x or even 100x zoom lenses. Right now, these make interesting tech demos, but the results don’t have the same level of clarity, color, and fidelity as you would get from a DSLR or mirrorless camera with a zoom lens attached.

4. Background compression

Nikon D7100, 200mm, f/2.8, 1/1500 second, ISO 100

Another reason dedicated cameras are better than smartphones is background compression.

Something interesting happens when you shoot photos with a telephoto zoom: the background appears to move closer to your subject.

It’s called background compression and is a time-honored compositional technique to make your subjects stand out and take your images up to another level. It’s also impossible to do on a smartphone.

In the picture above, the building is very far away from the woman walking in the foreground. Shooting with a telephoto lens compresses the background and makes it seem much closer.

Nikon D750, 200mm, f/4, 1/400 second, ISO 3200

In this family photo, you can see the trees and leaves in the background, which are very far away. However, they appear closer as a result of background compression.

While some smartphone cameras do have some limited zoom capability, their smaller lenses and image sensors simply do not allow for these types of pictures.
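The geometry is simple enough to sketch: apparent size on the sensor scales as focal length divided by distance. If you step back and zoom in so the subject stays the same size in the frame, a fixed background object renders much larger. The distances below are illustrative, not measurements from the example photos:

```python
# Apparent size on the sensor scales as focal_length / distance.
def apparent_scale(focal_mm, distance_m):
    return focal_mm / distance_m

# Frame a subject identically at 28mm from 2 m and at 200mm from 200/14 m
# (distances chosen so focal/subject_distance stays constant at 14).
# A background object 20 m behind the subject:
wide_bg = apparent_scale(28, 2.0 + 20.0)
tele_bg = apparent_scale(200, 200 / 14.0 + 20.0)

print(round(tele_bg / wide_bg, 1))  # 4.6 -- the telephoto renders the same
                                    # background ~4.6x larger in the frame
```

That magnified, "pulled-in" background is exactly the compression effect described above, and it depends on shooting distance and focal length that phone optics can't reach.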

5. Fast action

Before I get too far in this section, I want to point out that smartphones are good at capturing some types of fast action. These conditions are fairly limited, though.

You have to be close to your subject, which isn’t possible in a lot of action situations. It also helps if you can lock focus on a specific area where you know the subject will be, or else have a smartphone with amazing autofocus capabilities. And if you can meet those challenges, then your phone could produce some good results.

For a lot of fast action, though, you need a DSLR or mirrorless camera. It helps to have a good lens attached too.

This will let you stand on the sidelines while getting up close and personal with your subjects. It helps to shoot with a wide aperture too, which will let you get a fast shutter speed and freeze the action.

Nikon D750, 185mm, f/4, 1/500 second, ISO 100

These types of action shots are impossible on smartphones because pinch-to-zoom just can’t get the job done. You’ll get pictures that are pixellated, blurry, or out of focus because smartphones are not able to match the speed and capability of a dedicated camera.

In the picture below, I was sitting in the stern of a boat zoomed in to 200mm. I had to use tracking autofocus to keep the picture sharp. My brother was also in the boat with his smartphone, and he didn’t like any of the shots he got.

Nikon D7100, 200mm, f/2.8, 1/2000 second, ISO 100

6. Portraits

The last of my reasons that dedicated cameras are better than smartphones relates to portrait photography.

This one might ruffle some feathers because phones have gotten so much better at portraits in recent years. In fact, some people can’t even tell the difference between portrait-style images shot on mobile phones and actual portraits taken with a dedicated camera. I have trouble sometimes too. In the coming years, mobile phones are going to keep getting better and better.

For now, and into the foreseeable future, dedicated cameras still have a significant advantage.

Software and AI, and computational horsepower can do a lot, but they can’t keep up with a good lens and physics.

In the picture below, the girl’s eyes are tack sharp but there is a subtle falloff as you look towards the edge of her face. Her hair goes from sharp to blurry in a smooth, even fashion.

The background isn’t just blurry – it’s obliterated. Mobile phones can’t do that.

Nikon D750, 170mm, f/2.8, 1/180 second, ISO 100

You don’t need expensive gear to take great portraits either.

In fact, you can spend far less on a used DSLR or mirrorless camera than you would on a mobile phone with portrait mode.

The shot below was taken on a Nikon D200, which came out in 2006, and can be found today for about $150.

The lens is a cheap 50mm f/1.8. And the results blow away anything you can get from a mobile phone.

All the subtle details, like the way her eye is in focus but her ears are slightly blurry, to her hair slowly fading away, to the bokeh in the background, make this image a cut above what you could get from a smartphone. Just another reason that dedicated cameras are better than smartphones for photography.

Nikon D200, 50mm, f/1.8, 1/250 second, ISO 400

Conclusion

Before anyone gets out a bucket of tar and some feathers, please understand that I think smartphone cameras are amazing!

Despite the things smartphone cameras lack, they can take incredible pictures and technology will only make them better with time. I just think it’s important to understand their limitations and have a sense of some of the pictures they can’t yet achieve.

What about you?

I’m curious what your experience has been with smartphone pictures. Does your smartphone take the kinds of shots you want, or have you found that it can’t yet replace your DSLR or mirrorless camera?

I’d love to hear from you. Feel free to share your thoughts and example images in the comments below.



Digital Photography School

 
Comments Off on 6 Reasons Why Dedicated Cameras are Better than Smartphones for Photography

Posted in Photography

 

Analysis predicts drone Remote ID will cost 9X more than expected, DJI urges FAA to reconsider ruling

06 Mar

After numerous delays, the Federal Aviation Administration (FAA) released its Notice of Proposed Rulemaking (NPRM) for the Remote Identification of Unmanned Aircraft Systems at the end of last year. The 60-day public commenting period closed this past Monday, March 2nd, with over 52,000 comments submitted during that time.

DJI, the world’s leading drone manufacturer, has supported the need for Remote ID since 2017. In the interest of moving the industry forward, a proper ruling would allow flights at night, over people and beyond visual line of sight. When the NPRM was released, however, DJI publicly chastised the FAA for not incorporating recommendations submitted by the 74 stakeholders that make up the Aviation Rulemaking Committee.

In its 89-page comment to the FAA, DJI cites independent economic analysis that was prepared by Dr. Christian Dippon, Managing Director at NERA Economic Consulting. The study concludes that the societal costs associated with the Remote ID NPRM would total $5.6 billion. This makes it 9 times more costly than the $582 million the FAA predicts for the next decade.

“I worry about an impact on innovation, with fewer people interested in using drones,” – Brendan Schulman

The long-term ramifications, should the Remote ID NPRM pass in its current form, will extend beyond financial burdens. ‘I worry about an impact on innovation, with fewer people interested in using drones. Our economist’s survey found at least a 10% drop in drone activity if the proposal were implemented, but I think it could be much higher as the full impact is felt by operators,’ Brendan Schulman, DJI’s Vice President of Policy & Legal Affairs, tells DPReview.

Remote ID, simply put, is a digital license plate for drones. It allows authorities to identify the location, serial number, and a remote pilot’s identity in near real-time. The FAA is proposing that almost all drones should transmit that information over wireless networks to a service provider’s database. NERA’s study concludes that the monthly cost of a network-based service for a remote pilot would be $9.83 instead of the FAA’s $2.50 estimate.

A few vocal critics have suggested that DJI’s involvement in drafting Remote ID rulemaking has served their own interests, and that regulations will amount to a multi-billion dollar gain for the company. ‘The critics missed the context and history. Since 2017 we knew Remote ID was inevitable as a government mandate, and have been advocating for the best possible result for all drone users: low costs and burdens. Everything we have done on this topic has been focused on those goals. Keep costs low and respect drone user privacy. For example, in March 2017 we released a whitepaper strongly advocating for pilot privacy,’ Schulman explains.

DJI has advocated for a ‘drone-to-phone’ solution that provides Remote ID information on common smartphones without burdening drone operators with any extra costs or effort. DJI says that its solution is cheaper and easier than what the FAA is proposing. Any new ruling on Remote ID will not likely take effect until 2024.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Analysis predicts drone Remote ID will cost 9X more than expected, DJI urges FAA to reconsider ruling

Posted in Uncategorized

 

Why Leica’s M10 Monochrom is more than just a gimmick

17 Jan
The M10 Monochrom is Leica’s third mono-only digital rangefinder, but the lower base ISO of the latest camera extends its flexibility.

The Leica M10 Monochrom is the company’s third mono-only rangefinder. It uses an entirely new 40MP sensor, rather than borrowing the 24MP chip from the other M10 models.

We think the Bayer filter array is an amazing creation whose benefits massively outweigh its drawbacks, but there are a few reasons why going without color filters is more than just a gimmick.

Higher detail capture

The obvious benefit of a monochrome sensor is that you don’t need to demosaic: each pixel you capture becomes one pixel in your final image. Because you don’t need to interpolate missing color values for each pixel, you don’t need to call on neighboring pixels, and so you avoid the slight blurring effect that interpolation introduces.

The final image will be inherently sharper than most color cameras can achieve (Foveon sensors being the key exception to this).
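A toy illustration of why interpolation softens detail: reconstructing missing samples by averaging neighbors smears a hard edge, while a mono sensor records every pixel directly. Real demosaicing algorithms are far more sophisticated than this 1-D sketch, but the principle holds:

```python
# Toy 1-D "demosaic": imagine a color sensor that only samples a channel at
# every other pixel and must interpolate the rest; a mono sensor samples all.
scene = [0, 0, 0, 0, 100, 100, 100, 100]   # a hard edge in the scene

mono = scene[:]                            # mono sensor: every pixel recorded
sampled = scene[::2]                       # color sensor: every other pixel
# Fill the missing pixels by averaging neighbours (simple linear interpolation).
demosaiced = []
for i, v in enumerate(sampled):
    demosaiced.append(v)
    nxt = sampled[i + 1] if i + 1 < len(sampled) else v
    demosaiced.append((v + nxt) // 2)

print(mono)        # [0, 0, 0, 0, 100, 100, 100, 100] -- the edge stays hard
print(demosaiced)  # [0, 0, 0, 50, 100, 100, 100, 100] -- the edge is smeared
```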

Higher base ISO

The color filters used on most sensors absorb around 1EV of the light, since each filter has to absorb the two colors it’s not allowing to pass through to the sensor (the green filter absorbs the red and blue light, for instance).

The M10 Monochrom’s base ISO of 160 is lower than that of previous mono cameras, but higher than that of a camera with a color filter array.

This means that the silicon of a monochrome sensor receives around one stop more light at any given exposure. The consequence is that it becomes saturated and clips highlights around one stop earlier, at its lowest amplification setting. The result is that its base ISO tends to be rated one stop higher than a chip with a CFA would be. On the M10 Monochrom, the base ISO is given as 160 (rather than 320 on previous models).

This can be challenging, since it means having to use exposures that are 1EV lower than you’d expect on a color camera. In bright light, this is likely to mean stopping down when you hit the M10’s 1/4000th sec maximum shutter speed. But it’s worth noting that there isn’t any image quality cost to this.

Better tonal quality, ISO for ISO

Usually, reducing exposure by 1EV results in photon shot noise being one stop more visible (this reduction in light capture is the main cause of high ISO images looking noisier).

But, although the M10 Monochrom’s base ISO of 160 means using an exposure that’s half as bright as the ISO 80 exposure you’d expect to need on a color version, the Monochrom’s sensor still experiences the same amount of light: there’s no filter stealing half of it.

The tonal quality of this ISO 12500 shot is likely to be more comparable to that of an ISO 6400 shot on a color camera, since the sensor will be seeing the same amount of light, despite a darker exposure. The dynamic range is likely to be similar, too, since less amplification will have been applied.

In other words, you’ll get the same tonal quality as a color camera shot at 1EV lower ISO. And, while the higher base ISO presents an exposure challenge in bright light, it means you get tonal quality that’s a stop better in low-light situations.

And that’s before you consider the fact that all noise will present as luminance noise, rather than the chroma noise that most people find more objectionable. So you get a one stop improvement in noise in low light and the noise that is present is less visually distracting, which means less need to apply detail-degrading noise reduction.
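The exposure bookkeeping above can be sketched in a few lines. It assumes the ~1EV filter loss quoted earlier and the standard square-root relationship between collected light and photon shot noise:

```python
import math

# Photon shot noise scales with the square root of the light collected,
# so signal-to-noise ratio improves with every extra stop of light.
def relative_snr(light):                 # light in arbitrary linear units
    return math.sqrt(light)

cfa_loss_stops = 1.0                     # ~1EV absorbed by a color filter array
exposure = 1.0                           # same shutter/aperture for both cameras

mono_light = exposure                    # no filters: all light hits the silicon
color_light = exposure / 2 ** cfa_loss_stops

# The mono sensor collects twice the light -> sqrt(2) better shot-noise SNR,
# i.e. tonal quality comparable to a color camera one ISO stop lower.
print(round(relative_snr(mono_light) / relative_snr(color_light), 2))  # 1.41
```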

New sensor

The big unknown with the M10 Monochrom is the specific sensor performance. We’ve not seen a 40MP full-frame sensor before, so can’t yet be sure what its performance will be like. The 24MP sensor used in the existing M10 models is pretty good, but slightly underperforms the standard set by the 24MP sensor in cameras such as the Nikon D750, meaning it’s even further behind the newer chip used by the likes of Nikon, Panasonic, Sigma and Sony.

We can’t yet be sure how Leica has managed to reduce the base ISO, compared to the previous model. An ISO of 160 is very low for a mono camera, since it would equate to around ISO 80 if a color filter array were applied. So it’ll be interesting to assess the dynamic range when the camera becomes available.

We won’t know how well the M10 Monochrom’s sensor performs until we get a chance to go out and shoot with it. Probably out in one of the classic sports cars Leica seems to expect us to have.

All Leica has said is that the chip in the M10 Monochrom has been 'designed from the ground up with Mono in mind,' which we're a little skeptical about. It's true that we've not seen this 40MP chip elsewhere, but it's hard to imagine that, even at Leica prices, the M10 Monochrom will ever generate enough money to cover the cost of developing a dedicated chip.

What is true, though, is that the smaller pixels of a 40MP sensor will make it less prone to aliasing than a 24MP sensor would be, since higher resolution sensors can accurately portray higher frequency detail, before getting overwhelmed and rendering an alias, rather than a correct representation. That said, simply being a monochrome sensor massively reduces the risk of aliasing (Bayer sensors sample red and blue at 1/4 their full pixel count, so can produce color aliasing with relatively low frequency detail).
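The sampling argument can be put in rough numbers (an illustrative sketch, not specific to any sensor): red and blue sites appear at every other photosite along each axis, so their Nyquist limit sits at half that of a monochrome sensor with the same pixel pitch.

```python
# On a Bayer sensor, half the photosites are green and a quarter each are
# red and blue; a monochrome sensor samples luminance at every photosite.
bayer_fraction = {"green": 1 / 2, "red": 1 / 4, "blue": 1 / 4}

# A quarter of the photosites per channel means half the linear sampling
# density, so red/blue detail aliases at half the spatial frequency a
# same-pitch monochrome sensor can resolve cleanly.
red_blue_linear_sampling = bayer_fraction["red"] ** 0.5
print(red_blue_linear_sampling)  # 0.5
```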

Beyond the technical

It feels a bit strange writing about the technical advantages of a Leica rangefinder, since that’s not historically been an area in which they’ve excelled, and probably isn’t high on the list of why anyone buys one.

Of course, we’re DPReview, so we’re always going to consider the technical aspects of camera performance. But we recognize that a monochrome camera is about more than this. If you go out knowing that every photo has to be black and white, you look at the world in a different way: you start to concentrate on compositions of light and shade, not just compelling color or the warmth of the light. It’s a different way of thinking.

Which is to say: we’re really looking forward to getting a chance to go shooting with the M10 Monochrom.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Why Leica’s M10 Monochrom is more than just a gimmick

Posted in Uncategorized

 

DPReview TV: Why electronic image stabilization works better on your GoPro than your camera

21 Nov

Have you ever looked at your smartphone or GoPro and said, “I wish my camera could stabilize an image like that?!” Chris explains the limits of electronic image stabilization, and why your camera probably can’t stabilize like that.

Subscribe to our YouTube channel to get new episodes of DPReview TV every week.

  • Introduction
  • What is electronic stabilization?
  • The effect of shutter speed
  • The effect of rolling shutter
  • The effect of frame rate
  • Conclusion

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on DPReview TV: Why electronic image stabilization works better on your GoPro than your camera

Posted in Uncategorized

 

The iPhone 11 is more than just Apple catching up to Android

18 Sep

Apple announces iPhone 11 (Pro)

Major smartphone manufacturers introduce new models on a yearly cadence. Camera upgrades tend to be a major focus, with little else apparently to differentiate new models from old ones. Often, what seem like small, incremental upgrades can have significant impact on photo and video quality. The iPhone XS, for example, dramatically improved image quality in high contrast scenes thanks to the sensor’s ability to capture ‘inter-frames’ – short exposures in between longer ones – to improve dynamic range and noise. Similarly, 4K video was improved with multi-frame HDR capture.

Last week, Apple announced numerous updates to the cameras in the iPhone 11, some of which will inevitably be seen as attempts to catch up to capabilities of contemporary Android offerings. But, taken together, we think they stack up to meaningful upgrades that potentially make an already very capable camera one of the most compelling ones on the market.

See beyond your frame

The iPhone 11 offers a whopping 13mm (35mm-equivalent) field of view with its ultra-wide, '0.5x' lens, making it the first Apple phone to feature an ultra-wide angle lens, a feature that's been present on numerous Android phones for some time. Wide angle lenses often add drama and a sense of depth to everything from landscapes and portraits to architecture and still life. They allow for creative framing options, juxtaposing interesting foreground objects against distant ones, and they're useful when you simply can't step any further back from your subject.

The iPhone 11 models alert you to the potential presence of objects of interest beyond your current framing by showing you the wider field-of-view within the camera user interface. Simply tap the ‘1x’ focal length multiplier button to zoom out (or zoom in, on the Pro models).

Refined image processing pipeline

Newer, faster processors often mean increased photo and video capability, and the iPhone 11 is no exception. Its image processing pipeline, which handles everything from auto white balance to auto exposure, autofocus, and image ‘development’, gets some new features: a 10-bit rendering pipeline upgraded from the previous 8-bit one, and the generation of a segmentation mask that isolates human subjects and faces, allowing for ‘semantic rendering’.

10-bit rendering should help render high dynamic range images without banding, which could otherwise result from the extreme tone-mapping adjustments required. Semantic rendering allows faces to be processed differently from other portions of the scene, allowing for more intelligent tone mapping and local contrast operations in images with human subjects (for example, faces can look ‘crunchy’ in high contrast scenes if local contrast is uniformly preserved across the entire image). The end result? More pleasing photos of people.
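The banding benefit comes down to simple arithmetic (a quick illustrative calculation, not Apple's pipeline): moving from 8-bit to 10-bit quadruples the number of code values per channel.

```python
# Code values available per channel at each bit depth.
levels_8bit = 2 ** 8       # 256
levels_10bit = 2 ** 10     # 1024

# Four times as many code values means aggressive tone-mapping can
# stretch shadow tones without visible steps between adjacent levels.
print(levels_10bit // levels_8bit)  # 4
```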

Night mode

The general principle of night modes on smartphones is to use burst photography to capture multiple frames. Averaging pixels from multiple exposures reduces noise, allowing the camera software to brighten the image with less noise penalty.
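The noise benefit of averaging follows the familiar 1/√N rule. Here's a small simulation (illustrative Python with made-up signal and noise values, not any phone's actual pipeline) showing that averaging four frames cuts random noise roughly in half:

```python
import random
import statistics

random.seed(0)

def noise_after_averaging(n_frames, signal=100.0, noise=8.0, trials=2000):
    """Std-dev of one pixel's value after averaging n_frames noisy captures."""
    means = [
        statistics.fmean(random.gauss(signal, noise) for _ in range(n_frames))
        for _ in range(trials)
    ]
    return statistics.stdev(means)

single = noise_after_averaging(1)
averaged = noise_after_averaging(4)
# Averaging four frames should cut noise roughly in half (1 / sqrt(4)).
print(round(single / averaged, 1))
```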

Google set the bar for low light photography with its Night Sight mode. Other Android phones soon added their own similar modes, making the iPhone’s lack of such a mode particularly conspicuous (third party solutions like Hydra haven’t offered quite the level of detail as the best Android implementations, and of course require you to launch a separate app).

Apple has developed its own Night mode on the iPhone 11 phones, which turns on automatically under dim conditions. Apple’s approach is slightly different from Google’s, using ‘adaptive bracketing’ to capture and fuse multiple exposures with potentially differing shutter speeds (the Pixel takes a burst of images at the same shutter speed).

Varying shutter speeds to capture both short and long exposures can help reduce blur with moving subjects. Information from shorter exposures is used for moving subjects, while longer exposures – which are inherently brighter and contain less noise – can be used for static scene elements. Each frame is broken up into many small blocks before alignment and merging. Blocks that have too much motion blur are discarded, with a noise penalty resulting from fewer averaged frames for that scene element.
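The block-wise merge described above might look something like this minimal sketch (hypothetical threshold and 1D 'tiles' for brevity; the real alignment and merging is far more sophisticated):

```python
def merge_tiles(base_tiles, other_tiles, motion_threshold=10.0):
    """Merge aligned tiles from two frames, rejecting tiles that moved.

    Each *_tiles entry is a list of pixel values for one tile. Tiles whose
    mean brightness differs too much from the base frame are assumed to
    contain motion and are discarded, so that region keeps base-frame noise.
    """
    merged = []
    for base, other in zip(base_tiles, other_tiles):
        base_mean = sum(base) / len(base)
        other_mean = sum(other) / len(other)
        if abs(base_mean - other_mean) > motion_threshold:
            merged.append(base)  # motion detected: keep base tile only
        else:
            merged.append([(b + o) / 2 for b, o in zip(base, other)])
    return merged

static = merge_tiles([[100, 102]], [[101, 99]])    # averaged (less noise)
moving = merge_tiles([[100, 102]], [[150, 160]])   # base kept (no ghosting)
print(static, moving)
```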

Deep Fusion

Google’s Night Sight mode isn’t just about better photos in low light. Night Sight uses burst photography and super resolution techniques to generate images with more detail, less noise, and less moiré thanks to the lack of demosaicing (slight shifts from frame to frame allow the camera to sample red, green and blue information at each pixel location). ‘Deep Fusion’, available in a soon-to-be-released update later this year, seems to be Apple’s response to Google’s Night Sight mode.

Deep Fusion captures up to 9 frames and fuses them into a higher resolution 24MP image. Four short and four secondary frames are constantly buffered in memory, with older frames discarded to make room for newer ones. This buffering guarantees that the 'base frame' (the most important frame, to which all other frames are aligned) is taken as close to your shutter press as possible, ensuring very short, or zero, shutter lag so the camera captures your desired moment.
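A fixed-size ring buffer captures the idea: old frames fall off the back as new ones arrive, so the frame nearest the shutter press is always on hand. This is a hypothetical sketch (frame IDs standing in for image data), not Apple's implementation:

```python
from collections import deque

# Ring buffer of the most recent frames; maxlen drops old entries
# automatically as new ones are appended.
frame_buffer = deque(maxlen=4)

for frame_id in range(10):      # camera continuously streaming frames
    frame_buffer.append(frame_id)

# Shutter pressed here: the newest buffered frame becomes the 'base
# frame' that all other frames are aligned to — effectively zero lag.
base_frame = frame_buffer[-1]
print(list(frame_buffer), base_frame)  # [6, 7, 8, 9] 9
```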

After you press the shutter, one long exposure is taken (ostensibly to reduce noise), and subsequently all 9 frames are combined – ‘fused’ – presumably using a super resolution technique with tile-based alignment (described in the previous slide) to produce a blur and ghosting-free high resolution image. Apple’s SVP of Worldwide Marketing Phil Schiller also stated that it’s the ‘first time a neural engine is responsible for generating the output image’. We look forward to assessing the final results.

Portrait mode

Apple is famous for using its technologies to perfect ideas that other companies introduced, and that could certainly be said of the iPhone's Portrait mode. Fake blurred background modes have been around for years, even on some compact cameras, but they were never convincing. By applying depth-mapping technology to the problem, Apple made a portrait mode that worked, and soon the feature was everywhere. We're slowly seeing a similar trend with portrait re-lighting: researchers and companies have been quick to develop relighting techniques, some of which don't even require a depth map.

The iPhone 11 updates portrait mode in a few significant ways. First, it offers the mode even with the main 26mm equivalent camera, allowing for shallow depth-of-field wide-angle portraiture. The main camera module always has the best autofocus, and its sensor has been updated to have '100% focus pixels' (hinting at a dual-pixel design), so wide-angle portrait mode will benefit from this as well.

Second, on the Pro models, the telephoto lens used for more traditional portraits has been updated: its F2.0 aperture lets in 44% more light than the F2.4 aperture on previous telephoto modules. That's a little over half a stop of improvement in light-gathering ability, which should help both image quality and autofocus in low light. Telephoto modules on most smartphone cameras have struggled with autofocus in dim conditions, hunting and producing misfocused shots, so this is a welcome change.
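The aperture math checks out, as a quick illustrative calculation shows: light gathered scales with the square of the f-number ratio.

```python
import math

def relative_light(f_old, f_new):
    """Light gathered scales with the square of the f-number ratio."""
    return (f_old / f_new) ** 2

gain = relative_light(2.4, 2.0)   # 1.44, i.e. 44% more light
stops = math.log2(gain)           # a little over half a stop
print(round(gain, 2), round(stops, 2))
```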

And thirdly…

Portrait relighting

The iPhone 11 offers a new portrait relighting option: ‘High-Key Light Mono’. This mode uses the depth map generated from the stereo pair of lenses to separate the subject from the background, blow out the background to white, and ‘re-light’ the subject while making the entire photo B&W. Presumably, the depth map can aid in identifying the distance of various facial landmarks, so that the relighting effect can emulate the results of a real studio light source (nearer parts of the face receive more light than farther ones). The result is a portrait intended to look as if it were shot under studio lighting.

We’ve now talked a bit about the new features iPhone 11 brings to the table, but let’s turn our attention backwards and take a look at the ways in which iPhone cameras are already leading the industry, if not setting standards along the way.

Sublime rendering

Straight out of camera, iPhone photos are, simply put, sublime. Look at the iPhone XS shot above: with a single button press, I've captured the bright background and it looks as if my daughter has some fill light on her. If you want to get technical: white balance is well judged (not too cool), skintones are great (not too green), and wide dynamic ranges are preserved without crunchy results, a problem that can arise when tone-mapping a large global contrast range while retaining local contrast.

Much of this is thanks to segmented processing techniques that treat human subjects differently from the rest of the scene when processing the image. Digging deeper and looking at images at pixel level, Apple's JPEG engine could do a better job of balancing noise reduction and sharpening: images can often appear overly smoothed in some areas, with aggressive sharpening and overshoot in others. This may be in part because results have been optimized for display on high-DPI Retina devices; a Raw option that still utilizes all the computational multi-frame 'smarts' would go a long way toward remedying this for enthusiasts and pros.

But it’s hard to argue that iPhone’s default color and rendition aren’t pleasing. In our opinion, Apple’s white balance, color rendering, and tone-mapping are second to none. The improvements to image detail, particularly thanks to Apple’s as-of-yet unreleased ‘Deep Fusion’ mode, should (we hope) remedy many of our remaining reservations regarding pixel-level image quality.

HDR Photos

No, not the HDR you're thinking of, the kind that creates flat images from large dynamic range scenes. We're talking about HDR display of HDR captures. Think HDR10 and Dolby Vision presentations of 4K UHD video. Traditionally, when capturing a high contrast scene, we had two processing options for print or for display on old, dim 100-nit monitors: (1) preserve global contrast, often at the cost of local contrast, leading to flat results; or (2) preserve local contrast, often clipping shadows and highlights to keep the image from looking too unnatural. The latter is what most traditional camera JPEG engines do in the absence of dynamic range compensation modes.

With the advent of new display technology like OLED, capable of 10x or higher brightness than old displays and print, as well as nearly infinite contrast, that trade-off need no longer exist. The iPhone X was the first device to support the HDR display of HDR photos. Since then, iPhones have been able to capture a wide dynamic range and color gamut and then display it using the full range of their class-leading OLED displays. This means HDR photos need not look flat: they retain large global contrast, from deep shadows to bright highlights, while still looking contrasty, with pop, and all without clipping tones and colors, getting closer to reproducing the range of tones and colors we see in the real world.

It’s hard to show the effect, and much easier to experience it in person, but in the photo above we’ve used a camera to shoot an iPhone XS displaying HDR (left) vs. non-HDR (right) versions of the same photo. Note how the HDR photo has brighter highlights, and darker midtones, creating the impression that the sky is much brighter than the subject (which it is!). The bright displays on modern phones mean that the subject doesn’t look too dark compared to the non-HDR version, she just looks more appropriately balanced against the sky, rather than appearing almost the same brightness as the sky.

Wide color (P3)

Apple is also leading the field in wide gamut photography. Ditching the age-old sRGB color space, iPhone images can now fully utilize the P3 color gamut, which means images can contain far more saturated colors. In particular, more vivid reds, oranges, yellows and greens. You won’t see them in the image above because of the way our content management system operates, but if you do have a P3 display and a color managed workflow, you can download and view the original image here. Or take a look at this P3 vs. sRGB rollover here on your iPhone or any recent Mac.

Apple is not only taking advantage of the extra colors of the P3 color space, it's also encoding its images in the High Efficiency Image Format (HEIF), an advanced format intended to replace JPEG. HEIF is more efficient and also allows for 10-bit color encoding (avoiding banding while allowing for more colors) and HDR encoding, so a larger range of tones can be displayed on HDR displays.

Video

The video quality from the iPhone XS was already class-leading, thanks to the use of a high quality video codec and advanced compression techniques that suppressed common artifacts like macro-blocking and mosquito noise. 4K video up to 30 fps also had high dynamic range capture: fusing both bright and dark frames together to capture a wider range of contrast. Check out this frame grab of a very high contrast scene, with complex motion due to constant camera rotation. Note the lack of any obvious artifacts.

The iPhone 11 takes things further by offering extended dynamic range (EDR) for 4K 60p capture. Android devices are still limited to standard dynamic range capture. While a few Android devices offer a high dynamic range (HDR) video output format – such as HDR10 or HLG – to take advantage of the larger contrast of recent displays, without HDR capture techniques (fusion of multiple frames) this benefit is limited.

To sum up: we expect that the iPhone 11 will offer the highest quality video available in a smartphone, with natural looking footage in even high contrast scenes, now at the highest resolution and frame rates. We do hope Apple pairs its HDR video capture with an HDR video output format, like HDR10 or HLG. This would allow for a better viewing experience of the extended dynamic range (EDR) of video capture on the latest iPhones, with more pop and a wider color gamut.

Video experience

Apple is also looking to change the experience of shooting video in the iPhone 11 models. First, it’s easier to record video than it was in previous iterations: just hold down the shutter button in stills mode to start shooting a video. Instant video recording helps you capture the moment, rather than miss it as you switch camera modes.

Perhaps more exciting are the new video editing tools built right into the camera app. These allow for easy adjustment of image parameters like exposure, highlights, shadows, contrast and color. And the interface appears to be as intuitive as ever.

Multiple angles and ‘Pro’ capture

Advanced video shooters are familiar with the FiLMiC Pro app, which allows for creative and total control over movie shooting. The CEO of FiLMiC was invited on stage to talk about some of the new features, and one of the coolest was the ability to record multiple streams from multiple cameras. The app shows all four streams from all four cameras (on the Pro), allowing you to choose which ones to record from. You can even record both participants of a face-to-face conversation using the front and rear cameras. This opens up new possibilities for creative framing, in some cases obviating the need for A/B cameras.

Currently it’s unclear how many total streams can be recorded simultaneously, but even two simultaneous streams opens up creative possibilities. Some of this capability will come retroactively to 2018 flagship devices, as we describe here.

Conclusion

Much of the sentiment after the launch of the iPhone 11 has centered around how Apple is playing catch-up with Android devices. And this is somewhat true: ultra-wide angle lenses, night modes, and super-resolution burst photography features have all appeared on multiple Android devices, with Google and Huawei leading the pack. No one is standing still, and the next iterations from these companies, and others, will likely leapfrog their respective capabilities even further.

Even if Apple is playing catch up in some regards though, it’s leading in others, and we suspect that when they ship, the combination of old features and new – like Deep Fusion and Night mode – will make the iPhone 11 models among the most compelling smartphone cameras on the market.

As the newest iPhone, the iPhone 11's camera is inevitably the best Apple has made. But is the iPhone 11 Pro the best smartphone camera around right now? We'll have to wait until we have one in our hands. And, of course, the Google Pixel 4 is a wildcard, and just around the corner…

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on The iPhone 11 is more than just Apple catching up to Android

Posted in Uncategorized

 

DPReview TV: What is shutter angle, and why is it better than shutter speed when shooting video?

27 Jun

Have you heard video pros talk about using shutter ‘angle’ instead of shutter speed? Chris explains what shutter angle is and why it’s often more useful than shutter speed for video work.

Get new episodes of DPReview TV every week by subscribing to our YouTube channel!

  • Introduction
  • A bit of history
  • 360-degree shutter
  • 180-degree shutter
  • 90-degree shutter
  • Downsides of using shutter speed
  • Why shutter angle is more useful than shutter speed
  • Wrap-up

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on DPReview TV: What is shutter angle, and why is it better than shutter speed when shooting video?

Posted in Uncategorized