
Posts Tagged ‘technology’

Rumor: Samsung Galaxy S11+ sensor to use Nonacell technology

20 Dec

Yesterday we posted a story about the technology in Samsung’s 108MP ISOCELL Bright HMX image sensor and said the chip was likely to make an appearance in the Korean manufacturer’s upcoming Galaxy S11 series.

It looks like this might not be entirely correct. According to a tweet by well-known mobile industry leaker Ice Universe, the Galaxy S11+ will use a customized version of the chip featuring a technology that Samsung calls Nonacell.

The standard sensor comes with the company’s Tetracell technology, also known as Quad-Bayer, which uses pixel merging for better detail and lower noise in low light. Nonacell follows the same concept but combines — you guessed it — nine pixels into one instead of four.

The sensor is said to be called ISOCELL Bright HM1 and will be the successor to the HMX variant we’ve seen in the Xiaomi Mi Note 10. On the latter, four 0.8µm pixels are combined into one 1.6µm effective pixel. On the new sensor, the effective pixel size would increase to 2.4µm, theoretically allowing for significantly improved low light performance at a still more than acceptable 12MP output size.
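
For readers who like to see the arithmetic, here is a minimal NumPy sketch of the binning concept. It is illustrative only: real Tetracell/Nonacell sensors merge same-colored pixels sitting under a shared color filter, and the tiny array here is a stand-in for the actual 108MP mosaic.

```python
import numpy as np

def bin_pixels(raw, n):
    """Merge n x n groups of photosites into one (Tetracell: n=2, Nonacell: n=3).

    Summing the charge of n*n small photosites approximates one larger
    photosite, trading resolution for light-gathering ability.
    """
    h, w = raw.shape
    raw = raw[:h - h % n, :w - w % n]                # crop to a multiple of n
    return raw.reshape(h // n, n, w // n, n).sum(axis=(1, 3))

raw = np.random.poisson(5.0, size=(12, 12)).astype(float)  # toy sensor readout
quad = bin_pixels(raw, 2)   # 4 pixels -> 1: quarter the pixel count
nona = bin_pixels(raw, 3)   # 9 pixels -> 1: 108MP would become 12MP
print(raw.shape, quad.shape, nona.shape)             # (12, 12) (6, 6) (4, 4)
```

Nine 0.8µm photosites binned this way behave like a single pixel with nine times the light-gathering area, which is where the claimed 2.4µm effective pixel (3 × 0.8µm on each side) comes from.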

The Galaxy S11 series is scheduled to launch in February 2020, so hopefully we’ll be able to take a closer look at the sensor and its performance then.

Articles: Digital Photography Review (dpreview.com)

 

 

Sigma’s new Classic Art Prime Cine and /i Technology PL lens kits to sell for $44K

07 Dec

Sigma has announced pricing and availability for its upcoming Classic Art Prime Cine and /i Technology-compatible Cine Art Prime PL-mount lenses. The former are variants of its Cine Art Primes with simpler coatings for a classic cinema aesthetic. The company plans to release the Classic Art Prime Cine line as a set of 10 lenses in January 2020 for $43,999; these lenses will only be available as part of the full set.

Unlike the Classic Art Prime Cine lenses, the /i Technology-compatible versions will be released for individual purchase in two different batches, the first going up for sale later this month and the second going up for sale in late January 2020. The lenses will be available from authorized dealers.

The /i Technology versions communicate shooting metadata to camera bodies that are compatible with Cooke’s communication protocol.

The following /i Technology-compatible lenses will be priced at $3,899 each, with availability listed below:

  • 20mm T1.5 (late December 2019)
  • 24mm T1.5 (late December 2019)
  • 28mm T1.5 (late January 2020)
  • 35mm T1.5 (late December 2019)
  • 40mm T1.5 (late January 2020)
  • 50mm T1.5 (late December 2019)
  • 85mm T1.5 (late December 2019)

The remaining three new /i Technology-compatible lenses will be priced at $5,499 each:

  • 14mm T2 (late January 2020)
  • 105mm T1.5 (late January 2020)
  • 135mm T2 (late January 2020)

According to the company, the movie Top Gun: Maverick, scheduled to hit theaters early next year, was shot using early versions of Sigma’s new FF High Speed Prime /i Technology-compatible lenses. As the name indicates, these lenses are compatible with the /i Technology communication protocol from Cooke Optics.

Articles: Digital Photography Review (dpreview.com)

 

 

Canon patent application sheds more light on its upcoming IBIS technology

17 Sep

Rumors about Canon’s much-anticipated in-body image stabilization (IBIS) are a dime a dozen, but a recent patent application from Canon dives into more detail than we’ve seen before, further lending credence to rumors that the technology could make it into Canon’s next R-series camera body.

First discovered by Canon News, Japanese patent application 2019-152785 details how in-body stabilization can be improved by more accurately moving and positioning the sensor along its axes. According to the patent, Canon plans to do this through the use of a magnetic arrangement known as a Halbach array.

An illustration from the patent showing how in-lens stabilization would work alongside the in-body stabilization to achieve optimal results.

The Halbach array, first described by John C. Mallinson in 1973, is an arrangement of magnets in which the magnetic field is reinforced on one side of the array while being nearly canceled on the other. Halbach arrays have uses ranging from something as simple as a refrigerator magnet to something as intricate as a particle accelerator, where they are used to focus particle beams.
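
The one-sided field is easy to demonstrate numerically. Below is a toy Python sketch (our illustration, not anything from Canon’s patent) that sums ideal point-dipole fields for a row of five magnets whose moments rotate 90 degrees per step, then compares the field strength at points above and below the row.

```python
import numpy as np

def dipole_field(pos, m, r):
    """Field of an ideal point dipole at pos with moment m, evaluated at r.
    Constant prefactors are dropped; only relative magnitudes matter here."""
    d = r - pos
    dist = np.linalg.norm(d)
    rhat = d / dist
    return (3.0 * rhat * (m @ rhat) - m) / dist**3

# Five magnets in a row, moments rotated 90 degrees per step: the classic
# linear Halbach sequence (+x, +y, -x, -y, +x).
positions = [np.array([float(i), 0.0]) for i in range(5)]
angles = np.deg2rad([0.0, 90.0, 180.0, 270.0, 360.0])
moments = [np.array([np.cos(a), np.sin(a)]) for a in angles]

def total_field(point):
    return np.linalg.norm(sum(dipole_field(p, m, point)
                              for p, m in zip(positions, moments)))

print(total_field(np.array([2.0, +1.0])))   # field on one side of the array
print(total_field(np.array([2.0, -1.0])))   # field on the other; one side is much stronger
```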

Canon’s implementation, however, would use Halbach arrays to ensure that a correction applied along one axis doesn’t negatively affect another. Specifically, the patent application details how Canon would use a Halbach array on the vertical (y-axis) stabilization unit to ensure the horizontal (x-axis) correction isn’t skewed when y-axis corrections are applied.

A pair of illustrations from the patent showing how the Halbach array would be positioned.

The patent application also explains how the IBIS would work hand-in-hand with in-lens stabilization units to create the most effective stabilization possible. Specifically, the patent says the in-lens stabilization would account for corrections on the XY plane (2-axis stabilization), while the in-body stabilization would account for shake on the XY-theta planes (3-axis stabilization with vertical, horizontal and roll compensation). Gyro units within both the lens and camera would likewise work together on angular corrections, so the stabilization element in the lens could be adjusted in coordination with the image sensor to most accurately correct the optical axis.


It’s unknown, of course, if this particular patent application will be used down the road in a future IBIS arrangement, but it is one of the more detailed patents we’ve come across from Canon regarding the technology. Based on this particular patent application, it would be a 5-axis IBIS unit, similar to those found in Sony and Nikon mirrorless cameras.

Articles: Digital Photography Review (dpreview.com)

 

 

Xiaomi provides more information about under-screen front camera technology

05 Jun

Earlier this week smartphone manufacturers OPPO and Xiaomi both teased under-screen front cameras that would allow for the design of ‘notch-less’ devices with uninterrupted edge-to-edge displays.

A Xiaomi Senior VP followed up with several tweeted slides that provided additional information about the technology. According to the slides, the display area covering the front camera turns transparent when the camera is activated, allowing light to pass through to the lens and sensor, but looks and works like a normal display when the front shooter is not in use.

To achieve this, Xiaomi is using ‘special low-reflective glass with high transmittance’. The company claims the technology can capture better images than the pinhole solutions we have seen previously, although OPPO VP Brian Shen has stated that the technology is still new and ‘there’s bound to be some loss in optical quality.’

The slides also hint at a 20MP camera hidden under the display, but at this point we don’t know which new model the technology could be implemented in. Given the current buzz around the subject, it likely won’t be too long before we find out.

Articles: Digital Photography Review (dpreview.com)

 

 

Luminar 3.1.0 update introduces ‘human-aware’ Accent AI 2.0 technology

29 Apr

Skylum Software has announced the latest update to its Luminar 3 photo editing software, Luminar 3.1.0. Four significant upgrades have been added, including improvements to its proprietary artificial intelligence tool for enhancing photos. Upgraded from version 1.0, Accent AI 2.0 features facial and object recognition technology that helps photographers create a more authentic effect in their images.

Accent AI 2.0 upgrade

Accent AI was developed to help photographers speed up their workflow. By automatically handling common adjustments like shadows, highlights and contrast, editing an image takes seconds instead of minutes. Accent AI 2.0 boasts improved presets and is “human aware,” meaning it recognizes people in a photo and selectively adjusts skin tones for a more natural look.

Accent AI 2.0 also includes more accurate color correction and detail boosts. If it can’t make a specific detail in a photo look better, that detail is left untouched. While the artificial intelligence suggests enhancements to color, depth, detail, and exposure that eliminate the need to adjust several sliders during development, the photographer retains the flexibility to customize all aspects of the image.

Improved Sync Adjustments command

Photographers can adjust one image using Luminar’s image-aware filters or Looks, select a series of images they want to apply the same changes to, and synchronize them. Filters and Looks are transferred, while image-specific adjustments such as cloning and cropping remain untouched.

RAW + JPEG organization

For those shooting in RAW + JPEG mode, photos are easier to organize and view. When importing pairs of images into Luminar 3.1.0, the option to choose RAW, JPEG, or both files is available. Select one for less clutter or both for side-by-side comparison. If both RAW and JPEG versions of an image are uploaded, the option to delete one file and keep the other in that pair is available. Changes made in one file can be transferred to the other with the Sync Adjustments command.

Improved sorting method

When images are uploaded to Luminar, attributes such as ratings, file size, and color labels can be applied for organization purposes. In Gallery view it’s now easier to locate images, as a second organizational label is automatically applied: when sorting, images are displayed by the new category first, followed by the date.

Windows updates

The Windows version of Luminar 3.1.0 received a slew of updates. They include the ability to import images from a memory card or hard drive and copy them to a folder, post images directly to SmugMug, add folders and user albums to the shortcut list, rotate images by 90-degree increments in the gallery, and install the Luminar plugin into Photoshop Elements.

How to update

Users with Luminar 3 can update for free to version 3.1.0.

  • Mac users can update by choosing Luminar 3 in the top menu bar, then clicking “Check for updates.”
  • Windows users can choose Help > Check for updates on the top toolbar.

Pricing

Skylum is offering special, limited-time pricing through May 14th on its photo editing software and courses. Mixed-computer households can share the same product key for Mac and PC. The software can be operated on up to five devices.

  • $60 / €60 / £56 – Luminar 3.1.0
  • $70 / €70 / £65 – Luminar 3.1.0 & Photography 101 video course by SLR Lounge ($99 value)
  • $129 / €129 / £118 – Bundle (Luminar & Aurora) + Photography 101 course by SLR Lounge ($99 value)
  • Standard pricing: $70 / €70 / £65 for all new users
  • Free trial and 60-day money-back guarantee
  • Free “Photography 101: A-Z Guide to Photography” course from SLR Lounge

DPReview will be independently reviewing Luminar 3.1.0, so stay tuned. For a walkthrough of the improvements described above, check out the video by professional photographer and Skylum Vice President of Product Richard Harrington.

Articles: Digital Photography Review (dpreview.com)

 

 

Report: Huawei P30 Pro uses Sony image sensors and technology from Corephotonics

17 Apr

With its quad-camera setup (triple camera plus ToF sensor), the Huawei P30 Pro is, from an imaging perspective, arguably the most exciting new smartphone this year.

Analysts from French company System Plus Consulting have now had a closer look at the camera hardware, which was co-designed with Leica, and discussed their findings with EE Times. According to cost analysis expert Stéphane Elisabeth, all four image sensors are supplied by market leader Sony.

The primary camera module uses an RYYB color filter array (red, yellow, yellow, blue) instead of the usual RGGB, which increases light sensitivity, while the wide-angle and tele camera units still rely on an RGGB filter. The green channel usually makes up the luminance (detail) information in an image, so yellow filters, which let in red as well as green light, should give cleaner results than an RGGB sensor, at the cost of some ability to distinguish between colors.
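
A quick back-of-the-envelope model shows where the sensitivity gain comes from. Assume (idealized) that the scene delivers one unit of light in each of the red, green, and blue bands, and that a yellow filter passes both the red and green bands:

```python
# Bands passed by each ideal color filter.
passes = {"R": {"red"}, "G": {"green"}, "B": {"blue"}, "Y": {"red", "green"}}

def light_per_quartet(pattern):
    """Total light units collected by one 2x2 filter group."""
    return sum(len(passes[f]) for f in pattern)

print(light_per_quartet("RGGB"))  # 4 units
print(light_per_quartet("RYYB"))  # 6 units: 50% more in this ideal model
```

Real filters and scenes are not this ideal, so actual gains are smaller, and as noted above, the extra sensitivity is traded against color discrimination.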

Unlike some other devices, the time-of-flight (ToF) sensor is not only used for augmented reality applications but also to measure subject distance for autofocus. Signals from all three cameras are processed to create a depth map of a scene and let the photographer focus on a specific object.

Arguably the most innovative element of the camera, though, is the periscope-style tele lens. It is placed horizontally inside the body, and a mirror angled at 45 degrees channels light into the optics and onto the sensor. This allows for an extended optical path – generally a requirement for telephoto lenses – and the result is the first 5x tele zoom in a smartphone. Super-resolution and computational techniques allow for 10x digital zoom using the 5x tele unit, though image quality drops. The analysts also believe the entire camera unit has been assembled by Chinese company Sunny Optical Technology using IP from Corephotonics in Israel. The latter is particularly interesting, as Corephotonics has just been acquired by Huawei rival Samsung.

Articles: Digital Photography Review (dpreview.com)

 

 

EyeQ acquires image optimization company Athentech and its Perfectly Clear technology

05 Apr

Athentech, the company behind the Perfectly Clear image optimization algorithm, which is used by printing services and deployed in Bibble and Corel’s PaintShop Pro among other applications, has been acquired by Canadian company EyeQ. The new owners say they will maintain all of Athentech’s current business and continue to offer the Perfectly Clear technology. They are also planning to invest in areas such as artificial intelligence and innovative workflow solutions.

‘Athentech was built by a team of leading scientists, physicists, and photographers on a mission to make every photo as brilliant, vibrant, and clear as possible, just like our human eye captured, all while maintaining color integrity. Our acquisition is an exciting inflection point that adds more financial muscle and expertise to allow us to upscale this 15-year mission and reach more companies worldwide,’ said Brad Malcolm, President and CEO, EyeQ.

As its first post-acquisition move, the company has announced a new Web API that offers cloud-based access to the same technologies available in the latest SDK, without the need for any integration. The solution is aimed at business users, who can send original JPG files and receive corrected images almost immediately.

EyeQ is a venture-capital backed company and on its website describes itself as ‘an innovative digital imaging company focused on evolving the way businesses correct and process batch imagery.’

Articles: Digital Photography Review (dpreview.com)

 

 

How Google developed the Pixel 3’s Super Res Zoom technology

18 Oct

In a post on the Google AI Blog, Google’s engineers have laid out how they created the new Super Res Zoom technology inside the Pixel 3 and Pixel 3 XL.

Over the past year or so, several smartphone manufacturers have added multiple cameras to their phones with 2x or even 3x optical zoom lenses. Google, however, has taken a different path, deciding instead to stick with a single main camera in its new Pixel 3 models and implementing a new feature it is calling Super Res Zoom.

Unlike conventional digital zoom, Super Res Zoom technology isn’t simply upscaling a crop from a single image. Instead, the technology merges many slightly offset frames to create a higher resolution image. Google claims the end results are roughly on par with 2x optical zoom lenses on other smartphones.

Compared to the standard demosaicing pipeline, which needs to interpolate the missing colors caused by the Bayer color filter array (top), the gaps can be filled by shifting multiple images one pixel horizontally or vertically. Some dedicated cameras implement this by physically shifting the sensor in one-pixel increments, but the Pixel 3 does it by finding the correct alignment in software after collecting multiple, randomly shifted samples. Illustration: Google

The Google engineers use the photographer’s hand motion – and the resulting movement between individual frames of a burst – to their advantage. Google says this natural hand tremor occurs for everyone, even users with “steady hands”, and has a magnitude of just a few pixels when shooting with a high-resolution sensor.

The pictures in a burst are aligned by choosing a reference frame and then aligning all other frames to it with sub-pixel precision in software. When the device is mounted on a tripod or otherwise stabilized, natural hand motion is simulated by slightly moving the camera’s OIS module between shots.

As a bonus, there’s no more need to demosaic, resulting in even more image detail. With enough frames in a burst, any scene element will have fallen on a red, a green, and a blue pixel on the image sensor. After alignment, R, G, and B information is therefore available for every scene element, removing the need for demosaicing.
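
To make the idea concrete, here is a simplified sketch (our illustration, not Google’s actual pipeline) of the merge step. It assumes the per-frame integer-pixel offsets have already been estimated by the alignment stage – the genuinely hard part – and simply accumulates each frame’s Bayer samples into the color channel they belong to:

```python
import numpy as np

def bayer_channel(r, c):
    """Channel index (0=R, 1=G, 2=B) at photosite (r, c) of an RGGB mosaic."""
    if r % 2 == 0:
        return 0 if c % 2 == 0 else 1
    return 1 if c % 2 == 0 else 2

def merge_burst(frames, offsets):
    """Merge raw Bayer frames into an RGB image (illustrative, not optimized).

    frames:  list of HxW arrays of raw single-channel samples
    offsets: per-frame (dy, dx) integer shifts of the scene relative to frame 0
    """
    h, w = frames[0].shape
    acc = np.zeros((h, w, 3))
    cnt = np.zeros((h, w, 3))
    for frame, (dy, dx) in zip(frames, offsets):
        for r in range(h):
            for c in range(w):
                rr, cc = r - dy, c - dx   # where this sample lands in frame 0's coordinates
                if 0 <= rr < h and 0 <= cc < w:
                    ch = bayer_channel(r, c)
                    acc[rr, cc, ch] += frame[r, c]
                    cnt[rr, cc, ch] += 1
    # Average the accumulated samples; locations a channel never landed on
    # stay zero here, where a real pipeline would fall back to interpolation.
    return acc / np.maximum(cnt, 1)
```

With enough randomly shifted frames, every output location collects direct red, green, and blue samples, which is exactly why the demosaicing step can be skipped.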

For full technical detail of Google’s Super Res Zoom technology head over to the Google Blog. More information on the Pixel 3’s computational imaging features can be found here.

Articles: Digital Photography Review (dpreview.com)

 

 

Gitzo announces Gitzo Mini Traveler tripod with patent-pending ‘Pull & Fix’ technology

29 Sep

Gitzo has released the details of its upcoming travel tripod, the Gitzo Mini Traveler.

Designed for ‘professional photographers and promising amateurs who use mirrorless cameras or DSLR cameras with small lenses,’ the Gitzo Mini Traveler is made of Gitzo’s ‘state-of-the-art’ carbon eXact tubes and an aluminum head.

The tripod weighs 265g/0.63lbs and measures 22.1cm/8.7in in length when closed with the tripod head attached. When used with the aluminum tripod head, the tripod holds 3kg/6.6lbs of gear. If you don’t mind losing the articulating head, the legs alone hold an impressive 25kg/55lbs of gear.

The legs use Gitzo’s patent-pending Pull & Fix leg angle selector system with two built-in leg angles. ‘The ergonomic rubberized gear easily locks and controls the strong stainless steel sphere of the aluminum ball head,’ according to Gitzo.

The Gitzo Mini Traveler is available in two colors: black and Gitzo’s noir décor. Both colors are available for pre-order on B&H for $200. No specific release date has been given.

Articles: Digital Photography Review (dpreview.com)

 

 

Understanding Sensor-Shift Technology for High-Resolution Images

11 Sep

Georgian Bay – Summer Landscape

Changing How Photographs are Taken

In recent years, a number of manufacturers have produced cameras capable of higher-resolution images through something called sensor-shift technology. This technology has been made possible by the advent of in-body image stabilization (IBIS). Camera designers have used IBIS as a way to achieve large increases in image resolution or to improve the color information of the images that are taken.

There are a number of names for this technology, including High-Resolution Mode, Pixel Shifting Resolution System, Pixel Shift Multi Shooting Mode, and the more generic pixel-shift or sensor-shift, but the concept behind all of them is the same: multiple images of the same view are taken in such a way that they can be stacked and blended to create a single, usually larger, high-resolution image.

This new technology has strengths and weaknesses, and understanding how it works can help you make better images if you have a camera capable of it.

NOTE: Because websites use lower-resolution images, the images in this article have been downsized and modified to simulate the differences between high-resolution images and the cameras’ standard output. Viewed in full, the images look similar; the differences only start to appear when you look closely at the details.


Gerbera daisies indoors, regular resolution (20MP), Olympus OM-D E-M1 Mark II


Gerbera daisies indoors, high resolution (50MP), Olympus OM-D E-M1 Mark II

Many Approaches to Sensor-Shift Images

Sensor-shift image capture has moved from expensive specialty cameras to become an increasingly common feature on newer, resolution-oriented cameras. Today, in addition to Hasselblad’s monster H6D-400c (400-megapixel images), there are offerings from Olympus, Pentax, Sony, and Panasonic.

These versions generally use the same conceptual approach but at much more accessible prices.


Sensor-Shift Movement

Who Uses Sensor-Shift?

Regardless of the manufacturer, the basic action of sensor-shift image capture remains the same: take multiple images, moving the camera’s sensor slightly for each one to capture more image data, then combine the images.

By moving the sensor around, the image’s color data improves, allowing more detail to be resolved and overcoming the inherent limitations of color-specific photosites. Setting aside the Hasselblad, the systems that use this technology include cameras such as the Olympus OM-D E-M1 Mark II (Micro Four Thirds), Pentax K-1 Mark II DSLR, Sony a7R III, and Panasonic Lumix DC-G9 (Micro Four Thirds), although there are others from the same manufacturers.

Three of these lines are mirrorless cameras, the Pentax being a full-frame DSLR. It is interesting to note that Panasonic/Olympus take one approach and Pentax/Sony take a different approach to the same concept.

The Olympus/Panasonic systems use an approach that produces very large, high-resolution images, whereas the Pentax and Sony systems use the sensor shift to improve the color information of same-size images. The Pentax and Sony systems also allow the individual sensor-shifted images to be separated out, whereas Olympus and Panasonic blend the stacked images into a single photograph.


The Olympus OM-D E-M5 Mark II has sensor-shift technology.

How Does Sensor-Shift Technology Work?

To understand how sensor-shift technology works, you also need to understand how a sensor works at a very small scale. In the good old days of film photography, cameras used light-sensitive film to record images. Digital cameras use a very different approach to record light.

Digital cameras use light-sensitive photodiodes to record the light striking the sensor. In most digital cameras, each photodiode has a specific color filter (red, green, or blue), forming a photosite. These photosites are arranged so that the light they record can be blended to recover the colors of the image projected onto the sensor.

The red, green, and blue photosites on a sensor are generally arranged in a specific pattern known as a Bayer array (a.k.a. Bayer matrix or filter). There are also other configurations, such as the Fuji X-Trans sensor (used on several of their camera models) or the Foveon sensor used by Sigma.

With a Bayer arrangement, there are twice as many green photosites as red or blue, because human vision is most attuned to resolving detail in green. This arrangement generally works well, but if you think about it, each color pixel in an image is created by blending these photosites together.

The sensor does not know how much red there is at a green or blue sensor location, so interpolation is required. This can create artifacts in photographs if you look very closely, and it tends to mean that RAW images have an ever-so-slightly soft focus; all RAW images need some sharpening in post-processing, because the green, red, and blue values for each pixel are blended together.


Bayer pattern of photosites

Static Sensors

In a regular camera without IBIS, each photosite only records the light of one color in that one spot, so the data it records is technically incomplete. It is like a bucket that only collects light of a particular color. A cluster of light buckets in the Bayer pattern is used to create a single pixel in the digital image, but within that cluster there are two green buckets, one blue, and one red.

To meld the image together and assign a single color to each pixel, the signals from the cluster of photodiodes are resolved together. The collected data is interpolated via a demosaicing algorithm, either in-camera (JPEG) or on a computer (from a RAW image), a process that assigns values for all three colors at each photosite based on the collective values registered by neighboring photosites.

The resulting colors are then output as a grid of pixels, and a digital photograph is created. This is partly why RAW images have a slightly softer focus and need to be sharpened in the post-production workflow.
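
For the curious, here is what the simplest version of that interpolation looks like in code: a naive bilinear demosaic of an RGGB mosaic, sketched with NumPy and SciPy. Real raw converters use considerably more sophisticated, edge-aware algorithms.

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(raw):
    """Naive bilinear demosaic of an RGGB Bayer mosaic (HxW -> HxWx3).

    Each photosite recorded only one channel; the two missing channels at
    each location are estimated by averaging the nearest photosites that
    did record them.
    """
    h, w = raw.shape
    y, x = np.mgrid[:h, :w]
    masks = [
        (y % 2 == 0) & (x % 2 == 0),                                    # red sites
        ((y % 2 == 0) & (x % 2 == 1)) | ((y % 2 == 1) & (x % 2 == 0)),  # green sites
        (y % 2 == 1) & (x % 2 == 1),                                    # blue sites
    ]
    rgb = np.zeros((h, w, 3))
    kernel = np.ones((3, 3))
    for ch, mask in enumerate(masks):
        known = np.where(mask, raw, 0.0)
        num = convolve2d(known, kernel, mode="same")
        den = convolve2d(mask.astype(float), kernel, mode="same")
        rgb[..., ch] = num / np.maximum(den, 1e-9)   # average of known neighbors
    return rgb
```

Two of the three channel values at every output pixel are guesses; that guessing is exactly what sensor-shift capture removes.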

Moving Sensors

IBIS means that the sensor now moves ever so slightly to compensate for subtle movements of the camera, keeping the image stable. Some manufacturers claim their systems can stabilize the sensor and/or lens combination to the equivalent of 6.5 stops (that is, shutter speeds roughly 2^6.5 ≈ 90 times longer than would otherwise be usable).


Moving the sensor allows all the color photosites to record the data for each location on the sensor.

This stabilization is accomplished by micro-adjustments of the sensor’s position. For sensor-shift images, those same micro-adjustments are used to expose each photosite location to the light of the image being recorded. In essence, the sensor is moved around not to compensate for external movement but so that each portion of the image contains full color information.

Photosites Rather Than Pixels

You may have noticed the term photosites instead of pixels. Cameras are often rated by their megapixels as a measure of their resolving power, but this is confusing because cameras do not actually have pixels, only photosites.

Pixels are what the image is made of once the data from the sensor is processed. Even the term “pixel shift,” which is sometimes used, is misleading: pixels don’t move; it is the sensor, with its photosites, that moves.

In single-image capture, each photosite records data for red, green, or blue light. This data is interpolated by a computer so that each pixel in the resulting digital photograph has a value for all three colors.

Shifting Sensors

Sensor-shift cameras attempt to reduce the reliance on interpolation by physically moving the camera’s sensor so that red, green, and blue data is captured for each resulting pixel. Consider a 2×2-pixel square taken from a digital photograph.

Conventional digital capture using a Bayer array will record data from four photosites: two green, one blue, and one red. Technically, that means there is missing data for blue and red light at the green photosites, for green and red light at the blue photosite, and for blue and green light at the red photosite. To fix this problem, the missing color values for each site are determined during the interpolation process.

But what if you didn’t have to guess? What if you could have the actual color values (red, blue, and green) for each photosite? This is the concept behind sensor-shift technology.


A normal resolution image.

Diving Deeper

Consider a 2×2-pixel square on a digital photograph created using pixel-shift technology. The first photo begins as normal, with data recorded from the four photosites. The camera then shifts the sensor and takes the same picture again, so a different photosite now records each spot.

This process is repeated until every spot on the sensor has been recorded by photosites of every color. Light data from four photosites (two green, one red, one blue) has then been acquired for each pixel, resulting in better color values for each location and less need for interpolation (educated guessing).
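
Here is a toy sketch of that four-shot merge (an illustration of the concept, not any manufacturer’s actual algorithm). It assumes the four frames are already registered to scene coordinates, so the only thing each sensor shift changes is which color filter sat over each location:

```python
import numpy as np

# Channel index (0=R, 1=G, 2=B) at each position of the RGGB Bayer tile.
RGGB = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 2}

def merge_pixel_shift(shots):
    """Combine four Bayer frames captured with one-photosite sensor shifts.

    shots: dict mapping each shift (dy, dx) in {(0,0), (0,1), (1,0), (1,1)}
           to the HxW raw frame captured at that sensor position.
    After all four shifts, every location has been seen by a red filter once,
    a blue filter once, and a green filter twice, so full RGB is measured
    directly and no interpolation (demosaicing) is needed.
    """
    h, w = next(iter(shots.values())).shape
    rgb = np.zeros((h, w, 3))
    cnt = np.zeros((h, w, 3))
    for (dy, dx), raw in shots.items():
        for r in range(h):
            for c in range(w):
                ch = RGGB[((r + dy) % 2, (c + dx) % 2)]
                rgb[r, c, ch] += raw[r, c]
                cnt[r, c, ch] += 1
    return rgb / cnt   # cnt is (1, 2, 1) per (R, G, B) at every pixel
```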


A high-resolution image at the same ISO, aperture, and shutter speed.

The Sony and Pentax Approach

Sony’s Pixel Shift Multi Shooting Mode and Pentax’s Pixel Shifting Resolution System operate in this manner. It is important to note that these modes do not increase the total number of pixels in your final image. The dimensions of the resulting files remain the same, but color accuracy and detail are improved.

Sony and Pentax take four images, each shifted by one full photosite, to create a single image. This simply improves the color information in the image.

The Olympus and Panasonic Approach

The High-Resolution Mode of Panasonic and Olympus cameras, which both use Micro Four Thirds sensors, takes a slightly more nuanced approach, combining eight exposures taken ½ pixel apart from one another. Unlike Sony and Pentax, this significantly increases the number of pixels in the resulting image.

From a 20-megapixel sensor, you get a 50-80 megapixel RAW image. Only the single combined image is produced; there is no way to access the individual frames of the sequence.
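
A grayscale toy sketch of the resolution gain (ignoring the Bayer color problem, which is part of why the real cameras need eight exposures rather than four): frames captured at half-photosite offsets are dropped onto a grid with twice the linear resolution.

```python
import numpy as np

def high_res_merge(frames, offsets):
    """Place half-pixel-shifted HxW frames onto a 2H x 2W grid.

    offsets are (dy, dx) in half-pixel units, each component in {0, 1}.
    """
    h, w = frames[0].shape
    out = np.zeros((2 * h, 2 * w))
    hits = np.zeros((2 * h, 2 * w))
    for frame, (dy, dx) in zip(frames, offsets):
        out[dy::2, dx::2] += frame
        hits[dy::2, dx::2] += 1
    return out / np.maximum(hits, 1)

frames = [np.random.rand(4, 4) for _ in range(4)]   # stand-in captures
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]          # half-pixel positions
print(high_res_merge(frames, offsets).shape)        # (8, 8): 4x the pixels
```

Doubling the resolution in each direction quadruples the pixel count, which is roughly how a 20MP sensor can yield the 50-80MP files mentioned above.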

What are the Advantages of Using Sensor-Shift?

Using sensor-shift technology has several advantages. By taking multiple images, knowing the color information for each photosite location, and increasing the resolution, you accomplish three main things: you decrease noise, reduce moiré, and increase the overall resolution of the images.

Noise and Improved Resolution

By taking multiple images with subtle changes in the sensor’s position, the resolution of the image goes up, but so does the color information in the image. This lets you drill down much further into the image and still find smoother colors, less noise, and better detail.


A normal resolution image.


A high-resolution image.


Cropped in tight on the normal-resolution image, you start to see noise showing up as grain and color variation.


Here is the same crop of the high-resolution version; the color and detail are better, with less noise.

Less Moiré

Moiré is the appearance of noise or artifact patterns in images containing tight, regular patterns. Newer sensors tend to have fewer issues with moiré than in the past, but it will still appear in some images.

Moiré tends to occur when a tight pattern being recorded interferes with the pattern of photosites on the sensor, which struggles to resolve it. The red, green, and blue photosites have trouble with edges in these tight patterns because not all of the color for a single location is recorded.

With sensor-shift, all the color for each location is there, so moiré tends to disappear.


Normal resolution image.


High-resolution image with the crop area highlighted.


The cropped area of the standard-resolution image – noise is starting to appear (the scratches on the paper were already there).


The high-resolution image has less noise and more detail.

So Why Not Use This for Every Image?

Well, the main reason is that you have to take multiple images of a single scene, so this really doesn’t work well for moving subjects. The process requires, at a minimum, four times the exposure time of a single capture, which means four opportunities for part of your composition and/or your camera to move during image capture, degrading image quality.

Such constraints limit the technology’s application to still life and (static) landscape photography. Any movement in the scene will create a blurry or pixelated area; this is a problem for landscape photography when wind is moving plants or clouds, or where running water is present.

This also means that you usually need to be very stable and use a tripod, although manufacturers clearly intend to offer versions that allow handheld shooting (Pentax already has this feature).


High-resolution image shot on a tripod.


Movement artifacts are visible when viewed more closely.

Quirks of Some of the Systems

As sensor-shift technology has been implemented in different ways, the problems differ a bit depending on the system used. The main quirk is that you generally need a tripod, so no run-and-gun shooting.

The Sony system has the additional limitation that you cannot see the final image until you process the four separate images together, meaning you cannot review the combined image on the camera. In addition, due to the high pixel count of the a7R III, any subtle movement of the tripod is particularly noticeable in the resulting image. To merge and edit the images, you also need to use Sony’s proprietary software.

Pentax has some interesting features. The software application that comes with the camera can address movement, using an algorithm to remove motion artifacts. This works better than common image-editing software such as Adobe’s.

The Olympus system has been around a while, and in its most recent iteration, on the Olympus OM-D E-M1 Mark II, any detected movement causes the affected pixels to be replaced with the corresponding parts of one of the regular-resolution frames. This creates uneven resolution but makes the image look better in conditions such as wind. It also has limitations, particularly if there is a lot of movement; often the images look a little pixelated.


Standard resolution image of a tree – everything is sharp.


A high-resolution image of the same tree but it was windy… Cropped area is shown in the yellow box.


Cropped area expanded – the wind movement generated some artifacts on the image.

Limitations

The greatest challenge facing sensor-shift image capture is moving subjects. Additionally, trying to pair a strobe with a camera using pixel-shift image capture can be complicated by the speed of image capture, flash recycle limitations, and general compatibility problems. Manufacturers are aware of these problems and are working to resolve them.

Overall the Technology is Only Going to Get Better

More and more systems are using algorithms to produce these higher-resolution images. As the technology matures, the implementations will deliver better and better results, potentially able to deal with movement and handheld shooting.

The advantage for manufacturers is that better-quality images can be produced without the need for expensive, high-pixel-density sensors. The advantage for users is images with less noise and better color information for better final results.

Happy hunting for that perfect high-resolution image!

The post Understanding Sensor-Shift Technology for High-Resolution Images appeared first on Digital Photography School.



 