
Posts Tagged ‘sensors’

Samsung details new 65/14nm stacked sensor design for improving power efficiency, density of mobile image sensors

08 Oct
Stacked Architecture of the chipset Samsung details in its new paper.

Samsung has published a paper detailing a new stacked CMOS mobile image sensor that uses a 14nm processing layer to deliver high-resolution images while reducing power consumption.

The stacked sensor consists of two chips: a 12MP backside-illuminated (BSI) pixel chip on top, built on a 65nm process, and a bottom chip for the analog and logic circuits, built on a 14nm process. By using the ultra-fine 14nm process for the processing layer, Samsung says it could achieve a 29% drop in power consumption compared to current conventional sensors that use a 65nm/28nm process.

Microphotograph of Implemented Sensor (Left: Top Chip & Right: Bottom Chip)

Samsung says the chip is capable of outputting at 120 frames per second while consuming just 612mW of power. The analog and digital power supply requirements also drop to 2.2V and 0.8V, respectively, compared to conventional 65nm/28nm process chipsets.

What this all translates to is a more energy-efficient stacked sensor for future smartphones that also manages to improve data throughput and reduce noise. It also paves the way for sensors with smaller pixel pitches, maximizing the potential for even higher-resolution sensors without increasing the physical size of mobile sensors. As illustrated in the graphic below, a 16MP sensor with a 1.0µm pixel pitch is the same size as a 13MP sensor with a 1.12µm pixel pitch.
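As a quick sanity check on that comparison, the active area scales with pixel count times pixel pitch squared. A rough back-of-the-envelope calculation (illustrative Python, not figures from Samsung's paper):

```python
# Back-of-the-envelope check: active area ~ pixel count x (pixel pitch)^2.
def active_area_mm2(megapixels: float, pitch_um: float) -> float:
    """Approximate active sensor area in mm^2 from resolution and pixel pitch."""
    return megapixels * 1e6 * (pitch_um / 1000.0) ** 2

print(active_area_mm2(16, 1.00))   # ~16.0 mm^2
print(active_area_mm2(13, 1.12))   # ~16.3 mm^2, i.e. roughly the same silicon area
```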

Of course, smaller pixels mean each pixel will be less sensitive, but Samsung emphasizes this shortcoming can be overcome through pixel-merging technologies such as its Tetracell (2×2) and Nonacell (3×3) designs, which merge data from neighboring pixels to achieve better image quality when light is scarce.
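For illustration only, here is a minimal sketch of what merging neighboring pixels looks like on raw data, with a plain NumPy array standing in for the sensor readout. Samsung's actual Tetracell/Nonacell pipeline bins same-color pixels within the color filter pattern and remosaics the result; this toy treats the frame as monochrome.

```python
import numpy as np

def bin_pixels(raw: np.ndarray, factor: int = 2) -> np.ndarray:
    """Merge factor x factor blocks of pixels by summing them.

    Toy version: treats the frame as monochrome. A real Tetracell/Nonacell
    pipeline bins same-color pixels within the Bayer pattern and remosaics.
    """
    h, w = raw.shape
    h, w = h - h % factor, w - w % factor            # crop to a multiple of factor
    blocks = raw[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.sum(axis=(1, 3))                   # one brighter pixel per block

frame = np.random.poisson(lam=5, size=(12, 16)).astype(float)  # fake low-light frame
print(bin_pixels(frame, 2).shape)   # (6, 8): quarter the resolution, ~4x the signal per pixel
```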

Specifications of the 12MP sensor Samsung details in its paper.

Samsung specifically notes the power-saving nature of stacked sensors using the 65nm/14nm process will be ‘critical’ for 8K video capture and even higher-resolution sensors, as power consumption is one of the biggest factors limiting 8K capture on current smartphones.

As tends to be the case with developments of this kind, there’s no knowing when we might see this 65nm/14nm stacked sensor design inside a consumer smartphone.

Articles: Digital Photography Review (dpreview.com)

 

Computational photography part II: Computational sensors and optics

08 Jun

Editor’s note: This is the second article in a three-part series by guest contributor Vasily Zubarev. The first and third parts can be found here:

  • Part I: What is computational photography?
  • Part III: Computational lighting, 3D scene and augmented reality (coming soon)

You can visit Vasily’s website where he also demystifies other complex subjects. If you find this article useful we encourage you to give him a small donation so that he can write about other interesting topics.


Computational Sensor: Plenoptic and Light Fields

Well, our sensors are crap. We simply got used to it and are trying to do our best with them. Their design hasn't changed much since the beginning of time. Only the manufacturing process improved: we reduced the distance between pixels, fought read noise, increased readout speeds and added dedicated pixels for phase-detection autofocus. But even if we take the most expensive camera and try to photograph a running cat in indoor light, the cat will win.

  • Video link: The Science of Camera Sensors

We’ve been trying to invent a better sensor for a long time. You can google a lot of research in this field by “computational sensor” or “non-Bayer sensor” queries. Even the Pixel Shifting example can be referred to as an attempt to improve sensors with calculations.

The most promising stories of the last twenty years, though, come to us from plenoptic cameras.

To calm your sense of impending boring math, I'll throw in an insider's note: the latest Google Pixel camera is a little bit plenoptic. With only two pixels under each lens, that's still enough to calculate a fair depth map without adding a second camera like everyone else.

Plenoptics is a powerful weapon that hasn’t fired yet.

Plenoptic Camera

Invented in 1994 and first assembled at Stanford in 2004. The first consumer product, Lytro, was released in 2012. The VR industry is now actively experimenting with similar technologies.

A plenoptic camera differs from a normal one by only one modification: its sensor is covered with a grid of microlenses, each of which covers several real pixels.

If we place the grid and sensor at the right distance, we’ll see sharp pixel clusters containing mini-versions of the original image on the final RAW image.

  • Video link: Muted video showing RAW editing process

Obviously, if you take only the central pixel from each cluster and build the image from those, it won't be any different from one taken with a standard camera. Yes, we lose a bit of resolution, but we'll just ask Sony to stuff more megapixels into the next sensor.

That’s where the fun part begins. If you take another pixel from each cluster and build the image again, you get another standard photo, only as if it were taken with a camera shifted by one pixel. Thus, with 10×10 pixel clusters, we get 100 images of the scene from “slightly” different angles.

The larger the cluster size, the more images we have, but the lower the resolution. In a world of smartphones with 41-megapixel sensors we can afford to sacrifice some resolution, but everything has its limit. We have to keep the balance.
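As a toy illustration of the idea, assuming a RAW frame tiled into n×n pixel clusters (the array shapes and function name here are made up for the sketch), extracting one sub-aperture view per cluster position is just strided slicing:

```python
import numpy as np

def subaperture_view(plenoptic_raw: np.ndarray, u: int, v: int, n: int) -> np.ndarray:
    """Take the pixel at position (u, v) inside every n x n cluster.

    Each (u, v) yields the scene as seen from a slightly different point on the
    main lens aperture: n*n views in total, each with 1/n of the resolution per axis.
    """
    return plenoptic_raw[u::n, v::n]

raw = np.random.rand(1000, 1500)        # stand-in for a RAW covered by 10x10 clusters
views = [subaperture_view(raw, u, v, 10) for u in range(10) for v in range(10)]
print(len(views), views[0].shape)       # 100 views, each (100, 150)
```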

  • Link: plenoptic.info – about plenoptics, with python code samples

Alright, we’ve got a plenoptic camera. What can we do with it?

Fair refocusing

The feature that everyone was buzzing about in the articles covering Lytro is the ability to adjust focus after the shot is taken. “Fair” means we don’t use any deblurring algorithms; we use only the available pixels, picking or averaging them in the right order.

A RAW photo taken with a plenoptic camera looks weird. To get the usual sharp JPEG out of it, you have to assemble it first. The result will vary depending on how we select the pixels from the RAW.

The farther the cluster is from the point of impact of the original ray, the more defocused the ray is. Because of the optics. To get an image shifted in focus, we only need to choose the pixels at the desired distance from the original, either closer or farther.

The picture should be read from right to left as we are sort of restoring the image, knowing the pixels on the sensor. We get a sharp original image on top, and below we calculate what was behind it. That is, we shift the focus computationally.

The process of shifting the focus forward is a bit more complicated, as we have fewer pixels in those parts of the clusters. In the beginning, Lytro's developers didn't even want to let the user focus manually because of that; the camera decided the focal plane itself in software. Users didn't like that, so the feature was added in later versions as a "creative mode", but with very limited refocus for exactly that reason.
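The “picking or averaging in the right order” part can be sketched as classic shift-and-add refocusing over the sub-aperture views from the earlier snippet. This is a toy version under those same assumptions, not Lytro’s actual processing:

```python
import numpy as np

def refocus(views: list, n: int, alpha: float) -> np.ndarray:
    """Shift-and-add refocusing over the n*n sub-aperture views.

    Each view is translated in proportion to its offset from the aperture
    center, then all views are averaged. alpha = 0 reproduces the as-shot
    focus; other values move the focal plane. Integer shifts via np.roll
    keep the toy example simple (no interpolation, wrap-around at edges).
    """
    center = (n - 1) / 2
    acc = np.zeros(views[0].shape, dtype=np.float64)
    for idx, view in enumerate(views):
        u, v = divmod(idx, n)                      # position inside the cluster
        du = int(round(alpha * (u - center)))
        dv = int(round(alpha * (v - center)))
        acc += np.roll(view, shift=(du, dv), axis=(0, 1))
    return acc / len(views)

refocused = refocus(views, n=10, alpha=1.5)        # `views` from the earlier sketch
```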

Depth Map and 3D using a single lens

One of the simplest operations in plenoptics is to get a depth map. You just need to take two different sub-aperture images and calculate how objects are shifted between them. The more an object shifts, the farther it is from the plane of focus.

Google recently bought and killed Lytro, but used its technology for VR and… the Pixel’s camera. Starting with the Pixel 2, the camera became “a little bit” plenoptic, though with only two pixels per cluster. As a result, Google doesn’t need to install a second camera like all the other cool kids; instead, it can calculate a depth map from a single photo.

Images seen by the top and bottom subpixels of the Google Pixel camera. Source: Google
The depth map is additionally processed with neural networks to make the background blur more even. Source: Google
  • Link: Portrait mode on the Pixel 2 and Pixel 2 XL smartphones

The depth map is built from two shots shifted by one sub-pixel. This is enough to calculate a rudimentary depth map and separate the foreground from the background so the latter can be blurred out with some fashionable bokeh. The result of this stratification is then smoothed and “improved” by neural networks, which are trained to refine the measured depth map rather than to generate one from scratch, as many people think.
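A toy version of that disparity search for a dual-pixel pair might look like the block-matching sketch below (illustrative only; the real pipeline adds sub-pixel matching, confidence handling and the learned refinement mentioned above):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def disparity_map(left: np.ndarray, right: np.ndarray,
                  max_shift: int = 4, patch: int = 9) -> np.ndarray:
    """Brute-force block matching between two half-aperture (dual-pixel) images.

    For every pixel, try horizontal shifts of `right` and keep the shift with the
    lowest mean absolute difference over a small patch. The winning shift is a very
    rough proxy for distance from the plane of focus.
    """
    best_shift = np.zeros(left.shape, dtype=np.int32)
    best_cost = np.full(left.shape, np.inf)
    for d in range(-max_shift, max_shift + 1):
        cost = uniform_filter(np.abs(left - np.roll(right, d, axis=1)), size=patch)
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_shift[better] = d
    return best_shift
```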

The trick is that we got plenoptics in smartphones almost for free. We already put microlenses on these tiny sensors to boost the luminous flux at least somewhat. Some Google patents suggest that future Pixel phones may go further and cover four photodiodes with a single lens.

Slicing layers and objects

You don’t see your nose because your brain combines a final image from both of your eyes. Close one eye, and you will see a huge Egyptian pyramid at the edge.

The same effect can be achieved in a plenoptic camera. By assembling shifted images from pixels of different clusters, we can look at the object as if from several points, the same way our eyes do. This gives us two cool opportunities. First, we can estimate the approximate distance to objects, which lets us easily separate the foreground from the background, just as in real life. Second, if an object is small, we can remove it from the photo completely, since we can effectively look around it. Like a nose. Just clone it out. Optically, for real, with no Photoshop.

Using this, we can cut out trees between the camera and the object or remove the falling confetti, as in the video below.

“Optical” stabilization with no optics

From a plenoptic RAW, you can make a hundred photos, each shifted by several pixels across the sensor area. Accordingly, we have a tube the diameter of the lens within which we can move the shooting point freely, thereby offsetting camera shake.

Technically, stabilization is still optical, because we don’t have to calculate anything — we just select pixels in the right places. On the other hand, any plenoptic camera sacrifices the number of megapixels in favor of plenoptic capabilities, and any digital stabilizer works the same way. It’s nice to have it as a bonus, but using it only for its sake is costly.

The larger the sensor and lens, the bigger the window for movement. The more camera capabilities, the more ozone holes from supplying this circus with electricity and cooling. Yeah, technology!

Fighting the Bayer filter

The Bayer filter is still necessary even with a plenoptic camera; we haven’t come up with any other way of getting a color digital image. But with a plenoptic RAW, we can average the color not only over a group of nearby pixels, as in classic demosaicing, but also over dozens of copies of the same point in neighboring clusters.

Some articles call this “computational super-resolution”, but I would question that. In fact, we first reduce the real resolution of the sensor a dozen times over, only to proudly restore it again. You’d have to try hard to sell that to someone.

But technically it’s still more interesting than shaking the sensor in a pixel shifting spasm.

Computational aperture (bokeh)

Those who like to shoot bokeh hearts will be thrilled. Since we know how to control the refocus, we can move on and take only a few pixels from the unfocused image and others from the normal one. Thus we can get an aperture of any shape. Yay! (No)

Many more tricks for video

So as not to stray too far from the photography topic, anyone interested should check out the links above and below; they cover about half a dozen other interesting applications of plenoptic cameras.

  • Video link: Watch Lytro Change Cinematography Forever

Light Field: More than a photo, less than VR

Usually, the explanation of plenoptics starts with light fields. And yes, from a scientific perspective, the plenoptic camera captures the light field, not just a photo. “Plenoptic” comes from the Latin plenus, meaning “full”: it collects all the information about the rays of light. Just like a parliamentary plenary session.

Let’s get to the bottom of this to understand what a light field is and why we need it.

Traditional photos are two-dimensional. When a ray hits the sensor, the corresponding pixel in the photo simply records its intensity. The camera doesn’t care where the ray came from, whether it strayed in from the side or was reflected off another object. The photo captures only the point of intersection of the ray with the surface of the sensor. So it’s kinda 2D.

Light field images are similar, but with a new component — the origin and angle of each ray. The microlens array in front of the sensor is calibrated such that each lens samples a certain portion of the aperture of the main lens, and each pixel behind each lens samples a certain set of ray angles. And since light rays emanating from an object with different angles fall across different pixels on a light field camera’s sensor, you can build an understanding of all the different incoming angles of light rays from this object. This means the camera effectively captures the ray vectors in 3D space. Like calculating the lighting of a video game, but the other way around — we’re trying to catch the scene, not create it. The light field is the set of all the light rays in our scene — capturing both the intensity and angular information about each ray.

There are a lot of mathematical models of light fields. Here’s one of the most representative.
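The most commonly cited one is the two-plane ("light slab") parameterization, which in equation form looks roughly like this:

```latex
% Two-plane ("light slab") parameterization: a ray is identified by the
% points where it crosses two parallel planes, e.g. the lens plane (u, v)
% and the sensor plane (s, t), and carries a radiance value:
%
%   L(u, v, s, t) = radiance along the ray through (u, v) and (s, t)
%
% A conventional photo integrates the light field over the aperture:
I(s, t) = \iint_{\text{aperture}} L(u, v, s, t)\, \mathrm{d}u\, \mathrm{d}v
```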

The light field is essentially a visual model of the space around the camera. We can compute any photo within this space mathematically. Point of view, depth of field, aperture: all of these are computable; however, one can only reposition the point of view so much, as determined by the entrance pupil of the main lens. That is, the amount of freedom with which you can change the point of view depends upon the breadth of perspectives you’ve captured, which is necessarily limited.

I love to draw an analogy with a city here. Photography is like your favorite path from your home to the bar you always remember, while the light field is a map of the whole town. Using the map, you can calculate any route from point A to B. In the same way, knowing the light field, we can calculate any photo.

For an ordinary photo it’s overkill, I agree. But here comes VR, where light fields are one of the most promising areas of development.

Having a light field model of an object or a room allows you to see it from multiple perspectives, with motion parallax and other depth cues like realistic changes in texture and lighting as you move your head. You can even travel through the space, albeit to a limited degree. It feels like virtual reality, but it’s no longer necessary to build a 3D model of the room. We can ‘simply’ capture all the rays inside it and calculate many different pictures from within that volume. Simply, yeah. That’s the part we’re all struggling with.

  • Link: Google AR and VR: Experimenting with Light Fields

Vasily Zubarev is a Berlin-based Python developer and a hobbyist photographer and blogger. To see more of his work, visit his website or follow him on Instagram and Twitter.

Articles: Digital Photography Review (dpreview.com)

 

Samsung is aiming to develop 600MP image sensors

21 Apr

In the last couple of years or so we have seen the size of image sensors in high-end smartphones increase quite dramatically. At the same time pixel counts have skyrocketed, driven, at least in part, by the use of pixel-binning technology to capture images with lower noise levels and a wider dynamic range than would be possible with conventional sensor technology.

The sensor in the main camera of Samsung’s latest flagship smartphone, the Galaxy S20 Ultra, is a prime example of both these trends. At 1/1.33″ it’s one of the largest currently available (only the 1/1.28″ chip in the Huawei P40 Pro is bigger), and its whopping 108MP resolution allows for pixel binning and all sorts of computational imaging wizardry to produce high-quality 12MP default output.

In terms of pixel binning, this latest Samsung sensor takes things one step further than previous generations: instead of four, it combines nine pixels into one for an effective pixel size of 2.4µm.
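The arithmetic behind those figures is simple enough to write down (illustrative only, using the numbers quoted above):

```python
# Nonacell arithmetic for the 108MP sensor described above (illustrative only).
native_mp = 108          # native resolution, in megapixels
native_pitch_um = 0.8    # native pixel pitch implied by 2.4 µm / 3
bin_factor = 3           # Nonacell merges 3x3 = 9 pixels into one

binned_mp = native_mp / bin_factor ** 2             # 108 / 9  = 12 MP default output
effective_pitch_um = native_pitch_um * bin_factor   # 0.8 * 3  = 2.4 µm effective pixels

print(f"{binned_mp:g} MP output, {effective_pitch_um:g} um effective pixels")
# -> 12 MP output, 2.4 um effective pixels
```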

Now we’ve learned that the South Korean company has no intention of stopping there. In a blog post on the company website, Samsung’s Head of Sensor Business Team, Yongin Park, explains that it is the company’s goal to design and produce image sensors that go beyond the resolution of the human eye, which is said to be around 500MP.

However, Park is aware that numerous challenges have to be overcome to achieve this goal.

‘In order to fit millions of pixels in today’s smartphones that feature other cutting-edge specs like high screen-to-body ratios and slim designs, pixels inevitably have to shrink so that sensors can be as compact as possible.

On the flip side, smaller pixels can result in fuzzy or dull pictures, due to the smaller area that each pixel receives light information from. The impasse between the number of pixels a sensor has and pixels’ sizes has become a balancing act that requires solid technological prowess,’ he writes.

Launched in 2013, Samsung’s ISOCELL technology has been paramount in allowing more and more pixels to be implemented on smartphone image sensors by isolating pixels from each other, thus reducing light spill and reflections between them. At first this was done using metal ‘barriers’; later generations used an unspecified ‘innovative material’.

Tetracell technology came along in 2017 and used 2×2 pixel binning to increase the effective pixel size. It was superseded by the company’s Nonacell tech and its 3×3 pixel arrays earlier this year. At the same time Samsung engineers were also able to reduce pixel size to a minuscule 0.7µm. According to Park this was previously believed to be impossible.

So, what can we expect from Samsung’s sensor division in the medium and long term? Park says that the company is ‘aiming for 600MP for all’ but doesn’t provide much detail on how this could be achieved. These sensors would not necessarily be exclusively for use in smartphones, however, and could be implemented in a wide range of applications.

‘To date, the major applications for image sensors have been in the smartphones field, but this is expected to expand soon into other rapidly-emerging fields such as autonomous vehicles, IoT and drones,’ he explains.

In addition, the company is looking at applications for its sensors that go beyond photography and videography. According to Park, sensors capable of detecting wavelengths outside the range of the human eye are still rare, but could be beneficial in areas such as cancer diagnosis in medicine or quality control in agriculture.

Articles: Digital Photography Review (dpreview.com)

 

Do Larger Camera Sensors Create Different Looking Images? [video]

05 Jul

The post Do Larger Camera Sensors Create Different Looking Images? appeared first on Digital Photography School. It was authored by Caz Nowaczyk.

In this video from Fstoppers, they show you whether larger sensors create different-looking images than smaller sensors.


In the video, Lee Morris photographs his friend Keith Bradshaw with four different cameras each with different sensor sizes.

Lee uses the following cameras and settings:

  • Fujifilm GFX 50R / 43.8mm x 32.9mm sensor / 64mm lens at f/8
  • Canon 6D / 35mm full-frame sensor / 50mm at f/5.6
  • Fujifilm X-T3 / 23.6mm x 15.6mm sensor / 35mm at f/4
  • Panasonic GH5 / Micro Four Thirds sensor / 25mm at f/2.8

He shot each image in RAW and only changed the white balance. He also cropped in on all images to hide the 4:3 aspect ratio of the GH5 and GFX.

You may be surprised by the results (or perhaps you already knew this).

Check it out.

You may also find the following helpful:

  • Full Frame Sensor vs Crop Sensor: Which is Right For You?
  • Full Frame VS Crop Sensor VS Micro Four Thirds: Camera Sensors Explained
  • Is it Time to go Full-Frame? Weigh These Pros and Cons Before You Decide
  • Is Full Frame Still the Best?
  • Making Sense of Lens Optics for Crop Sensor Cameras

 


 

Report: Apple stops development of quantum dot image sensors

27 Jun

Last week, shares of Nanoco Technology, a UK company specializing in quantum dot (QD) technology, dropped by nearly 80 percent after news broke that a high-volume supply-contract had been canceled by a major customer.

The Telegraph now reports this customer is Apple, which had been working with Nanoco on the development of QD technology for image sensors that could have been used in future iPhone generations. According to market research firm BlueFin Research, Apple decided to stop the development of QD image sensors because the technology was too expensive for mass production.

Nanoco first announced a partnership with a ‘large, undisclosed U.S.-listed corporation’ in 2018. In January of this year it announced the contract had been expanded to cover stress testing and refinements. According to the report, the contract had a volume of £17.1 million ($21.7 million), which is more than half of Nanoco’s total revenue.

The UK company specializes in cadmium-free QDs, which are currently predominantly used to improve image quality on TVs and other large high-resolution screens, where the dots’ light-emitting properties allow for more accurate color rendering. In an image sensor, Apple and Nanoco were hoping the technology would enhance image quality and help with the development of advanced augmented reality features.

With QD technology off the table, it remains to be seen if Apple’s iPhone cameras will rely on more conventional technologies for the foreseeable future or if the US company has another innovative image sensor card up its sleeve.

Articles: Digital Photography Review (dpreview.com)

 

Report: Huawei P30 Pro uses Sony image sensors and technology from Corephotonics

17 Apr

With its quad-camera setup (triple camera plus ToF sensor), the Huawei P30 Pro is, from an imaging perspective, definitely the most exciting new smartphone this year.

Analysts from the French company System Plus Consulting have now had a closer look at the camera hardware, which was co-designed with Leica, and discussed their findings with EE Times. According to costing analyst Stéphane Elisabeth, all four image sensors were supplied by market leader Sony.

The primary camera module uses an RYYB color filter (red, yellow, yellow, blue) instead of the usual RGGB, which increases light sensitivity, while the wide-angle and tele camera units still rely on a conventional RGGB filter. The green channel is usually used to make up the luminance (detail) information in an image, so yellow filters, which let in red as well as green light, should give cleaner results than an RGGB sensor, at the cost of some ability to distinguish between colors.
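For context (general colorimetry background, not from the System Plus report), the standard Rec. 601 luma weighting shows how heavily green is weighted when computing perceived brightness, which is why a filter that passes green plus red (i.e. yellow) still captures most of the detail signal:

```latex
% Rec. 601 luma weighting: green dominates the brightness (detail) signal.
Y' = 0.299\,R' + 0.587\,G' + 0.114\,B'
```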

Unlike some other devices, the time-of-flight (ToF) sensor is not only used for augmented reality applications but also to measure subject distance for autofocusing. Signals from all three cameras are processed to create a map of the scene and let the photographer focus on a specific object.

Arguably the most innovative element of the camera is the periscope-style tele lens, though. It is placed horizontally inside the body and a mirror angled at 45 degrees channels light into the optics and onto the sensor. This allows for an extended optical unit – generally a requirement for telephoto lenses. The result is the first 5X tele zoom in a smartphone. Super resolution and computational techniques allow for 10x digital zoom using the 5x tele unit, though image quality drops. The analysts also believe the entire camera unit has been assembled by Chinese company Sunny Optical Technology using IP from Corephotonics in Israel. The latter is particularly interesting as Corephotonics has just been acquired by Huawei rival Samsung.

Articles: Digital Photography Review (dpreview.com)

 

The Insta360 Titan is an 11K 360-degree camera with 8 Micro Four Thirds sensors

09 Jan

Insta360 has most types of 360-degree camera users covered, offering everything from the consumer-level One X all the way up to pro-level 8K models. However, it seems there is demand for even more powerful cameras with higher resolutions.

With the new 11K Titan camera Insta360 is catering to VR cinema professionals with the highest demands. The camera features eight lenses with Micro Four Thirds sensors, which is the largest sensor size on any stand-alone VR camera.

The camera supports 10-bit color and in video mode can shoot 11K or 10K 3D at 30 fps, 8K at 60 fps or 5.3K at 120 fps. In still mode it can capture 11K 360-degree images in 3D and monoscopic.

To cope with the amount of image data captured, each lens/sensor combo requires a high-speed SD card. Gyroscopic metadata for Insta360’s FlowState stabilization and low-resolution proxy files, which can be used for quicker editing with Insta360’s Adobe Premiere Pro plug-in, are stored on an additional card.

In addition to the company’s very efficient FlowState stabilization, the Titan also supports Insta360’s Farsight radio technology which allows for remote control of the camera and was first introduced with the Pro 2 model. The CrystalView conversion tool can be used to play back and watch the camera’s 11K video output.

This much technology does not come cheap, of course, and priced at $14,999, the Titan is squarely aimed at the VR professionals camp. If you think the camera could be a profitable investment for your business, you can reserve one now with a $150 deposit. Shipment is expected in April. If you’d like to get an idea of the image quality the camera is capable of, head to the Insta360 YouTube channel for resolution, low-light and stabilization comparisons.

This Is Titan: Insta360 Opens Reservations on 11K, Eight-Lens VR Cinema Camera

Insta360 today opened reservations for the Insta360 Titan, an eight-lens cinematic VR camera that captures 360 photos and video at up to 11K resolution.

The Titan uses eight Micro Four Thirds (MFT) sensors, the largest sensors available in any standalone VR camera. These sensors maximize image quality, dynamic range, low-light performance and color depth, raising the bar for realism in high-end professional VR capture.

Insta360 will show the new camera at the CES show in Las Vegas this week. The Titan is set to ship in April, following a pilot program with select filmmaker partners.

Creators can reserve their Titan here today to secure a priority shipment in the first batch.

Introducing Insta360 Titan: https://youtu.be/9VhxtmV7mZQ

Turn it up to 11.

The Titan is the first standalone 360 camera ready to record in 11K. Key shooting modes include 11K at 30 FPS, 10K 3D at 30 FPS, as well as 11K 360 photos in 3D and monoscopic formats.

Additional shooting modes include 8K at 60 FPS and 5.3K at 120 FPS.

A sensor so good you’ll want eight.

360 cameras use an array of sensors to cover every direction of the action. Most use small sensors on par with those found in smartphones or action cameras — useful for installing in a smaller camera body but not for maximizing image quality.

The Titan bucks this trend, sporting eight optimized Micro Four Thirds sensors that combine the benefits of a large sensor area with a relatively compact design. These high-performance sensors are the key to achieving a cinematic image quality that’s been unattainable to VR creators until now.

Natural light and color.

The Titan supports shooting in 10 bit color, allowing for billions of color combinations and extreme color accuracy, while its high dynamic range lets creators capture natural lighting and low-light scenes to achieve an unparalleled realism in VR.

Make your move with FlowState Stabilization.

The Titan uses Insta360’s signature onboard FlowState Stabilization technology, allowing for gimbal-like 9-axis stabilization with no accessories or added effort from the user. Onboard stabilization lets creators achieve the stunning dynamic shots necessary to tell a compelling story in VR.

Not in the shot. But still in the action.

The Titan comes standard with Insta360’s Farsight live monitoring system, consisting of a transmitter and a receiver that can be easily attached to a phone or tablet.

Using Farsight, VR filmmakers can easily preview and control their shoots remotely. VR directors used to have to yell “action” and then jump behind a tree to avoid ruining the shot. Farsight saves them the sweat.

Deliver what you shoot.

Ultra-high-res VR content presents a distribution challenge. Most playback systems aren’t ready to decode immersive video at higher than 4K, let alone 11K.

CrystalView, Insta360’s proprietary playback tech, lets creators deliver what they really shot. It renders in real time exactly the part of a video a viewer is watching – with no computing power wasted on displaying what’s behind their head — so that even mainstream smartphones can play back full-quality Titan content.

Reservations open now.

Starting today, VR filmmakers can reserve their Insta360 Titan and be among the first owners when the camera ships this April.

The Titan is priced at $14,999 USD, and the reservation is confirmed with a fully refundable deposit.

Articles: Digital Photography Review (dpreview.com)

 

Full Frame VS Crop Sensor VS Micro Four Thirds: Camera Sensors Explained

05 Dec

The post Full Frame VS Crop Sensor VS Micro Four Thirds: Camera Sensors Explained appeared first on Digital Photography School. It was authored by Kunal Malhotra.


‘DSLR camera,’ ‘full-frame,’ ‘crop sensor’ – just three terms that come up in virtually every discussion involving photography. The two terms used to classify the sensor sizes of DSLR cameras are ‘full-frame’ and ‘crop sensor.’ A full-frame camera has a sensor equivalent in size to the 35mm film format, whereas a crop-sensor camera has a sensor smaller than a full-frame sensor, i.e. smaller than 35mm film.

Micro-Four-Thirds (4/3) is a relatively new format (and term). First introduced around 2008, this sensor format is smaller and more compact. However, owing to a variety of factors, it is now considered almost equal to, if not better than, the crop-sensor format.

Apart from the physical size difference, there are several other points of difference between a full-frame sensor, a crop-sensor, and a micro-four-thirds sensor. Let’s take a look at a comparison between them under the following characteristics, to get an accurate understanding of their differences.

Crop Factor

As mentioned above, a full-frame camera has a 35mm sensor based on the old film format, whereas a crop sensor (also called APS-C) has a crop factor of 1.5x (Nikon) or 1.6x (Canon). Micro-Four-Thirds sensors are even smaller, with a crop factor of 2x.

This crop factor also directly affects our field of view. Simply put, an APS-C sensor would show us a cropped (tighter) view of the same frame as compared to a full-frame sensor, and a Micro-Four-Thirds sensor would show an even tighter (more cropped) output of the same frame.


LEFT: Photo clicked using a Full-Frame camera. CENTER: Photo clicked using a Crop-Sensor camera. RIGHT: Photo clicked using a Micro-Four-Thirds camera.

Focal Length

The effective focal length obtained with different sensors is also directly associated with the crop factor. The focal length marked on any given lens is based on the standard 35mm film format. Whenever we use a crop-sensor camera, its sensor crops out the edges of the frame, which narrows the field of view as if the focal length had increased. This is not the case with a full-frame sensor, as no cropping is involved.

For example, in the Nikon ecosystem, a crop-sensor camera such as the D5600 has a ‘multiplier factor’ of 1.5x. Thus, if I mount a 35mm f/1.8 lens on my Nikon D5600, the field of view is equivalent to roughly 52.5mm on full frame. Mount the same lens on a full-frame Nikon body such as the D850 and it behaves as a plain 35mm.

Similarly, if you mount a 35mm lens on a Micro-Four-Thirds camera, which has a crop factor of 2x, the effective focal length doubles to around 70mm.


LEFT: Photo clicked at 35mm on a Full-Frame camera. CENTER: Photo clicked at 35mm on a Crop-Sensor camera. RIGHT: Photo clicked at 35mm on a Micro-Four-Thirds camera.
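Those worked examples boil down to a single multiplication; a small helper makes the pattern explicit (illustrative Python, using the crop factors mentioned above):

```python
def equivalent_focal_length(focal_mm: float, crop_factor: float) -> float:
    """Full-frame-equivalent focal length: the focal length that would give the
    same field of view on a 35mm sensor."""
    return focal_mm * crop_factor

print(equivalent_focal_length(35, 1.0))   # 35.0  - full frame (e.g. Nikon D850)
print(equivalent_focal_length(35, 1.5))   # 52.5  - APS-C (e.g. Nikon D5600)
print(equivalent_focal_length(35, 2.0))   # 70.0  - Micro Four Thirds
```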

Depth of Field

Similar to focal length, the f-stop marking of a lens is based on the full-frame 35mm format, and a ‘multiplier effect’ applies when comparing depth of field across sensor sizes. As we know, the f-stop or aperture is the single most important factor affecting depth of field.

Thus, at equivalent framing and the same f-stop, a Micro-Four-Thirds camera gives us more (deeper) depth of field than a full-frame camera. For example, an image shot at f/1.8 on a Micro-Four-Thirds camera would have a depth of field similar to an image shot at f/3.6 on a full-frame camera, or at f/2.7 on a crop-sensor camera. This assumes the effective focal length and other shooting conditions are the same.
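The same multiplier gives the depth-of-field comparison above (illustrative only; this is about equivalent depth of field, not exposure):

```python
def equivalent_aperture(f_number: float, crop_factor: float) -> float:
    """Full-frame-equivalent f-stop for depth-of-field comparisons only.
    Exposure (light per unit sensor area) at a given f-number is unchanged."""
    return f_number * crop_factor

print(equivalent_aperture(1.8, 2.0))   # 3.6 - f/1.8 on Micro Four Thirds ~ f/3.6 on full frame
print(equivalent_aperture(1.8, 1.5))   # 2.7 - f/1.8 on APS-C ~ f/2.7 on full frame
```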

Low Light Performance

Generally, full-frame cameras provide not only better low-light and high-ISO performance but also better dynamic range. Combined, these factors produce a much better image output than any crop-sensor camera can achieve.

Full-frame cameras are capable of capturing the most light and will almost always outperform an APS-C or Micro-Four-Thirds camera body in low light. Micro-Four-Thirds sensors don’t perform well in low-light conditions where the ISO needs to be cranked up to, say, above 2000.

For these reasons, despite full-frame camera kits being expensive, bulky and heavy to carry around, they are still industry-standard and the preferred cameras for virtually all professional photography work.

Conclusion

Thus, while full-frame DSLRs remain the industry standard even today, we cannot ignore the undeniable advantages of Micro-Four-Thirds cameras. Micro-Four-Thirds cameras, such as the Olympus E-P5 and the Panasonic GH5, are affordable and easy to carry around, enabling a much larger group of people (hobbyists and enthusiasts rather than professionals) to get DSLR-like shooting capabilities at a fraction of the price.

Ultimately, factors such as your budget, intended use and other criteria determine whether you choose a full-frame, crop-sensor, or Micro-Four-Thirds camera.

Read more info on sensors here.


 

LG patent describes smartphone camera with 16 sensors and lenses

26 Nov

With a total of five, LG’s current flagship smartphone, the V40 ThinQ, already offers more camera modules than most of its competitors. However, it appears the current dual- and triple-camera setups are only the beginning of a multi-camera arms race in the mobile industry.

According to a recent patent filing discovered by Dutch technology publication LetsGoDigital, the Korean manufacturer could be working on a smartphone with 16 sensors and lenses in the main camera array. The individual modules are arranged in a 4 x 4 matrix and designed to record the same scene from slightly offset angles, enabling the capture of 3D video or interesting image manipulation effects.

One example shows how it would be possible to rotate the head of a portrait subject (or of the teddy bear in the illustration below) to the angle you like best after capture.

Another example shows how the multi-cam technology could be combined with artificial intelligence features. Users could select a subject by drawing around it. The system then searches for other photos of the same person and offers alternative, potentially more flattering, “heads” to replace the original with. This could be a useful function when one person in a group shot has their eyes closed, for example.

The patent also describes how the main camera array could be used to take selfies using a simple mirror, a concept that users of older flip-phones or compact cameras might still be familiar with.

As usual there is no way of knowing if or when the technology will make it into a final product but it’s great to see manufacturers are working on new ways of making the cameras in our pockets even more powerful. For more detail, you can have a look at the patent documents on the USPTO website.

Articles: Digital Photography Review (dpreview.com)

 

Samsung announces two new 1/2-inch sensors likely destined for future Galaxy devices

31 Oct

Recent flagship smartphones have shown the newest arms race in the world of mobile photography is how many lenses you can stick on a device, but Samsung isn’t giving up on the megapixels yet. Samsung has announced a new pair of half-inch image sensors destined for future smartphones: the 48-megapixel GM1 and 32-megapixel GD1.

Both the 48MP ISOCELL Bright GM1 and 32MP ISOCELL Bright GD1 have 0.8µm pixels and are backside illuminated (BSI) CMOS sensors that use Samsung’s latest pixel isolation technology, nicknamed ISOCELL Plus. They also use Samsung’s Tetracell technology, which merges four pixels together to create a single pixel that’s more effective in low-light environments. Samsung claims “the GM1 and GD1 can deliver light sensitivity equivalent to that of a 1.6µm-pixel image sensor at 12MP and 8MP resolution, respectively.”

Both sensors support gyro-based electronic image stabilization and the 32MP GD1 supports real-time HDR image capture.

Samsung expects the ISOCELL Bright GM1 and GD1 to be in mass production by the end of 2018, which would likely pave the way for an appearance in future Samsung Galaxy devices in 2019.

Samsung Introduces Two New 0.8µm ISOCELL Image Sensors to the Smartphone Market

Ultra-small pixel size combined with ISOCELL Plus and Tetracell technologies enhance sharpness and detail in smartphone photos

Samsung Electronics, a world leader in advanced semiconductor technology, today introduced two new 0.8-micrometer (µm) pixel image sensors – the 48-megapixel (Mp) Samsung ISOCELL Bright GM1 and the 32Mp ISOCELL Bright GD1.

“Demand for ultra-small, high-resolution image sensors are growing as smartphones evolve to deliver new and more exciting camera experiences for users,” said Ben K. Hur, vice president of System LSI marketing at Samsung Electronics. “With the introduction of our cutting-edge 0.8µm-pixel Samsung ISOCELL Bright GM1 and GD1 image sensors, we are committed to continue driving innovation in image sensor technologies.”

As cameras are becoming a key distinguishing feature in today’s mobile devices, smartphone makers are faced with the challenge to fit multiple cameras into the sleek designs of their latest flagships. At a reduced pixel size, the new sensors provide greater design flexibility, enabling camera module manufacturers to build smaller modules or pack more pixels into existing designs, and consequently allowing smartphone makers to maximize space utilization in slim, bezel-less smartphones.

The GM1 and the GD1 sensors are based on the company’s latest pixel isolation technology – the ISOCELL Plus* – which optimizes performance especially for smaller-dimension pixels, making them the ideal solution for today’s super-resolution cameras. In addition, thanks to Tetracell technology, where four pixels are merged to work as one to increase light sensitivity, the GM1 and GD1 can deliver light sensitivity equivalent to that of a 1.6µm-pixel image sensor at 12Mp and 8Mp resolution, respectively. The sensors also support Gyro-based electronic image stabilization (EIS) for fast and accurate image capture.

A real-time high dynamic range (HDR) feature is added to the GD1 to deliver more balanced exposure, richer color and detail when filming selfie-videos or streaming live video content even in low-light, high-contrast environments.

The Samsung ISOCELL Bright GM1 and GD1 are expected to be in mass production in the fourth quarter of this year.

*Samsung first announced its ISOCELL technology in 2013. It reduces color crosstalk between pixels by placing a physical barrier between them, allowing small pixels to achieve higher color fidelity. Based on this technology, Samsung introduced the industry’s first 1.0µm-pixel image sensor in 2015 and a 0.9µm-pixel sensor in 2017. In June 2018, Samsung introduced an upgraded pixel isolation technology, ISOCELL Plus.

Articles: Digital Photography Review (dpreview.com)

 