
Posts Tagged ‘into’

Frii Designs’ new Conda Strap is a camera strap that turns into a flexible mount

07 Nov

Frii Designs, a company known for its unconventional photography accessories, has announced a forthcoming campaign to help fund its new Conda Strap, a camera shoulder strap that also doubles as a flexible mount and grip.

Somewhere between a Joby Gorillapod and a shoulder strap, the Conda starts as a flexible shoulder strap that attaches directly to your camera via the tripod mount. When you need to use it as a tripod, simply flip the lever at the top and the Conda stiffens up for moulding into whatever shape you need.

You can even separate the strap, wrap it around objects for further security and lock it back in place with the lever engaged to ensure your gear doesn’t take any unwanted tumbles.

It’s an interesting concept and certainly carves out a niche in a market that’s fairly saturated. Due to the components required to turn it into a mounting solution, it doesn’t necessarily look like the most comfortable shoulder strap — and certainly not the lightest — but if it means you don’t have to carry around even a small tripod or mounting solution, it might be worth the compromise.

The Conda Strap will come in two versions: Conda Strap and Conda Strap Plus. The Conda Strap is the ‘light’ version of the two, designed for mirrorless cameras and light DSLR cameras, while the Conda Strap Plus is the more heavy-duty model for larger mirrorless setups or heavier DSLRs.

The Kickstarter campaign for the Conda Strap and Conda Strap Plus will go live on November 12. Frii Designs notes the Conda Strap will be available for a pledge of $97. We have inquired about the price of the Conda Strap Plus and will update this article accordingly when we receive a response.

If successfully funded, the first units are expected to ship out in March 2021. You can sign up to be notified as soon as the campaign goes live on Frii Designs’ website.


Disclaimer: Remember to do your research with any crowdfunding project. DPReview does its best to share only the projects that look legitimate and come from reliable creators, but as with any crowdfunded campaign, there’s always the risk of the product or service never coming to fruition.

Video: Weird lens guru turns $20 Carl Zeiss projector lens into a swirly-bokeh camera lens

26 Oct

Weird lens guru Mathieu Stern is back at it with a new video that shows images captured with two $20 Carl Zeiss projector lenses he converted into camera lenses.

As with many of Stern’s DIY projector lens projects, both of these lenses — a 120mm F1.9 and a 105mm F1.9 — lack any focusing mechanism and have no adjustable aperture. While the aperture isn’t so easy to address, the video briefly shows how he uses an M65 helicoid ring adapter to give the lenses manual focus. Although not shown in the video, Stern then uses an M65 to Sony E-mount adapter to attach the custom lens to his Sony camera.

The resulting imagery captured with the lenses produces pronounced ‘swirly’ bokeh and gives very sharp separation between the subject and the background. It’s not going to win any resolution or edge-to-edge sharpness contests, but considering you can pick up similar projector lenses for around $20 online and a set of adapters for your camera for roughly $50, it’s a cheap way to get some unique shots.

Stern has a full list of the components he used in the video’s description on YouTube. You can find more of his work on his YouTube channel and website, which also features his always-growing ‘Weird Lens Museum.’

NASA translates Milky Way images into sound using sonification

09 Oct

NASA has used sonification, the process of turning data into audio in order to perceive it in a new way, to reveal the ‘sounds’ of our universe. A video containing the generated audio was recently published by NASA’s Marshall Space Flight Center in Alabama. The data, in this case, comes from NASA’s Chandra X-Ray Observatory and other telescopes that imaged the Milky Way in optical and infrared light in addition to observing X-rays.

NASA creates composite images of space using the data gathered by its observatories, providing the public with a visual look at things that are otherwise beyond the means of human perception. Sight represents only one way that humans can perceive data, however, with NASA pointing out that sonification makes it possible to experience the same data through hearing.

The space agency explains:

The center of our Milky Way galaxy is too distant for us to visit in person, but we can still explore it. Telescopes give us a chance to see what the Galactic Center looks like in different types of light. By translating the inherently digital data (in the form of ones and zeroes) captured by telescopes in space into images, astronomers create visual representations that would otherwise be invisible to us.

But what about experiencing these data in other senses like hearing? Sonification is the process that translates data into sound, and a new project brings the center of the Milky Way to listeners for the first time.

This project represents the first time data from the center of the Milky Way has been processed as audio. For each image, the ‘sounds’ of space are played from left to right. NASA mapped the intensity of the light in the images to volume, while stars and other ‘compact sources’ are translated into individual notes. The space dust and gases are played as a fluctuating drone, and the vertical position of light controls the pitch.
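
For readers curious how such a mapping works in practice, here is a minimal sketch in Python. It assumes a grayscale image with values from 0 to 1; the note range, column duration and brightness threshold are illustrative choices, not NASA’s actual parameters.

    import numpy as np

    def sonify(image, base_freq=220.0, octaves=2, sample_rate=44100,
               col_duration=0.05, bright_thresh=0.8):
        """Scan a grayscale image left to right, one column per time step:
        column brightness -> volume, height of bright pixels -> pitch."""
        h, w = image.shape
        t = np.linspace(0.0, col_duration, int(sample_rate * col_duration), False)
        chunks = []
        for x in range(w):
            col = image[:, x]
            volume = col.mean()                           # light intensity -> volume
            sources = np.nonzero(col > bright_thresh)[0]  # 'compact sources' -> notes
            chunk = np.zeros_like(t)
            for y in sources:
                # higher in the frame -> higher pitch, spanning `octaves`
                freq = base_freq * 2.0 ** (octaves * (1.0 - y / h))
                chunk += np.sin(2 * np.pi * freq * t)
            if sources.size:
                chunk *= volume / sources.size            # normalize, apply volume
            chunks.append(chunk)
        return np.concatenate(chunks)                     # mono waveform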

NASA has provided several versions of its sonification project, including solo tracks with audio for the observations made by each source individually (Hubble, Spitzer, Chandra, etc.), plus a version in which all of the data is combined to form an ensemble, with each telescope serving as a different instrument. Listeners can ultimately hear audio that translates data observed across a massive 400 light-years, according to the space agency.

‘Sound plays a valuable role in our understanding of the world and cosmos around us,’ NASA says, pointing out that the observations from each telescope represent different aspects of the galaxy around us. The image sourced from Hubble represents the energy in parts of the Milky Way where stars are forming, whereas the image from Spitzer provides data on the ‘complex structures’ within the galaxy’s dust clouds.

NASA has a website dedicated to sound produced from Chandra observation data called ‘A Universe of Sound.’ Additional audio tracks can be found on this website, including ones of various pulsars, star systems and notable celestial features like the ‘Pillars of Creation.’

Via: Laughing Squid

Earth from 100,000 feet: Sigma sent the fp mirrorless camera into near space

03 Oct

Sigma UK recently collaborated with the company Sent Into Space to send a pair of Sigma fp full frame mirrorless cameras into the upper atmosphere. Sigma 14mm F1.8 lenses were used on each camera. It’s a notable kit because it combines the world’s smallest and lightest full frame mirrorless camera with the brightest full frame 14mm prime lens available.

The Sigma fp cameras and 14mm F1.8 lenses were attached to weather balloons and sent up to an altitude of roughly 19 miles (about 30.5 km). At altitude, the cameras captured high-resolution photos and 4K RAW video of Earth.

No good marketing operation is complete without stunning media to share with prospective customers. Sigma UK published a video to document the process of sending Sigma fp cameras into near space and show off the amazing results of the project.

The launches took place in Sheffield. The first Sigma fp to gain altitude was dedicated to recording 12-bit 4K UHD Raw video, while the second camera was dedicated to capturing 24.6MP still images. Each camera was part of a kit that included on-board equipment to provide data and telemetry to the Sent Into Space team on the ground.

The balloons, filled with hydrogen, expand considerably during the ascent: as the atmosphere gets thinner, the pressure around the balloon drops and the gas inside expands. At a certain altitude, the balloon bursts and the equipment returns to the surface aided by onboard parachutes. As Chris Rose of Sent Into Space points out in the video above, the payload will actually descend at up to 250 mph before the atmosphere gets thick enough to act against the parachute.

Each camera was sent into space with an attached 2TB SSD. Even with that much storage capacity, the fp couldn’t record 4K UHD RAW video for the entire flight. The stills camera was set up with an interval timer to capture an image every five seconds for the entire journey.

To learn more about the Sigma fp, head to our First Impressions. For more on the Sigma 14mm F1.8 DG HSM Art lens and its applications for space photography, check out Jose Francisco Salgado’s ‘Astrophotography with the Sigma 14mm F1.8 Art lens’ article.

(DIY Photography)

Astropad announces Luna Display for Windows, turning your iPad into a wireless display for your PC

01 Oct

Astropad successfully funded Luna Display for Mac on Kickstarter in late 2019. The company is back with another project, Luna Display for Windows. The new project was fully funded in its first hour on Kickstarter. Luna Display for Windows allows iPad owners to turn their Apple device into a wireless PC display.

Astropad states that Luna Display for Windows is the only hardware solution available to turn any modern iPad into a wireless second display for a PC or Mac. You can connect your iPad via Wi-Fi or with a wired connection; for a physical connection, you can select between USB-C and HDMI Luna Display units. Astropad promises ‘lag-free lightning-fast speeds’ and compatibility with any Windows or Mac application.

Luna Display for Windows boasts many features, including support for iPad touch gestures. Image credit: Astropad

Luna Display for Windows supports external keyboard and mouse peripherals, full iPad touch gestures and Apple Pencil pressure sensitivity. Astropad promises low latency and a ‘crystal clear’ display.

When working on a single display, especially a smaller notebook display, it can be difficult to fit your entire workspace on screen. With a second display, you can instantly expand your workspace, giving you additional flexibility. If you don’t need a second display but would like to be untethered from a desktop computer, you can use Luna Display wirelessly to work on your iPad anywhere you can connect to your Wi-Fi network, such as in a more comfortable room in your home or maybe even outside.

In order to ensure a low-latency, clear wireless image, Luna leverages its own custom video compression technology, LIQUID. The rendering system adjusts in real time to prevailing network conditions to ensure fast performance. Luna Display promises latency as low as 16 milliseconds, considerably faster than the 204ms of Windows Connect or the 64ms of Apple’s AirPlay technology. Further, LIQUID uses GPU acceleration when available to ensure stable performance.
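
For scale, converting those quoted latencies into frames of lag at a common 60Hz refresh rate is simple arithmetic:

    # Frames of lag at a 60 Hz refresh, using the latencies quoted above
    for name, ms in (("Luna Display", 16), ("AirPlay", 64), ("Windows Connect", 204)):
        print(f"{name}: {ms} ms ~ {ms / (1000 / 60):.1f} frames at 60 Hz")
    # Luna Display: ~1.0 frame; AirPlay: ~3.8 frames; Windows Connect: ~12.2 frames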

Luna Display requires the use of a small device, which you can plug into an HDMI port (Windows compatibility only) or USB-C (compatible with Windows and Mac). The device’s size varies slightly with the selected port type, but in either case, it weighs a little over an ounce.

Once your Luna Display is plugged into your computer, you will need to open a dedicated Luna Display app on your computer and your iPad. As soon as the applications are running, you’re good to go.

Luna Display for Windows requires Microsoft Windows 10 64-bit, Build 1809 or later. It requires an Intel or AMD processor with 64-bit support that is 2 GHz or faster. As for RAM, Luna Display requires at least 4 GB. Luna Display for Windows is compatible with Intel HD Graphics 520, AMD Radeon RX Vega 3, NVIDIA GeForce 820m or later; or an equivalent DirectX 11 compatible GPU. On the iPad side of the equation, nearly any iPad will work. Luna Display is compatible with iPad Mini 2 (2013 or later), iPad Pro (2016 or later), iPad 5th generation (2017 or later) and iPad Air (2013 or later). Your iPad must be running iOS 9.1 or later and 32-bit devices are not supported.

Image credit: Astropad

You can pledge $49 USD to support Luna Display for Windows and save up to $31 off the retail price. Luna Display for Windows is scheduled to begin shipping in May 2021. For additional information and full pledge details, head to the Luna Display for Windows Kickstarter project page.


Disclaimer: Remember to do your research with any crowdfunding project. DPReview does its best to share only the projects that look legitimate and come from reliable creators, but as with any crowdfunded campaign, there’s always the risk of the product or service never coming to fruition.

Coming into focus: how Panasonic’s DFD gamble may yet pay off

08 Sep
Panasonic’s DFD autofocus system tries to determine distance information without masking pixels as most on-sensor phase detection systems do.

We’ve been impressed by what we’ve seen so far of the autofocus changes Panasonic introduced with its new S5. The latest version doesn’t iron out all the quirks, but continuous AF for stills, in particular, appears much improved. Beyond this, the details we were given about how these improvements were achieved are interesting: they highlight both the benefits and the continued challenges of the company’s Depth-from-Defocus system.

It’s a system with a poor reputation in some quarters, but one that’s continued to improve significantly in recent years. The S5 shows both how far DFD has come and what’s still needed.

What is depth-from-defocus?

Fundamentally, focus is a question of distance: adjusting the lens optics until the light rays from a subject at a particular distance converge on the sensor plane.

There are two broad approaches used by cameras to conduct autofocus: ones that hunt until they find the point that’s in focus, and ones that try to interpret the depth in the scene so that they can drive the focus without the same need to hunt.

DFD is Panasonic’s system for interpreting depth. It works by making a tiny focus adjustment and analyzing how the image has changed as a result. With an understanding of the out-of-focus characteristics of the lens being used, the camera can interpret these changes and build a depth map of the scene.

This challenge is made more difficult if elements in the scene are moving: the camera’s depth map needs to be constantly updated, because the distances are changing. This is where subject recognition and algorithms designed to anticipate subject movement come into play, since they allow the camera to understand which bits of the scene are moving and what’s likely to happen next.

The alternative: phase detection

Most manufacturers have settled on phase detection as the heart of their AF systems: this views the target from two different perspectives, then works out how much the focus needs to be moved in order to bring those two perspectives into phase with one another (the point at which that subject is in focus).

In mirrorless cameras, this is usually done by having partial pixels that only receive light from one or other half of the lens, providing the two differing perspectives. The downside of these systems tends to be that the partial pixels either receive less light than a full pixel or, in systems that combine pairs of half pixels, that the complexity of the electronics (and the noise they produce) increases. The performance can be excellent, but to a degree you’re trading away some light capture or noise performance to attain that AF performance.
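
As a very rough illustration of the DFD loop described above, the Python toy below compares local sharpness before and after a small focus nudge. Real DFD depends on per-lens defocus profiles and far more sophisticated modeling, so treat this purely as a sketch of the core logic, not Panasonic’s algorithm.

    import numpy as np

    def local_sharpness(img, block=16):
        """Mean gradient energy per block: a crude stand-in for per-region focus."""
        gy, gx = np.gradient(img.astype(float))
        energy = gx ** 2 + gy ** 2
        h, w = img.shape
        hb, wb = h // block, w // block
        return energy[:hb * block, :wb * block].reshape(hb, block, wb, block).mean(axis=(1, 3))

    def dfd_step(before, after, focus_step):
        """Given frames captured before/after a tiny focus adjustment, estimate
        per block which way to drive focus, and how trustworthy that estimate is."""
        s0, s1 = local_sharpness(before), local_sharpness(after)
        delta = s1 - s0
        direction = np.sign(delta)           # +1: the nudge helped, -1: it hurt
        # The size of the change hints at where each region sits on the lens's
        # focus-response curve; combined with a lens profile, this becomes a depth map.
        confidence = np.abs(delta) / (s0 + s1 + 1e-9)
        return direction * focus_step, confidence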

What’s new with the S5

Panasonic told us that the S5’s autofocus has been improved by a number of fundamental changes. Part of it comes from improved subject recognition, based on deep learning (an algorithm trained to recognize specific types of subject), which helps the camera know what to focus on and not to refocus away from it. For instance, teaching the algorithms to recognize human heads when they’re looking away means the camera understands it doesn’t need to find a new subject or refocus when the face it had recognized suddenly ‘disappears.’

Another part comes from rewriting the AF code to make better use of the available processing power. During the development of the S5, Panasonic’s engineers discovered they didn’t have to lean on the machine-learning-trained algorithms for both subject recognition and movement tracking: they could combine the machine-learned recognition with their existing, faster distance and movement algorithms, which freed up processing power to run the process much more frequently.

This video shows the view through the viewfinders of the S5 (left) and older S1 (right). Note that even when the S1 is in focus, there’s still some very obvious pulsing and fluttering; this is much less noticeable in the S5.

Finally, other software improvements allowed the entire AF system to be run faster: providing more up-to-date information to the processor. The combined result of these changes, for stills shooters at least, is much improved autofocus with less reliance on the trial-and-error hunting of contrast detection AF. This, in turn, reduces the focus flutter in the viewfinder, making it easier for a photographer to follow the action they’re trying to capture, so you get an improved experience as well as improved focus accuracy.

Video is a greater challenge

But this approach is primarily a benefit for stills photography. Video is a more difficult challenge, partly because the focusing process is visible in the resulting video, and partly because, on a technical level, you have to read out the sensor in a manner similar to that of the video you’re trying to produce. In stills mode you can reduce the resolution of the sensor feed (in terms of spatial resolution or bit depth) to increase the readout rate, which increases how often the AF system receives new information about what’s happening. This low-res feed during focus doesn’t have any impact on the final image.

For video you need to run the sensor in a mode that’s tied to that of the footage you’re trying to capture

In high-res video modes you need to run the sensor at a bit depth, pixel resolution and frame rate tied much more closely to those of the footage you’re trying to capture. At best, you get to read the sensor at double the output frame rate: video is typically shot using shutter speeds at least twice as fast as the frame rate, so each frame of 30p video is usually made up from a 1/60th second chunk of time or less, leaving time to conduct another readout for the AF system before you have to expose your next frame. That means reading the sensor out at 60 fps for 30p output.

The problem is that full-frame sensors are big and slow to read out. The sensor in the S5 is very similar to the ones used in the likes of the Sony a7 III, which typically take over 21ms to read out in 12-bit mode: not quite fast enough to run at 48 fps (which requires a complete readout roughly every 20.8ms) for double-speed capture of 24p footage. This has the unfortunate side effect of meaning the camera’s worst AF performance comes in the mode most likely to be used by the most demanding video shooters.
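
The arithmetic behind that claim is straightforward, taking the ~21ms readout figure above:

    readout_ms = 21.0                      # approx. 12-bit full-sensor readout
    for fps in (24, 30, 48, 60):
        budget_ms = 1000.0 / fps           # time available per readout
        verdict = "fits" if readout_ms <= budget_ms else "too slow"
        print(f"{fps} fps -> {budget_ms:.1f} ms per readout: {verdict}")
    # 48 fps (double-rate 24p) allows only ~20.8 ms, so a ~21 ms readout just misses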

Unfortunately for a brand so associated with video, the S5’s full-frame 4K/24p is the mode that delivers its weakest AF performance.

Despite this challenge, Panasonic has reworked the AF response even in this weakest mode to make it less prone to unnecessary refocusing.

A bright new tomorrow

The updates in the S5 show us a couple of things. Firstly, that Panasonic is well aware of the criticisms being leveled at its cameras and is continuing to fine-tune its software to squeeze everything it can out of the current hardware.

DFD is not there yet but, in principle, staying committed to an AF method that gets better as hardware gets faster may prove a good choice

But, more significantly, the improvements we’re seeing when shooting stills and when using AF-C during bursts of stills in particular suggest that some of the downsides we’ve seen in the past aren’t necessarily inherent flaws of the DFD concept. Instead they’re aspects that can improve as sensor readout and processing power improve. You don’t need to be a semiconductor physicist to recognize that improvements in those areas are always coming.

In principle, in the long run, staying committed to an AF method that gets better as hardware gets faster may prove to be a better choice than an approach that trades off light capture for AF performance. But the S5’s performance, particularly in video, shows DFD is not there yet. The risk for Panasonic is that these fast-readout sensors and powerful processors may not arrive before the majority of full-frame buyers have committed themselves to other camera systems.

Sony’s Imaging Edge Webcam utility turns 35 of its cameras into webcams on Windows 10

21 Aug

It might be one of the last manufacturers to the party, but Sony has just released its Imaging Edge Webcam utility, making it possible to use select Sony cameras as webcams with compatible livestreaming and video conferencing programs.

Like other manufacturers’ webcam utilities, Sony’s Imaging Edge Webcam is only available for Windows 10 computers, for now. We’ve inquired with Sony about a macOS version and will update with more information when we have it.

Below is a list of the cameras supported by Imaging Edge Webcam utility as of version 1.0.0:

E-mount (ILCE-)

  • ILCE-7M2
  • ILCE-7M3
  • ILCE-7RM2
  • ILCE-7RM3
  • ILCE-7RM4
  • ILCE-7S
  • ILCE-7SM2
  • ILCE-7SM3
  • ILCE-9
  • ILCE-9M2
  • ILCE-5100
  • ILCE-6100
  • ILCE-6300
  • ILCE-6400
  • ILCE-6500
  • ILCE-6600

A-mount (ILCA-)

  • ILCA-77M2
  • ILCA-99M2
  • ILCA-68

Digital Still Camera (DSC-) / Vlog camera (ZV-)

  • DSC-HX95
  • DSC-HX99
  • DSC-RX0
  • DSC-RX0M2
  • DSC-RX100M4
  • DSC-RX100M5
  • DSC-RX100M5A
  • DSC-RX100M6
  • DSC-RX100M7
  • DSC-RX10M2
  • DSC-RX10M3
  • DSC-RX10M4
  • DSC-RX1RM2
  • DSC-WX700
  • DSC-WX800
  • ZV-1

The utility is free to download on Sony’s website. Simply select the camera you intend to use the program with and click the download link. Sony has also provided a thorough guide on how to install the utility and set your camera up for use.

Photographer turned his front door into a large format camera to capture portraits during the pandemic

15 Aug
The ongoing COVID-19 pandemic has made social distancing critically important. While creating distance is good for our physical health, it makes work difficult for photographers, especially portrait photographers. To overcome this challenge, Kyle Roper, the producer behind The Skyscraper Camera Project, transformed the front door of his home into a large-format analog camera. This has allowed him to safely capture portraits and launch a new photo series, Door Frames.
A look at the makeshift front door camera from inside Roper’s living room.

Given ample time at home and the desire to continue creating images while observing social distancing restrictions, Roper converted his front door into a camera obscura using a magnetic dry erase board, gaffer’s tape, cardboard boxes, a dark cloth, a C-stand, clamps and sandbags. For photo paper and film, Roper uses Ilford RC IV Multigrade Photo Paper, Ilford Direct Positive Paper and Ilford Ortho 80 Plus. His lens of choice is a Nikkor-W 300mm F5.6 in a Copal shutter.

An overview of all the elements of the front door camera.

Roper states that he was inspired by his friend Brendan Barry, an artist and camera builder we’ve featured many times before. Roper was also inspired by the work of Dorothea Lange and Francesca Woodman. The former is a particularly interesting inspiration given Lange’s famous documentary and photojournalism work for the Farm Security Administration during the Great Depression.

The conveniently-located window in Roper’s front door.

Of Door Frames, Roper says, ‘When you have nothing but an abundance of time, you take the time and slow things down. You find that these antiquated processes can reveal and create such beauty.’ Below is a collection of portraits Roper captured with his front door camera:

(Embedded sample gallery: Door Frames portraits)

In order to communicate with his subjects outside, Roper speaks to them from inside his home using a speakerphone. Roper then affixes his photographic paper to the image box using the magnetic dry erase board and captures an image with his Nikkor lens wide open, because his photo paper is ISO 3 or 6. Once an image is captured, Roper develops it in his bathroom, which he has converted into a darkroom.
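
To get a feel for why the lens has to be wide open, here’s a rough exposure calculation for ISO 3–6 paper. The EV figure and sunny conditions are illustrative assumptions, not Roper’s actual settings.

    import math

    def shutter_seconds(iso, f_number, ev100=15):
        """Shutter time from exposure value: EV = log2(N^2 / t), with
        EV 15 at ISO 100 roughly corresponding to direct sunlight."""
        ev = ev100 + math.log2(iso / 100)     # rescale EV to the paper's ISO
        return f_number ** 2 / 2 ** ev

    for iso in (3, 6):
        print(f"ISO {iso} at f/5.6 in full sun: ~{shutter_seconds(iso, 5.6):.3f} s")
    # ISO 3: ~0.032 s; ISO 6: ~0.016 s. Stopping down to f/16 at ISO 3 would
    # already mean ~1/4 s, so the wide-open Nikkor keeps exposures manageable.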

Prints in the process of being made in Roper’s makeshift darkroom.

To view more of Kyle Roper’s work, visit his website and follow him on Instagram.


Image credits: All photos used with permission from Kyle Roper

A mysterious firmware update turns the Viltrox 85mm F1.8 lens into an even faster F1.6 prime

07 Aug

The $400 Viltrox 85mm F1.8 lens is a popular choice for Sony E and Fujifilm X users due to its compelling blend of performance and value. Owners have remarked that the lens delivers sharp image quality even when shot wide open. It now appears that wide open can be made even wider, with a firmware update that allows the lens to become an F1.6 prime.

Photographer Stefan Malloch has published a video tutorial, seen below, which shows how to use the USB port on the lens to update its firmware. The update allows the lens to open its aperture wider, changing the maximum aperture from F1.8 to F1.6: an extra one-third of a stop of light-gathering capability.
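
That one-third of a stop figure checks out: the difference in stops between two f-numbers is 2 × log2(N1/N2).

    import math
    # Difference in stops between two f-numbers: stops = 2 * log2(N1 / N2)
    print(2 * math.log2(1.8 / 1.6))   # ~0.34, i.e. roughly one-third of a stop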

As PetaPixel notes, there are conflicting reports as to the origin of the firmware. Sony Addict reported that the firmware was released officially in China. FujiRumors, on the other hand, reached out to Viltrox and was told that firmware to turn the F1.8 lens into an F1.6 lens had not been released. All this is to say that installing (possibly unofficial) firmware on your lens is a risk with unknown consequences.

Supposing you still want to update your lens using Malloch’s video above, what can you expect from the Viltrox 85mm F1.6 lens? Malloch also published an overview video of the lens, including sample images.

As mentioned earlier, the Viltrox 85mm F1.8 (or F1.6) lens is available as a full-frame lens for Sony E mount or for the APS-C Fujifilm X system. The fast, autofocus-capable prime can focus as closely as 2.62′ (0.8m). The lens comprises 10 elements in 7 groups, including one ED element and four ‘short-wavelength and high-transparency’ elements. It has a 72mm filter thread and weighs 636g (1.4 lbs). You can learn more about the lens on Viltrox’s website.

Canon R5 / R6 overheat claims tested: Stills shooting, setup quickly cut into promised capture times [UPDATED]

07 Aug
Testing conducted in Seattle by our Technical Editor, Richard Butler. Real-world production experiences by Jordan Drake, the director and editor of many of our ‘DPRTV’ videos.

Originally published Aug 3, updated Aug 6: conclusions and analysis revised based on additional experience with the camera

If you have any interest in cameras, you may have witnessed the heated discussions lately around the new Canon EOS R5 and R6’s tendency to overheat when capturing video internally. The Internet tends to amplify the most extreme version of any story or phenomenon, which might have led you to get the impression that the cameras are unusable.

Jordan’s EOS R5 experience

We shot for 10 hours at a variety of locations, which I thought would give the camera ample opportunity to cool down. I planned to shoot the episode in the 4K HQ mode, with occasional 4K/120p and 8K shots peppered throughout. I quickly realized that setting up a shot and menu-diving would reduce the amount of record time I had for HQ, so I found myself spending far less time previewing the shot before rolling, adding a layer of stress.

Eventually, I realized I couldn’t record all the talking points in 4K HQ, and settled on using 4K HQ for wide shots and standard, line-skipped 4K for closeups. This made shooting sustainable, though I found myself avoiding capturing any spontaneous establishing shots or cutaways, lest I drop the dreaded overheating clock a bit lower. While our host Chris took it in his stride, I can only imagine how frustrating it would be for the talent to not know if the camera will last until the end of a take.

I also found myself heavily rationing the 4K/120p, as it really chews up your remaining shooting minutes. I spent two minutes capturing the seagull footage in the episode: beforehand, the camera said it would shoot 15 minutes of 4K HQ; when I returned, I had only five minutes remaining!

If the quality difference between 4K HQ and standard 4K capture were not so dramatic, this would bother me less. However, once you start viewing and editing the gorgeous 4K HQ footage, it makes it that much harder to go back to inferior line skipped 4K, and that’s a type of disappointment I don’t want to be dealing with on a shoot.

After extensive testing of both cameras, our conclusions with regard to internal recording are:

  • From a cold start, the Canon EOS R5 and R6 perform in line with the company’s video performance claims.
  • Non-video use cuts into available shooting time, adding significant uncertainty for video shooters.

We tested a pair of R5s and an R6 in a variety of warm conditions and found they consistently performed in line with the limitations Canon acknowledged at launch. In practical terms, however, the cameras are prone to overheating if you shoot for extended periods, and if you have crew or talent waiting to re-start shooting, they may take too long to recover.

It should be noted that Canon did not design either the EOS R5 or R6 to be professional video tools, nor does it primarily market them as such. Based on our testing and real-world usage, we would caution against using them as a substitute for a dedicated video camera.

So why is YouTube saying the sky is falling?

Our testing suggests that the cameras perform in exactly the way that Canon said they would. However, there is an important caveat that Canon’s figures don’t address: although the cameras can repeatedly deliver the amount of video promised, they may not always do so in real-world usage.

Even set to the mode designed to limit pre-recording temperature build-up, the clock is essentially running from the moment you turn the camera on. Video recording is the most processor-intensive (and hence most heat-generating) thing you can do, but any use of the camera will start to warm it up and start chipping away at your recording times. Consequently, any time spent setting up a shot, setting white balance, setting focus or waiting for your talent to get ready (or shooting still images) will cut into your available recording time, and you won’t reliably get the full amount Canon advises.
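
To illustrate how this plays out, here is a purely hypothetical heat-budget toy model. The budget and per-activity rates are invented for illustration and bear no relation to Canon’s actual thermal management:

    # Purely illustrative numbers -- NOT Canon's real thermal behavior
    HEAT_BUDGET = 100.0              # arbitrary heat units before limits kick in

    HEAT_PER_MINUTE = {              # invented per-activity heating rates
        "idle_menus": 1.0,           # camera on: composing, menu-diving
        "stills": 2.0,
        "4k_line_skipped": 3.0,
        "4k_hq": 5.0,
        "8k_raw": 8.0,
    }

    def remaining_minutes(activities, mode="4k_hq"):
        """Minutes of `mode` left after a list of (activity, minutes) pairs."""
        used = sum(HEAT_PER_MINUTE[a] * mins for a, mins in activities)
        return max(0.0, (HEAT_BUDGET - used) / HEAT_PER_MINUTE[mode])

    # From cold you'd get 20 minutes of '4K HQ'; after 15 minutes of setup
    # and 5 minutes of stills, only 15 remain:
    print(remaining_minutes([("idle_menus", 15), ("stills", 5)]))   # -> 15.0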

Not only does this make the R5 a poor fit for many professional video shoots, it also means you can’t depend on the cameras when shooting video alongside stills at, say, a wedding: a situation the EOS R5 is clearly intended for.

Even when left in direct sunshine, the cameras continued to record for the duration Canon promised. However, this is only true when you’re not using the camera for anything else.

The one piece of good news is that the camera’s estimates appear to be on the conservative side: every time the camera said it would deliver X minutes of footage, it delivered what it promised. You can also record 4K footage for much longer if you use an external recorder, but again, this probably isn’t going to suit photographers or video crews looking for a self-contained, do-everything device.

Click here if you want to see our test methods and results.

EOS R5 suggestions:

  • Expect to shoot line-skipped 30p for the bulk of your footage
  • Only use 8K or oversampled HQ 4K for occasional B-Roll
  • 4K/120 and 8K will cut into your shooting time quickest of all
  • Be aware of your setup time and cumulative usage (including stills shooting)

EOS R6 suggestions:

  • Don’t expect to be able to shoot for extended periods
  • Be aware of the need for extensive cooling periods between bursts of shooting

Analysis: Why hadn’t Canon thought about this?

It’s easy to fall into the trap of thinking this means Canon didn’t put enough thought into thermal management for these cameras. Our testing suggests this isn’t the case, but that the cameras’ specs are rather over-ambitious.

Jordan’s EOS R6 experience

I had done some testing prior to my shoot, and was comfortable that overheating wouldn’t be a problem if I stuck to 4K/24p. Unfortunately, my experience on a warm day was quite different from that room-temperature test. There’s no line-skipped 4K mode on the R6, so if the camera overheats, you’re back to 1080p, which will be a jarring transition for viewers watching on larger screens.

While I was able to record much longer with the R6 before encountering the overheat warning, once it appears, the camera takes far longer to cool down than the R5. Our regular drives in an air-conditioned car allowed Chris and Levi’s R5 to function throughout the day, but at one point I was left sitting in the car, babysitting a hot R6, while they went out to shoot. After a one-hour lunch, the R5 had returned to normal, but the R6 still showed a twenty-minute warning.

This was hugely disappointing because, rolling shutter aside, the R6’s video quality is excellent, and I’d be perfectly happy using it over the R5. However, the longer cool-down times would probably lead me to use the R5, dropping to line-skipped 4K from time to time.

While I enjoyed most aspects of using these two cameras, I have no intention of using either of them as a primary video camera. They would be great for grabbing occasional, very high quality video clips, but I’d never want to rely on them for paid work.

With the exception of specialist video models, most cameras that shoot 4K are prone to overheating, regardless of the brand. Some companies let you extend the recording time by ignoring overheat warnings (and risk ‘low-temperature burns’ if you handhold the camera), while others simply stop when they get too hot. This should make it clear that shooting 4K for an extended period is difficult. For instance, Sony says the a7 III will shoot around 29 minutes of 4K video with the temperature warnings set to ‘Std,’ while the Fujifilm X-T4 promises 30 minutes of 4K/30 and 20 minutes of 4K/60.

The cumulative heat is constantly counting against you

8K is four times as much data as natively-sampled 4K and seventeen times more than the 1080 footage that older cameras used to capture so effortlessly. Perfect 2:1 oversampled 4K (downsampled 8K) requires this same amount of data, which is still 1.7x more than is used to create oversampled 4K video from a 24MP sensor. Data means processing, which means heat.
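
Those ratios follow from simple pixel counts. Assuming the R5’s 8192 x 4320 8K capture and a 16:9 crop of a 24MP (6000 x 4000) sensor, both our assumptions:

    # Pixels per frame; data volume scales with pixel count at a given bit depth
    pix_8k   = 8192 * 4320    # DCI 8K as captured by the R5 (our assumption)
    pix_4k   = 4096 * 2160    # natively-sampled DCI 4K
    pix_1080 = 1920 * 1080
    pix_24mp = 6000 * 3375    # 16:9 video region of a 6000 x 4000 (24MP) sensor

    print(pix_8k / pix_4k)    # 4.0   -> 'four times as much data'
    print(pix_8k / pix_1080)  # ~17.1 -> 'seventeen times more than 1080'
    print(pix_8k / pix_24mp)  # ~1.75 -> '~1.7x the data of 4K from a 24MP sensor'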

What’s interesting is that the exteriors of the cameras don’t get especially hot when shooting for extended periods. We’re only speculating, but this could indicate that Canon has tried to isolate the cameras’ internals from external temperature fluctuations, with the downside that they can’t then dissipate internally produced heat.

This would be consistent with us getting the full recording period out of the camera even when tested well above the 23°C (73°F) conditions specified by Canon, and with the fact that leaving the camera’s doors closed and battery in place didn’t change the recovery time. However, while this appears to be workable for the line-skipping 4K mode, the added workload of the higher-quality settings seems to present a problem. Dealing with 1.7x more data than the a7 III and X-T4 is a step too far: the R5 will match them for promised recording duration, but only from cold. This leaves it much more sensitive to any other use beyond video recording.

The EOS R6 is a slightly different matter. It can shoot 40 minutes of 4K derived from 5.1K capture, which is a pretty good performance and may be enough that you won’t often hit its temperature limits. However, even after a 30-minute cooling period, it has only recovered enough to deliver around half of its maximum record time, whereas the EOS R5 recovered nearly its full capability. The more extensive use of metal in the construction of the R5 seems to help it manage heat better than the R6 can.

And, as both Jordan’s and Richard’s experiences show: if you don’t have time to let the cameras cool, that cumulative heat is constantly counting against you.
