Archive for the ‘Uncategorized’ Category

PhotoStatistica is a macOS app for visualizing the EXIF data of your photos

09 Jun

Looking at the metadata for a single image can be helpful, but sometimes you need to get a more macro-level view of your work. Enter PhotoStatistica, a new macOS app that parses through the EXIF data of your photos and breaks it down into infographics and statistical analyses.

The app is developed by Bristol Bay Code Factory and is designed to offer a more visual representation of how you shoot. This information can be used to not only improve your photography and better know what settings you might need to gravitate towards (or avoid), but to also guide you in purchasing future gear. If you find out you tend to shoot around 135mm with your long-range kit lens, it might be worth picking up a 135mm prime; if you tend to shoot at high ISO ratings, maybe you should prioritize low-light capabilities with your next camera or pick up an F1.8 or F1.4 lens.

PhotoStatistica supports JPEGs, TIFFs, DNGs and most proprietary Raw formats. It can sort through nested directories and seek out images, or look directly through Capture One Pro libraries or Apple's Photos app libraries. Once PhotoStatistica sorts through the EXIF data of the images you've selected, you can use the options at the top of the app to visualize the results using bar, pie and pivot charts. You can even export all of the data in CSV format or save the current EXIF set for analysis at a later date.
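As a rough illustration of what an app like this has to do under the hood, here's a minimal sketch that walks a folder, reads each image's EXIF data with Pillow and tallies a focal-length histogram. Illustrative only: Pillow covers the JPEG/TIFF cases here, Raw formats would need a dedicated reader, and PhotoStatistica's actual pipeline isn't public.

```python
# Minimal sketch: histogram the focal lengths across a photo library.
from collections import Counter
from pathlib import Path

from PIL import Image

FOCAL_LENGTH = 0x920A  # standard EXIF tag ID for FocalLength
EXIF_IFD = 0x8769      # pointer to the Exif sub-IFD, where FocalLength lives

def focal_length_histogram(folder):
    counts = Counter()
    for path in Path(folder).expanduser().rglob("*"):
        if path.suffix.lower() not in {".jpg", ".jpeg", ".tif", ".tiff"}:
            continue
        try:
            exif = Image.open(path).getexif()
        except OSError:
            continue  # unreadable or non-image file
        focal = exif.get_ifd(EXIF_IFD).get(FOCAL_LENGTH)
        if focal:
            counts[round(float(focal))] += 1
    return counts

for mm, n in sorted(focal_length_histogram("~/Pictures").items()):
    print(f"{mm:4d} mm: {n}")
```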

PhotoStatistica is currently available to purchase in the macOS App Store for $2.99 / £2.99 / €3.49.

Articles: Digital Photography Review (dpreview.com)

 

Visionox is ready to mass produce under-display front cameras

08 Jun

In the middle of last year, Chinese smartphone manufacturers Xiaomi and Oppo first showcased technology that allows a smartphone's front camera to be installed under, and capture images through, the display. The main benefit of this technology is the eradication of unsightly display bezels, notches or punch holes for housing the front camera.

The technology eventually made it into prototype devices, but in January 2020 Xiaomi VP Lu Weibing said it should not be expected to arrive in a production device any time soon, as there were still a number of challenges to overcome.

The principal reason given at the time was the high pixel density of modern smartphone displays, which blocked too much of the incoming light. Combined with the small image sensors of most front cameras, this meant severely limited light-gathering capability and ultimately sub-par image quality, especially in low-light conditions. In addition, diffraction from the protective glass could lead to color issues.

However, it appears it’s taken less time than expected to solve this issue. According to reports in Chinese media, Visionox, a major OLED manufacturer, is ready to start mass production of displays with under-screen cameras.

Visionox claims it has been able to increase light transmittance and reduce diffraction by using different organic and non-organic film materials that offer higher transparency. On the software side of things, a new algorithm is capable of correcting brightness and color casts as well as viewing angle issues. It also removes the glare that could be seen in sample images from early prototypes.

The pixel density on the portion of the display covering the camera lens has also been modified to allow for better light transmission. On a Full-HD display, the resolution in the specific area where the camera is located could be reduced to HD or even SD levels. We’ll have to wait for the first production devices to see if the change in resolution will be noticeable on the display and if image quality is comparable to more conventional front camera implementations.

Visionox says hundreds of new technologies have been applied in order to get to the mass-production stage. Even with the company now in a position to manufacture the new type of display, it'll still be a while before devices equipped with the technology arrive. The first models with under-display cameras are expected to see the light of day in Q1 2021.

Articles: Digital Photography Review (dpreview.com)

 

Apple patent shows how you may one day be able to capture ‘synthetic group selfies’

08 Jun

In the age of physical distancing, taking selfies with friends has become challenging to say the least, due to the worldwide suggestion to keep six feet apart to prevent the spread of the novel coronavirus. A recently-discovered patent from Apple, however, shows how we might one day be able to take a group selfie without needing to be next to each other — or even in the same room for that matter.

First discovered by Patently Apple, the 'synthetic group selfie' patent wasn't created in response to the COVID-19 pandemic, as it was originally filed back in July 2018. However, it seems more useful than ever, as the desire to feel connected in an age of physical distancing grows.

A pair of illustrations from the patent showing how the layers within the composited scene could be moved around to better frame people within the selfie.

According to the patent, you could create a 'synthetic group selfie' by inviting friends and family to a shared photo session. The group selfie mode would then place those invited to the session next to one another in the image, giving the appearance that everyone is right there in the frame. The patent also notes the mode could be used for video and livestreaming, with options for changing the arrangement of people within the frame.

An illustration from the patent showing how the composition process would work.
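As a toy illustration of that composition process (the patent describes the idea, not an implementation, so everything below is hypothetical), the core of it could be as simple as pasting each participant's cut-out layer onto a shared background:

```python
# Hypothetical sketch of a 'synthetic group selfie' composite: each
# invited participant contributes an RGBA cut-out of themselves, and
# the layers are arranged side by side over a shared background.
from PIL import Image

def synthetic_group_selfie(background, layers, spacing=40):
    canvas = background.copy()
    x = spacing
    for layer in layers:  # RGBA images with transparent surroundings
        # Bottom-align each person; the alpha channel acts as the paste mask
        canvas.paste(layer, (x, canvas.height - layer.height), mask=layer)
        x += layer.width + spacing
    return canvas
```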

As with all patents, it’s unclear if we’ll ever see this option in a forthcoming iOS update. But it would be a convenient option now more than ever, making virtual interactions more social while still staying distanced.

Articles: Digital Photography Review (dpreview.com)

 

Video: Netflix makes its Platon documentary episode free on Youtube

08 Jun

Although Netflix uploaded the video back in April, we've only now discovered that the Platon episode of its 'Abstract: The Art of Design' docuseries is available to view for free on YouTube.

The 2017 docuseries consists of two seasons and features a total of fourteen 45-minute episodes covering the work of some of the world's best artists in their respective fields, from architecture and automotive design to stage design and typography. For its 'Photography' episode, Netflix features Platon Antoniou, more commonly known by his mononym Platon, a renowned portrait photographer whose portfolio features some of the most prominent and powerful figures the world over.

While the other episodes require a Netflix subscription, the Platon episode is now free to view on YouTube. Throughout the 45-minute episode, we get an inside look at the man behind the camera and follow along as he captures a portrait of Colin Powell, former National Security Advisor and retired four-star general.

It’s a fantastic watch from beginning to end. If you have a Netflix subscription, we also suggest watching some of the other episodes in the series.

Articles: Digital Photography Review (dpreview.com)

 

Computational photography part II: Computational sensors and optics

08 Jun

Editor’s note: This is the second article in a three-part series by guest contributor Vasily Zubarev. The first and third parts can be found here:

  • Part I: What is computational photography?
  • Part III: Computational lighting, 3D scene and augmented reality (coming soon)

You can visit Vasily’s website where he also demystifies other complex subjects. If you find this article useful we encourage you to give him a small donation so that he can write about other interesting topics.


Computational Sensor: Plenoptic and Light Fields

Well, our sensors are crap. We've simply gotten used to them and try to do our best with them. Their design hasn't changed much since the beginning of time; only the manufacturing process improved. We reduced the distance between pixels, fought read noise, increased readout speeds and added dedicated pixels for phase-detection autofocus systems. But even if we take the most expensive camera and try to photograph a running cat in indoor light, the cat will win.

  • Video link: The Science of Camera Sensors

We've been trying to invent a better sensor for a long time. You can find a lot of research in this field by googling "computational sensor" or "non-Bayer sensor". Even the pixel-shifting example mentioned earlier can be seen as an attempt to improve sensors with calculations.

The most promising stories of the last twenty years, though, come to us from plenoptic cameras.

To calm your sense of impending boring math, I'll throw in an insider's note: the latest Google Pixel camera is a little bit plenoptic. With only two pixels per cluster, it's still enough to calculate a fair optical depth-of-field map without needing a second camera like everyone else.

Plenoptics is a powerful weapon that hasn’t fired yet.

Plenoptic Camera

Invented in 1994 and first assembled at Stanford in 2004. The first consumer product, Lytro, was released in 2012. The VR industry is now actively experimenting with similar technologies.

A plenoptic camera differs from a normal one by only one modification: its sensor is covered with a grid of microlenses, each of which covers several real pixels.

If we place the grid and sensor at the right distance, we’ll see sharp pixel clusters containing mini-versions of the original image on the final RAW image.

  • Video link: Muted video showing RAW editing process

Obviously, if you take only the central pixel from each cluster and build the image from those alone, it won't be any different from one taken with a standard camera. Yes, we lose a bit in resolution, but we'll just ask Sony to stuff more megapixels into the next sensor.

That's where the fun part begins. If you take another pixel from each cluster and build the image again, you get another standard photo, only as if it were taken with a camera shifted by one pixel in space. Thus, with 10×10 pixel clusters, we get 100 images from "slightly" different angles.

The larger the cluster size, the more images we have, though at lower resolution. In a world of smartphones with 41-megapixel sensors we can neglect resolution a bit, but everything has a limit. We have to keep a balance.
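In code, slicing the RAW into those views is little more than a reshape. A minimal sketch, assuming an idealized sensor where each microlens covers an exact c×c block of pixels (real cameras also need calibration for microlens rotation, spacing and vignetting):

```python
import numpy as np

def subaperture_views(raw, c):
    """Split a plenoptic RAW of shape (H*c, W*c) into c*c shifted views.

    Taking pixel (i, j) from every cluster yields one full image, as if
    the camera had been shifted slightly; all (i, j) choices together
    give the c*c 'slightly different angle' images described above.
    """
    h, w = raw.shape[0] // c, raw.shape[1] // c
    grid = raw[:h * c, :w * c].reshape(h, c, w, c)
    # views[i, j] is the (h, w) image built from pixel (i, j) of each cluster
    return grid.transpose(1, 3, 0, 2)
```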

  • Link: plenoptic.info – about plenoptics, with python code samples

Alright, we’ve got a plenoptic camera. What can we do with it?

Fair refocusing

The feature that everyone was buzzing about in the articles covering Lytro was the ability to adjust focus after the shot was taken. "Fair" means we don't use any deblurring algorithms, but rather use only the available pixels, picking or averaging them in the right order.

A RAW photo taken with a plenoptic camera looks weird. To get the usual sharp JPEG out of it, you have to assemble it first. The result will vary depending on how we select the pixels from the RAW.

The farther a cluster is from the point of impact of the original ray, the more defocused that ray is. Because of the optics. To shift the image's focus, we only need to choose pixels at the desired distance from the original, either closer or farther.

The picture should be read from right to left as we are sort of restoring the image, knowing the pixels on the sensor. We get a sharp original image on top, and below we calculate what was behind it. That is, we shift the focus computationally.

The process of shifting the focus forward is a bit more complicated, as we have fewer pixels in those parts of the clusters. In the beginning, Lytro's developers didn't even want to let the user focus manually because of that; the camera made the decision itself in software. Users didn't like that, so the feature was added in later versions as "creative mode", but with a very limited refocus range for exactly that reason.
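Computationally, the simplest "fair" refocus over a stack of sub-aperture views is shift-and-average: move each view in proportion to its offset from the center, then average. A sketch (integer-pixel shifts for brevity; real implementations interpolate sub-pixel shifts):

```python
import numpy as np

def refocus(views, alpha):
    """Shift-and-average refocus over a (c, c, h, w) view stack.

    alpha = 0 keeps the captured focal plane; positive or negative
    values slide the focal plane by shifting each view in proportion
    to its offset from the central view before averaging.
    """
    c = views.shape[0]
    center = (c - 1) / 2
    acc = np.zeros(views.shape[2:], dtype=np.float64)
    for i in range(c):
        for j in range(c):
            dy = int(round(alpha * (i - center)))
            dx = int(round(alpha * (j - center)))
            acc += np.roll(views[i, j], (dy, dx), axis=(0, 1))
    return acc / (c * c)
```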

Depth Map and 3D using a single lens

One of the simplest operations in plenoptics is getting a depth map. You just need to gather two different images and calculate how objects are shifted between them. The greater the shift, the farther the object is from the plane of focus.
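A crude version of that calculation is classic block matching: for each pixel, find the shift that best aligns a small patch of one view with the other. A sketch (integer shifts and brute-force search; production pipelines interpolate to sub-pixel precision and are far faster):

```python
import numpy as np

def disparity_map(left, right, patch=7, max_shift=8):
    """Per-pixel shift between two sub-aperture views via block matching."""
    left = np.asarray(left, dtype=np.float64)
    right = np.asarray(right, dtype=np.float64)
    h, w = left.shape
    half = patch // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_shift, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            errors = [
                np.sum((ref - right[y - half:y + half + 1,
                                    x - s - half:x - s + half + 1]) ** 2)
                for s in range(max_shift + 1)
            ]
            disp[y, x] = int(np.argmin(errors))  # best-matching shift
    return disp
```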

Google recently bought and killed Lytro, but used its technology for VR and… the Pixel's camera. Starting with the Pixel 2, the camera became "a little bit" plenoptic, though with only two pixels per cluster. As a result, Google doesn't need to install a second camera like all the other cool kids; instead, it can calculate a depth map from a single photo.

What the top and bottom subpixels of the Google Pixel camera see. Source: Google
The depth map is additionally processed with neural networks to make the background blur more even. Source: Google
  • Link: Portrait mode on the Pixel 2 and Pixel 2 XL smartphones

The depth map is built from two shots shifted by one sub-pixel. This is enough to calculate a rudimentary depth map and separate the foreground from the background, so the latter can be blurred with some fashionable bokeh. The result of this stratification is then smoothed and "improved" by neural networks which are trained to improve depth maps (rather than to dream up the depth from scratch, as many people assume).

The trick is that we got plenoptics in smartphones almost free of charge. We were already putting lenses on these tiny sensors to increase the luminous flux at least somehow. Some Google patents suggest that future Pixel phones may go further and cover four photodiodes with one lens.

Slicing layers and objects

You don’t see your nose because your brain combines a final image from both of your eyes. Close one eye, and you will see a huge Egyptian pyramid at the edge.

The same effect can be achieved with a plenoptic camera. By assembling shifted images from pixels of different clusters, we can look at an object as if from several points, just as our eyes do. That gives us two cool opportunities. First, we can estimate the approximate distance to objects, which lets us easily separate the foreground from the background, as in real life. Second, if an object is small, we can remove it from the photo completely, since we can effectively look around it. Like a nose. Just clone it out. Optically, for real, with no Photoshop.

Using this, we can cut out trees between the camera and the object or remove the falling confetti, as in the video below.

“Optical” stabilization with no optics

From a plenoptic RAW, you can make a hundred photos, each shifted by several pixels across the sensor area. Accordingly, we have a tube the diameter of the lens within which we can move the shooting point freely, thereby offsetting the shake of the image.

Technically, stabilization is still optical, because we don't have to calculate anything; we just select pixels in the right places. On the other hand, any plenoptic camera sacrifices megapixels in favor of plenoptic capabilities, and any digital stabilizer works the same way. It's nice to have as a bonus, but buying it for this alone is costly.
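In code, this "stabilizer" is nothing but view selection over the same (c, c, h, w) stack as before. A sketch, assuming the measured shake is already expressed in units of the per-view baseline:

```python
import numpy as np

def stabilized_frame(views, shake_dy, shake_dx):
    """Pick the sub-aperture view whose viewpoint offset cancels the shake.

    No pixels are recomputed, merely re-selected; the correction is
    clamped to the edge of the cluster (the 'window for movement').
    """
    c = views.shape[0]
    center = (c - 1) // 2
    i = int(np.clip(center - round(shake_dy), 0, c - 1))
    j = int(np.clip(center - round(shake_dx), 0, c - 1))
    return views[i, j]
```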

The larger the sensor and lens, the bigger the window for movement. The more capabilities the camera has, the more ozone holes from supplying this circus with electricity and cooling. Yeah, technology!

Fighting the Bayer filter

The Bayer filter is still necessary, even in a plenoptic camera; we haven't come up with any other way of getting a color digital image. But using a plenoptic RAW, we can average the color not only over a group of nearby pixels, as in classic demosaicing, but also over dozens of copies of the same point in neighboring clusters.

Some articles call this "computable super-resolution", but I would question that. In fact, we first reduce the real resolution of the sensor a dozen times over in order to proudly restore it again. You'd have to try hard to sell that to someone.

But technically it’s still more interesting than shaking the sensor in a pixel shifting spasm.

Computational aperture (bokeh)

Those who like to shoot bokeh hearts will be thrilled. Since we know how to control the refocus, we can move on and take only a few pixels from the unfocused image and others from the normal one. Thus we can get an aperture of any shape. Yay! (No)

Many more tricks for video

So, not to move too far away from the photo topic, everyone who’s interested should check out the links above and below. They contain about half a dozen other interesting applications of a plenoptic camera.

  • Video link: Watch Lytro Change Cinematography Forever

Light Field: More than a photo, less than VR

Usually, the explanation of plenoptics starts with light fields. And yes, from a science perspective, a plenoptic camera captures the light field, not just a photo. Plenus is Latin for "full", i.e., collecting all the information about the rays of light. Just like a parliamentary plenary session.

Let’s get to the bottom of this to understand what a light field is and why we need it.

Traditional photos are two-dimensional. When a ray hits the sensor, the corresponding pixel simply records its intensity. The camera doesn't care where the ray came from, whether it strayed in from the side or was reflected off another object. The photo captures only the point of intersection of the ray with the surface of the sensor. So it's kinda 2D.

Light field images are similar, but with a new component — the origin and angle of each ray. The microlens array in front of the sensor is calibrated such that each lens samples a certain portion of the aperture of the main lens, and each pixel behind each lens samples a certain set of ray angles. And since light rays emanating from an object with different angles fall across different pixels on a light field camera’s sensor, you can build an understanding of all the different incoming angles of light rays from this object. This means the camera effectively captures the ray vectors in 3D space. Like calculating the lighting of a video game, but the other way around — we’re trying to catch the scene, not create it. The light field is the set of all the light rays in our scene — capturing both the intensity and angular information about each ray.

There are a lot of mathematical models of light fields. Here’s one of the most representative.
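One of the most common is the two-plane parameterization, sketched below in the notation of Ren Ng's Fourier-slice photography work: index each ray by where it crosses the lens plane (u, v) and the sensor plane (s, t), a distance F apart.

```latex
% An ordinary photograph integrates every ray over the whole aperture:
E_F(s, t) = \frac{1}{F^2} \iint L_F(u, v, s, t) \, du \, dv

% Refocusing onto a virtual sensor at F' = \alpha F just re-parameterizes
% the same rays before integrating, i.e. any such photo is computable:
E_{\alpha F}(s, t) = \frac{1}{\alpha^2 F^2} \iint
    L_F\!\left(u, \, v, \, u + \frac{s - u}{\alpha}, \, v + \frac{t - v}{\alpha}\right) du \, dv
```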

The light field is essentially a visual model of the space around the camera. We can easily compute any photo within this space mathematically. Point of view, depth of field, aperture — all of these are computable; however, one can only reposition the point of view so much, as determined by the entrance pupil of the main lens. That is, the amount of freedom with which you can change the field of view depends upon the breadth of perspectives you've captured, which is necessarily limited.

I love to draw an analogy with a city here. Photography is like your favorite path from your home to the bar you always remember, while the light field is a map of the whole town. Using the map, you can calculate any route from point A to B. In the same way, knowing the light field, we can calculate any photo.

For an ordinary photo it’s overkill, I agree. But here comes VR, where light fields are one of the most promising areas of development.

Having a light field model of an object or a room allows you to see that object or room from multiple perspectives, with motion parallax and other depth cues like realistic changes in textures and lighting as you move your head. You can even travel through the space, albeit to a limited degree. It feels like virtual reality, but it's no longer necessary to build a 3D model of the room. We can 'simply' capture all the rays inside it and calculate many different pictures from within that volume. Simply, yeah. That's the part we're all struggling with.

  • Link: Google AR and VR: Experimenting with Light Fields

Vasily Zubarev is a Berlin-based Python developer and a hobbyist photographer and blogger. To see more of his work, visit his website or follow him on Instagram and Twitter.

Articles: Digital Photography Review (dpreview.com)

 

Is the Sony ZV-1 the best vlogging camera, and what’s it like for photography?

07 Jun

For vlogging, and beyond?

The Sony DC-ZV-1 is an interesting camera. It re-arranges some familiar components into a camera explicitly designed with vloggers in mind.

However, while it’s not part of the RX100 series (or even part of the Cyber-shot lineup), enough of its technology comes from those cameras that we think some people will at least consider it as a stills camera.

We’re going to look at how the ZV-1 stacks up against the Canon PowerShot G7 X Mark III and the Sony Cyber-shot DSC-RX100 V (specifically the ‘M5A’ variant), first as tools for vlogging and then as compact stills cameras.

For vlogging vs. Canon G7 X III

We’ve already detailed the vlogging-specific features that the ZV-1 offers, and many of these give it a clear advantage over the Canon G7 X III, when it comes to shooting facing-the-camera video.

Underpinning most of the ZV-1's benefits over the G7 X III is its autofocus system. Part of this is the inclusion of phase-detection elements, meaning the camera can assess depth before refocusing the lens (critical for keeping video in focus without too much hunting), but part is Sony's AF algorithms, which have become very, very good at both subject tracking and face/body recognition. Other features distinguish the two cameras, but dependable autofocus is perhaps the most compelling.

Beyond that, the ZV-1's other key benefit is its vlogging-friendly microphone setup. The three-capsule mic is designed specifically to pick up the voice of someone addressing the camera, and the results are much better than the G7 X III's.

For vlogging vs. Canon G7 X III

The ZV-1's fully articulating screen is also likely to be preferable to the G7 X III's flip-up screen for most vloggers. The ZV-1's screen doesn't extend to be totally in line with the camera body (it's angled 4 degrees back, even when fully pulled forward), but that's not a difference likely to have any real-world impact.

Both cameras shoot 4K at 30p or 24p (or 25p in PAL regions), should you decide your vlog would benefit from a more cinematic look.

In terms of endurance, Canon says the G7 X III can record 4K footage for up to 10 minutes per clip, whereas the ZV-1 by default stops after 5. However, disengaging the overheat warnings on the Sony removes this restriction.

For vlogging vs. Canon G7 X III

The G7 X III can broadcast straight to YouTube if it's connected to a wireless network (including your phone, if it can operate as a hotspot). However, the utility of this feature is a little questionable. For a start, how often will you be vlogging somewhere that has Wi-Fi but no computer you could stream from instead? More pressingly for most of us, YouTube only allows live streaming from mobile devices (a category that includes the G7 X III) if you have 1000+ followers on your account.

This isn’t a big hurdle if you’re already established to any degree, but it reduces the value of the feature if you’re trying to choose a camera to start vlogging with. If you’re looking for a device to start an empire from, both can livestream if you connect them to a computer (though the ZV-1 is only promising Microsoft Windows support at the moment).

Both can send pre-recorded video footage directly to a smartphone over Wi-Fi, in Full HD or 4K.

For vlogging vs. the iPhone

Another rival device for vlogging is a good smartphone, not least because there’s a chance that most of us already have one.

In their recent video, DPRTV’s Chris and Jordan used an older iPhone XR to shoot some footage alongside the Sony. Its lens offers a similarly wide angle-of-view to the Sony, while the iPhone 11 goes wider. The selfie camera on the iPhone 11 has focus fixed in a way that covers vlogging distances, but has no way to imitate Sony’s ‘Product Showcase’ AF mode if you want to focus on something nearer to the camera.

As Chris discovered when testing the two side-by-side, the iPhone appears to be rather better at stabilizing its footage than the ZV-1. And, for all Sony’s talk about improved skintones, the iPhone version looks pretty good, to our eyes.

Ultimately, while Sony appears to have more money than most camera companies to develop technologies such as machine-learning-derived AF systems, it seems to be some way behind Apple, which has been working hard to apply processing power and extreme cleverness to the output of its phones for several generations. The iPhone’s exposure and processing, while perhaps edging towards over-tone-mapped ‘bad HDR’ territory, generally looks really good. You’d have to shoot Log or HLG and color grade the ZV-1’s footage to get a comparable result.

The larger sensor of the ZV-1 should give it an edge when it comes to indoor video and, of course, it can provide a shallower depth-of-field look (which phones don’t yet even attempt to simulate in video mode) but is that enough to counteract the convenience offered by an internet-connected smartphone?

For stills vs. RX100 VA

The ZV-1 isn’t supposed to be a stills camera, in the sense that Sony isn’t particularly promoting it that way. But it shares enough with the RX100-series that we’d expect at least some people to see it as a means of getting something like a viewfinderless RX100 V without having to forego multiple generations of improvements by opting for the RX100 II.

Instead, in many respects the ZV-1 could be seen as an RX100 V without a viewfinder but with all the updates of the RX100 VII (including features, like a touchscreen, that weren't added in the M5A revision of the RX100 V). These updates include what Sony calls 'Real-time Tracking' and 'Real-time AF,' which refer to the camera's ability to track a subject, switch to face or eye AF if that subject is a person, and continue to track them even if they face away from the camera.

The RX100 VA's AF system is recognizably older: tracking isn't as sophisticated, eye AF requires you to hold down a custom button to activate it, and there's a separate (and even less good) tracking system in video mode.

So what else do you gain or lose?

For stills vs. RX100 VA

As you’d expect, the ZV-1 omits a number of features that we’d expect from an RX100-series camera. There’s no EVF, no built-in flash and no control ring around the lens. There’s also no exposure mode dial (it’s replaced by a Mode button).

But in their place you get a more prominent grip to hold the camera with and a flash hotshoe if you want to attach an external flash or other accessories. And, as we say, you get another feature that the RX100 V was missing: a touchscreen.

The more prominent [REC] button on the top of the ZV-1 allows the removal of the tiny version set into the thumb rest of the RX100 V. Neither camera lets you re-purpose this button if you’re really not interested in video.

The control ring around the lens, the built-in flash and the EVF all mean the RX100 V is the better stills camera if you're an experienced photographer who wants flexibility and some direct control; but with its touchscreen and superior AF system, the ZV-1 might be the better point-and-shoot.

For stills vs. Canon G7 X III

We’ve already seen that the ZV-1’s autofocus and mics give it a clear benefit over the Canon G7 X III as a vlogging camera. But given they’re similarly priced and are both 1″ sensor compacts with short, bright lenses and no viewfinder, it’s probably fair to see how they stack up on the stills side of things.

The ZV-1’s autofocus benefits continue to shine in this situation, as does its lens, which is significantly sharper, particularly at the wide-angle end. However, the G7 X III’s 24-100mm range is appreciably longer than the 24-70-ish equiv reach of the Sony.

The Canon also has a dedicated exposure compensation dial, a clicking control ring around the lens and a built-in flash, which help make it a more engaging camera to use, if you wish to take control over your photography. We also found the grip – designed solely for holding the camera facing away from you – more comfortable than the one on the ZV-1.

Overall

The ZV-1 promises to be a more capable vlogging camera than any other we’ve seen. Its generally excellent (and, crucially, dependable) autofocus is a huge part of this, and features such as product showcase mode have clearly been carefully developed to make this capability as easy to exploit as possible.

But building the ZV-1 primarily from existing RX100 components does appear to have limited the camera somewhat. The 24mm-equiv wide-angle capability (26mm equiv by the time the slight crop of the 4K video mode has been factored in) isn't as wide as some users prefer for to-the-camera presenting, especially if you then need to engage the camera's digital stabilization, which crops in still further.

Similarly, while the G7 X III doesn't offer any audio monitoring either, it does seem odd that Sony hasn't gone to the effort of providing a means to do so, whether via USB or Bluetooth. Instead, the ZV-1 is limited to capabilities we've seen in existing RX100 models.

But, for all that, the Sony ZV-1 is the most overtly vlogging-focused camera on the market. And although it's not intended as one, we think it might also be a better point-and-shoot camera than the RX100 V. The more prominent grip, the touchscreen and the removal of the control ring may also make it a better (and less expensive) family camera.

For vlogging though, we suspect that the ZV-1’s biggest rival will be high-end smartphones, which offer a lot of capability without the need to buy a separate device.

Articles: Digital Photography Review (dpreview.com)

 

Let it roll: why camera makers are going to keep adding video

07 Jun
A lot of the pre-launch hype around Canon’s EOS R5 has focused on its video prowess, but why do features like 8K keep getting added to stills cameras?

Some of the most dramatic improvements in recent cameras have been in the realm of video, leaving many stills photographers unimpressed. But there are some good reasons why cameras keep getting better video, some equally good reasons we’re unlikely to see many ‘pure photography’ cameras in future, and even if we did, there’s very little reason to think such a camera would be any cheaper.

Why the focus on video?

One of the main reasons all the camera makers seem focused on video is that it's an area with clear room for improvement. Image sensors are now very, very good: efficiency is very high and read noise is very low, meaning we're unlikely to see the big generational steps forward in image quality that we saw in the earlier days of digital photography.

Instead, most of the progress being made is in terms of readout speed and processing power. We’re seeing these manifest as better autofocus performance, multi-shot camera modes and improved video. This is also why we spend more time discussing AF and video in our reviews: because they’re areas of significant progress and difference between models.

Understandably, we see a lot of stills photographers saying they don’t want to have to pay for features they don’t need. But it’s not that simple:

You’re already paying for the hardware

Pitched as ‘The Ultimate Photo Shooting Camera’ at launch, the Panasonic G9 gained a major mid-life video upgrade, to broaden its appeal.

The faster readout and processing that help provide higher-res and better bitrate video are the same technologies that underpin the faster, more subject-aware autofocus improvements we’ve seen in the past few years. The same is broadly true of the multi-shot high res, focus stacking and re-focus modes that have been added: so you won’t lower the hardware costs by leaving video out.

You may not be paying for the development

On top of this, the very reason manufacturers are committing development resources to video is because they hope it will broaden a product’s appeal beyond the (declining) market for traditional stills cameras. YouTube and social media have made video sharable, which makes video capability more desirable. If adding video features means more cameras get sold, then each buyer shoulders a little less of the development cost.

Also, some realms of professional photography now demand high-end video capabilities, so much of the development work is being conducted for that audience, and is then trickling down.

A separate, still-only variant would cost more, not less

Don't fall into the trap of assuming you could make a cheaper model with these extra features left off. Designing and developing two versions of a product would cost more, even if they only differed in terms of firmware, since you'd have to conduct testing and quality assurance on two versions of the firmware, then continue to develop them in parallel in the event of updates.

A camera with fewer features wouldn't be cheaper. Even offering video removal as post-purchase firmware would add to costs: would you be willing to pay to have video removed?

Each additional camera model then incurs marketing expenses, to tell the world that it exists and to communicate the differences. It then adds to production planning and supply chain complexity: you need to balance production capacity between the two models, then make sure that the right number of stills-only and hybrid models end up going to each region and each retailer.

We’ll still see stills-only models

Not every new camera will have video, but those that don’t will be in the minority: Leica has some high-end video capability in models where it makes sense.

Despite all these factors, we’ll still see some stills-only cameras. For instance, Leica is likely to continue to offer stills-only rangefinder cameras (even though some models have featured video), and adding high quality video isn’t likely to be a priority for Phase One’s medium format backs.

There’s a mixture of factors at play. Adding video might reduce, rather than broaden, appeal for a product where focus – whether it’s photographic tradition or ultimate stills quality – is a selling point. And this goes beyond the question of whether a video-enabled version would be a satisfying (or even satisfactory) video camera.

Let it roll

But outside these rarefied niches, video is here to stay. Hence Leica’s SL cameras tout pretty impressive video specs and Panasonic’s more stills-focused G9 received a major boost to its video spec, mid-life, to expand its appeal.

If well implemented, video features need not get in your way, allowing a more streamlined stills experience than in recent generations of camera.

At which point, rather than rail against the (almost) inevitable, you may find it more productive to argue for better video implementation, so that the video features don’t get in your way. Or perhaps, you could give video a try. Who knows? You might enjoy it.

Articles: Digital Photography Review (dpreview.com)

 

GoPro releases GoPro Labs, a beta update that adds experimental features to your HERO8 Black

06 Jun

GoPro has announced the release of GoPro Labs, a new program that allows GoPro HERO8 Black owners to sign up as beta testers to test out experimental features that haven’t yet made their way into final products. In GoPro’s own words, ‘Think of GoPro Labs as an insider look at innovative features our top engineers are playing with.’

The first release of GoPro Labs includes a pair of features that were first developed via internal [hackathons](https://en.wikipedia.org/wiki/Hackathons): ReelSteady GO optimization and QR codes for camera control.

Earlier this year, GoPro acquired ReelSteady, a team of FPV drone operators and visual effects experts that has developed some of the most advanced stabilization and image-correction software out there for GoPro cameras. Nothing had come from the acquisition until now, but the ReelSteady GO optimization in the GoPro Labs firmware update allows GoPro HERO8 Black owners to optimize the in-camera rolling shutter correction to work better with ReelSteady's post-production software.

Below is an example video from ReelSteady showing their image stabilization technology at work:

QR code camera control in the GoPro Labs firmware update is exactly what it sounds like. By creating custom QR codes with embedded commands, GoPro HERO8 Black owners can add new functions to their action cam without the need for Wi-Fi connectivity. Below are a few examples of features you can tweak via QR code:

  • Wake-up timer for remote start capture
  • Save favorite modes as a visual preset/QR code
  • Motion detection start/stop — only capture video when something is happening
  • Speed detection start/stop — use GPS to determine your speed and automatically start capture at a defined speed
  • Camera scripting — e.g. shoot a time-lapse of a construction site but only during daylight hours (and many other detailed camera controls)
  • Personalize your GoPro with owner information
  • Larger chapters for fewer files when taking long video captures — e.g. 4GB chapters will increase to 12GB

GoPro has created and shared ten pre-built command QR codes with variables, but if you’re feeling adventurous, you can also create your own using GoPro’s list of action commands and settings commands. Additional support can be found on the GoPro Labs community within the GoPro forums.
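Generating such a code yourself is straightforward. Here's a hypothetical sketch using the third-party `qrcode` package; the command string below is a placeholder, and the real syntax lives in the command lists linked above:

```python
# pip install qrcode[pil]
import qrcode

# Placeholder only -- substitute a real command string from GoPro's
# action/settings command documentation.
command = "<GoPro Labs command string>"

img = qrcode.make(command)     # build the QR code as a PIL image
img.save("gopro_command.png")  # print it, or show it on a phone screen
```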

To show off what's possible with the new functionality, GoPro explained how the QR code camera control feature was used by GoPro Technical Fellow (and creator of the feature) David Newman, who worked alongside Northrop Grumman Corp. to capture the launch of a resupply mission to the International Space Station. Since the GoPros had to be set 72 hours in advance and left untouched, he teamed up with his daughter to trigger each camera with a QR code before securing them to the launch pad. As the video below attests, the results worked perfectly, despite none of the action cams having external power or displays.

A post shared on Instagram by John + Sara (@reedpi): 'GoPro rocks. Raw video. Raw audio. Straight from the camera. Just some simple cuts. That's a wrap! #gopro #antares #northropgrumman #iss #rocket #nofilter #okaymaybetherewasaNDfilter'

GoPro says these features could one day be unveiled alongside a new camera, but also notes there’s a chance ‘these features may never make it to a camera release.’

If you happen to have a GoPro HERO8 Black on hand, you can read through the installation instructions and download the GoPro Labs firmware update on GoPro’s website. Below is a great rundown of the new features from YouTuber DC Rainmaker:

If you end up creating anything interesting with the GoPro Labs firmware, let us know in the comments below or contact us via our feedback form!

Articles: Digital Photography Review (dpreview.com)

 

Fujifilm X-T4 sample gallery (DPReview TV)

06 Jun


Our team from DPReview TV has been shooting a production Fujifilm X-T4 all week. Check out these new sample photos from the Canadian Rockies and the beautiful city of Calgary, Alberta.

Articles: Digital Photography Review (dpreview.com)

 

The gear that got away: reader responses

06 Jun

Gear that got away: reader responses

After sharing our own stories of selling gear and later coming to regret it, we heard from our readers with their own tales of woe – and we weren't quite prepared for the emotional rollercoaster ride of reading your comments! From an unlikely reunification, to a camera left behind in combat, to a sturdy lens that wouldn't quit (until a spider moved in), your stories have all the excitement of a summer blockbuster. Take a look at a selection of our favorites.

Yashica Electro 35

CMCM: For purely nostalgic reasons, somehow and somewhere I lost track of my very first camera… a rangefinder Yashica Electro 35, bought at the Cu Chi, Vietnam PX in early 1967. This was apparently the first electronically controlled camera (hence the name 'Electro'), in which you selected the aperture and the camera automatically chose the shutter speed. It had an excellent fixed 45mm f1.7 lens, and my copy had a non-working light meter for most of its life. I used it sporadically until about 1979, when I got a Canon AE-1.

I’ve recently been digitizing old slides, and I’ve been amazed at how lovely the photos from the old Yashica could be. Wish I knew what happened to it! However, for fun I recently found an absolutely mint one that appears to have never been used, and even the light meter still works!

Panasonic LX100

The Jimmy 86: I'll always somewhat regret selling my LX100. I was very much a fledgling photographer when I got it (and arguably still am), but I took some of my favorite photos with it. The aspect ratio selection switch was just a dream and the camera was essentially good at everything.

I’ll likely never sell a camera when I upgrade again.

Olympus Trip 35

Photo by Marc Lacoste via Wikimedia Commons

BoborTwo: My 1st ‘proper’ camera, an Olympus Trip 35 (the David Bailey one, as my mum used to say).

It opened my camera eye, and led to me selling photos… but stupidly, I traded it in for an OM10, and it was gone forever. I tried to get it back some 2 years later, but it had been sold on to a random customer at the Jessops where I traded it.

I deeply miss it, I knew its limitations – there were many, but it took a long time for me to find something I loved as much – a Minox 35 GT – I will never let it go.

Nikon D700

philm5d: My Nikon D700. I had taken it to Scotland on honeymoon. Looking at the pics, I wished I hadn't got rid of it – lovely images and also a degree of sentimentality. A chap in Europe had bought it. Two years later I found his eBay name on an old email and offered to buy it back. He wrote back to say sorry, he'd sold it on, but if he came across the buyer's details he'd tell me. Six months later he wrote to say he'd found owner two's username details, back in the UK.

So I messaged guy no. 2. He said sorry, he didn't want to part with it. I put his eBay name in favourite sellers, and two years later he's selling a guitar. I offer to buy the guitar AND the camera and lo and behold he offers to list the camera at a crazy price (to dissuade others) + "offers" and tells me what he'd accept, which was £450. So now I have my camera back after its travels. It's slightly more worn with 45,000 clicks but works perfectly. The serial number tallies with my honeymoon pics and I am happy. Beat that if you can!

Vintage photography magazines

valosade: Complete runs of Modern Photography and Popular Photography magazine from the 70s and 80s. When I moved, I put them out with the paper collection. I was insane – especially Modern, which I liked to read every day…

felix from the suburbs: In my case, I had several decades' worth of Modern Photography and Popular Photography nicely stored in cardboard boxes in the basement. We went up north one weekend and came back to a burst pipe in the basement right over where those boxes were kept. The magazines were turned to mush. Much heartbreak that day.

Leica M-2R

Leica M2 photographed by E. Wetzig via Wikimedia Commons

Rodger Kingston: It was 1973, and I was newly married and new to photography, still on my first “serious” camera, a Minolta SRT 101 SLR (which I eventually ruined by backing into a swimming pool at a wedding rehearsal, but that’s another story).

A friend offered me a new Leica M2-R with a Dual Range Summicron and Close-Focus Attachment for the ridiculous price – if I remember correctly – of $250, with the proviso that if I didn’t like the camera, I had to offer it back to him at the price he sold it to me for.

A complete newbie, I’d gotten used to the tunnel vision of an SLR, and found the inscribed frame of the Leica rangefinder unsettling to use, so after a short time I sold it back to him.

Now, a lifetime later, my favored cameras have been rangefinder/viewfinder style for many years (including a few Leicas), but none as sweet as that M2-R that I let slip away because I didn’t have the sense to learn how to see with it.

Olympus C-8080

Photo by photophile with Olympus C-8080

photophile: I purchased the huge, brick-like C-8080 in early 2005 – and loved it straight away. THAT lens was astonishing at resolving detail. The supermacro mode was to die for, the flip-out LCD was really handy for shooting flowers & bugs at ground level, and those direct on-body buttons to change metering mode and shooting mode etc – wow! BUT – it was slow to focus, RAW write speed was a snail's pace, and it was a bit noisy above ISO 200. So when the E500 came out, I thought it was time to 'upgrade.' As the SLR wasn't cheap, I sold the C-8080.

The regret was immediate. Yes the new toy was great – but it seemed a bit sterile, too easy to use! Bizarrely, I actually MISSED having to fiddle and fidget with the C-8080, I especially missed the on-body buttons – hated having to trawl through menus on the E500.

Bought a used/abused unit in 2009. And I still have it. Love pressing buttons and turning dials, making it whirr and chirp as it struggles to lock focus. Bit like me!

Rollei 2.8

Photo by Sputniktilt via Wikimedia Commons

mikegc: I took my Rollei 2.8 to Vietnam in 1969. I was a combat photographer with the First Infantry Division. During an assault, the Rollei took a hit as I was running. The bullet passed through the body of the camera, shattering the viewfinder lens and the focusing control. I left it in the jungle and I'm very sorry I did that. It would make a great conversation piece.

Yashica Penta J

Photo by Rick Oleson

ikon44: In my student years in the mid-1960s I sold my UK-made Corfield Periflex 3a for a Yashica Penta J, a 35mm film camera with a clip-on selenium light meter (that was the good decision). The meter clipped on over the shutter speed dial; you chose a shutter speed and the meter gave you the 'correct' f-stop. It was really easy to use and I found it very reliable.

I sold it in 1970 for an Olympus Pen F and have kicked myself ever since. Many of my friends had (and raved about) the half-frame Olympus Pen F. I sold the Yashica for the Pen F and have never recovered from the mistake of thinking I could do better with 'someone else's' idea of the right camera. I now have a Nikon D750, a D610 and a Fujifilm X-T2 and am happy with them all… each for its own purpose.

Panasonic Lumix GF3

Wingsfan: Laugh if you want, but I traded a Lumix GF3 in on an Olympus E-M1 Mark II when they were offering $200 for any camera traded in on the E-M1 Mark II. I don't regret it, because the Olympus is a much better camera, but I forgot how simple and fun the GF3 was to use. Plus, even though my daughter got to inherit my Lumix G5 out of the deal (she had been using the GF3), she still reminds me to this day how much more she likes the GF3.

Canon PowerShot G12

davesurrey: A while ago I made a spur of the moment decision to reduce my camera collection and sold, amongst others, a Canon G12.

Then every time I saw the space on the shelf where that little fellow had sat, I felt nostalgia overtake me.

It was far from the best camera I possessed, even then, and it wasn’t even the one I instinctively grabbed when I went out. But there was something about it that I enjoyed.

So I solved the problem and now have a lovely G12 sitting on my shelves again which does get the occasional use.

Was it logical buying another again? Of course not, but what’s logic got to do with passion.

Canon EF 100mm F2 USM

Photo by Ashley Pomeroy via Wikimedia Commons

aceflibble: …For more technical reasons, my first copy of the Canon EF 100mm f/2 USM. That lens was absurdly sharp and well-corrected and had, by far, the fastest and most confident autofocus on the 1Ds cameras. I stupidly sold it when I got the EF 85mm f/1.2L II and immediately missed it. Tried the 70-200 f/2.8 IS as well but still wasn’t happy, so sold both the 70-200 and 85 and bought another copy of the 100. Sadly, that copy was nowhere near as good as my original. Sold it and got a third copy, a bit better but still not quite there. Somewhere out there is a world-class copy of the 100mm f/2 and I hope whoever has it appreciates what they have.

Nikon FM

Photo by Callum Lewis-Smith via Wikimedia Commons

tcab: Traded my Nikon FM film camera & 105mm lens for a film Pentax point and shoot before an overseas holiday. Walking out of the shop I happened to look around and saw the shop owner fondling the Nikon gear with a huge grin on his face. I thought maybe I had made a mistake, but left it at that and went on my holiday.

Twenty or more years later I look back and think – what was I thinking! I loved that camera – it was my first, too. Sure I have all sorts of better cameras now, but still regret selling that classic Nikon.

Articles: Digital Photography Review (dpreview.com)

 