
Posts Tagged ‘work’

The 500 Rule in Photography: What Is It and How Does It Work?

23 Sep

The post The 500 Rule in Photography: What Is It and How Does It Work? appeared first on Digital Photography School. It was authored by Jeremy Flint.


Do you want to create beautiful photos of the night sky? The 500 Rule is a great astrophotography technique that’ll help you consistently render sharp stars. Best of all, it’s ultra-easy to use.

So if you’re new to the 500 Rule, you’re in luck; this article will guide you through the main principles, explain how to use it, and highlight the importance of applying it in your nighttime photography.

Let’s get started.

What is the 500 Rule?

The 500 Rule is a popular guideline for photographing stars. Its main aim is to capture stars as razor-sharp pinpoints rather than star trails – by helping you calculate the longest shutter speed you can use without noticeable star streaking.

You see, photographing sharp stars – as opposed to slight star trails – requires a reasonably fast shutter speed. If your shutter speed gets too slow, your stars will start to streak. And while this can look nice when done carefully, unintentional star trails can become messy.


Why is the 500 Rule important?

Truthfully, the 500 Rule, despite its name, is really more of a guideline. But it’s still plenty important, because it helps you capture striking images of the night sky without streaky stars.

You see, when you’re photographing at night, your camera is often starved for light – and so the longer you can make the exposure, the better. But go too long and you get star streaks, which is where the 500 Rule comes in; it provides you with a cutoff point that you can’t (or shouldn’t) go past when doing astrophotography.

The 500 Rule: the basic calculation

So, how does the 500 Rule work?

In principle, the 500 Rule is easy to calculate. Simply take 500 and divide it by the focal length of your lens. The result is your maximum shutter speed in seconds.

For example, when using a lens with a focal length of 24mm, you would divide 500 by 24, which gives you 20.8 – or a maximum shutter speed of about 20 seconds (round down to be safe).

And when using a 50mm focal length, you would divide 500 by 50, for a maximum shutter speed of 10 seconds.

It is important to remember, however, that the 500 Rule is only an approximation of the best exposure time to use. It won’t always guarantee that your images will be free of star trails (and later on in this article, I discuss some situations where the 500 Rule can fail).

An image with star trails; note that I did not use the 500 Rule.

When to use the 500 Rule

The most popular time to use the 500 Rule formula is when shooting night scenes of static stars or the Milky Way. When photographing the night sky, the conditions are very dark, and you will need to adjust your camera settings to compensate for the low light. Usually, if you want to capture a bright enough image of a night scene, you will need to increase the exposure time accordingly (and you should keep the ISO low to minimize noise).

This is exactly the time to apply the 500 Rule. By using the maximum shutter speed, you’ll get sharp stars – and you’ll also get a well-exposed image with the minimum amount of noise.

An image using the 500 Rule (without star trails).

Generally, you’ll want to use a fairly wide-angle lens when photographing the night sky because it’ll offer a broader field of view. I recommend a shorter focal length lens such as a 14mm, 16mm, or 24mm to capture more stars in the scene.

For your convenience, I’ve calculated the maximum exposure times for each of these focal lengths (a quick calculator sketch follows the list):

  • 14mm | 500/14 = 35 seconds
  • 16mm | 500/16 = 31 seconds
  • 24mm | 500/24 = 20 seconds
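If you’d rather not do the math in your head, the rule is trivial to script. Here’s a minimal Python sketch of the same calculation (the function name and the round-down behavior are my own choices, not part of the rule itself):

```python
def max_shutter_500(focal_length_mm: float) -> float:
    """Longest exposure (in seconds) before stars noticeably streak,
    per the 500 Rule, on a full-frame camera."""
    return 500 / focal_length_mm

# The three worked examples from the list above; rounding down is the
# safer choice, since a shorter exposure only makes stars sharper.
for fl in (14, 16, 24):
    print(f"{fl}mm -> {int(max_shutter_500(fl))} seconds")
# 14mm -> 35 seconds
# 16mm -> 31 seconds
# 24mm -> 20 seconds
```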

Dealing with crop-sensor and Four Thirds cameras

The 500 Rule is a great concept, one that you can use with any camera. However, the basic 500 Rule calculation mentioned above applies solely to full-frame cameras, and you’ll need a different formula for crop-sensor and Four Thirds cameras.

If you use a crop-sensor or Four Thirds camera, the maximum shutter speed can be calculated as follows:

500 ÷ (crop factor × focal length)

So first multiply the focal length by the crop factor (which gives you the effective focal length). Then divide 500 by the result.

For a Canon APS-C camera and a 24mm lens, this would be 500 ÷ (1.6 × 24) = 13 seconds.

For a Nikon APS-C camera and a 24mm lens, it works out to 500 ÷ (1.5 × 24) = 14 seconds.

The Four Thirds crop factor is 2, so with a 24mm lens, you get 500 ÷ (2 × 24) = 10 seconds.
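Extending the earlier sketch, the crop-sensor version just folds the crop factor into the divisor. The crop factors below are the commonly quoted ones (1.6x Canon APS-C, 1.5x Nikon APS-C, 2x Four Thirds); check your own camera’s specifications:

```python
def max_shutter_500_crop(focal_length_mm: float, crop_factor: float = 1.0) -> float:
    """500 Rule generalized for crop sensors: 500 / (crop factor x focal length)."""
    return 500 / (crop_factor * focal_length_mm)

# The three examples from the text, all with a 24mm lens:
for body, crop in (("Canon APS-C", 1.6), ("Nikon APS-C", 1.5), ("Four Thirds", 2.0)):
    print(f"{body}: {max_shutter_500_crop(24, crop):.0f} seconds")
# Canon APS-C: 13 seconds
# Nikon APS-C: 14 seconds
# Four Thirds: 10 seconds
```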


Should you always use the 500 Rule for astrophotography?

You shouldn’t always use the 500 Rule. Sometimes, the suggested maximum shutter speed can still result in star trails in your images. For example, according to the 500 Rule, a 12mm lens should allow exposures of around 40 seconds (500/12 ≈ 42) without star trails, yet an exposure time of over 30 seconds can show star movement. The same is true of an ultra-wide 14mm lens.

So if you want to record sharp stars, you may want to avoid using the 500 Rule with such lenses.

Alternatively, you may be someone who prefers to shoot star trails. If that’s the case, there is no need for you to use the 500 Rule. Instead, use an ultra-long shutter speed to record the curve of the stars as they move. You’ll generally need to apply a shutter speed of 30 seconds up to an hour or more.

(The longer the exposure, the larger the star trail!)


Also, the 500 Rule does not take into consideration other factors such as light pollution, haze, and the angle of the stars. Therefore, even after making your 500 Rule calculation, it’s best to experiment with the shutter speed for optimal results.

The 500 Rule: final words

The 500 Rule is a great technique, and you can use it to achieve beautifully sharp shots of stars. It’s a clever method of finding the maximum exposure based on your camera type and lens focal length, and it really is a brilliant way of improving your photos of the Milky Way and the night sky.

If you own an APS-C or Four Thirds camera, be sure to adjust the calculation for the relevant crop factor. Above all, enjoy shooting the night sky!

Now over to you:

What do you think of the 500 Rule? Do you plan to use it in your astrophotography? Do you have any tips? Share your thoughts in the comments below!




Digital Photography School

 
Comments Off on The 500 Rule in Photography: What Is It and How Does It Work?

Posted in Photography

 

(Select) Android smartphones now work with Profoto’s professional lights via Profoto Camera app beta

02 Dec

One of the most-asked questions Profoto received after releasing its Profoto Camera app for iOS was: ‘when will it be available for Android devices?’ Now, four months after that iOS release (which arrived alongside the B10/B10 Plus strobes), an Android version is here, albeit in beta and limited to a select number of Samsung devices for now.

As with its iOS counterpart, the Profoto Camera app for Android beta makes it possible for Android users to trigger a number of its strobes, speedlights and compact LED lights using the company’s AirX Smart-TTL technology. Specifically, the app will work with Profoto’s A10, B10, B10 Plus, C1 and C1 Plus flashes, bringing full flash tube sync support.


Getting this support wasn’t easy, says Profoto in its announcement post:

‘One difference in synchronizing external flash to a mobile device compared to traditional capturing devices like DSLR or MILC is that smartphone cameras require a much more flexible flash-length on different shutter speeds. This makes it more difficult to fire the flash at the exact time and duration to light the image. Up until now, attempts to synchronize the two have fallen short, making Profoto the world’s first company to successfully bring the full power of professional flashes to smartphones with their proprietary Profoto AirX technology’

The Profoto Camera app for Android is available for free in the Google Play Store as an ‘early access’ beta starting today for the following Samsung smartphones running Android OS 8 or later:

  • Galaxy S8 line
  • Galaxy S9 line
  • Galaxy S10 line
  • Galaxy S20 line
  • Galaxy Note 9 line
  • Galaxy Note 10 line
  • Galaxy Note 20 line

Profoto doesn’t specify when the app will likely be out of beta, nor when we can expect to see support for other phones. It’s likely going to be a slow-going process, as Profoto needs to create specific algorithms for each device to ensure compatibility with the onboard camera systems—no small feat considering the fragmentation of devices running Android OS.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on (Select) Android smartphones now work with Profoto’s professional lights via Profoto Camera app beta

Posted in Uncategorized

 

How Does Adobe Stock Work: Successfully Selling Your Photos

04 Sep

Posts with images get 650% more engagement than those without. For this simple reason, stock photos are in high demand and it’s unlikely that demand will be going away anytime soon — and that’s just content writers. Designers, artists, and other creatives always need new design assets and content for their work. This has given rise to a number of Continue Reading

The post How Does Adobe Stock Work: Successfully Selling Your Photos appeared first on Photodoto.


Photodoto

 
Comments Off on How Does Adobe Stock Work: Successfully Selling Your Photos

Posted in Photography

 

How does anamorphic photography work?

25 Aug

Last time we posted about an anamorphic lens there were lots of questions about what anamorphic photography is and how it works, so we thought we’d offer a simple guide to the basics to help everyone understand what it is.

The word ‘anamorphic’ comes from the Greek words ‘ana’ and ‘morph’ which together suggest something that alters its shape but then changes back to normal again. In photography, it relates to a situation in which an image is distorted as it is taken, but is then undistorted to a normal shape when it is projected or displayed. A common example of anamorphic imagery can be seen in the cinema, when a movie is shown in that long letterbox format that stretches across the screen. This characterizes what most of us think of as anamorphic.

A 2.35:1 format image shot using the Sirui 35mm F1.8 1.33x anamorphic lens. The image was recorded in 4K video, so it started life in the 16:9 format.

The optical anamorphic process was invented during World War I to help observers in tanks get a wider view of the battleground without having to make the observation hole any larger, and the system was used on-and-off in cinema once the war was over. Anamorphic films became more popular in the early 1950s with the process re-energized to provide an exciting alternative to the almost-square format of television.

With the growing number of TVs in homes, Hollywood wanted to ensure cinema retained some unique qualities that would still make people leave their living room and part with their money to watch a movie. The long letter-box format is now synonymous with epic cinema all over the world and is a subconscious indicator for the audience that the movie they are watching has high production values.

That link with the atmosphere of ‘serious’ filmmaking is why anamorphic photography is so attractive to amateur and professional filmmakers alike, as it can lift production value in the eyes of the audience (or client), and elevate the filmmaker above the throng of video-makers shooting 16:9 or 4:3 on ‘lower-end’ equipment. Warranted or not, many people see using the anamorphic process as a leg-up on the way to artistic greatness.

The [anamorphic] letter-box format is now synonymous with epic cinema all over the world

The image formats associated with anamorphic cinema are also pretty interesting in their own right and engage some different compositional properties that are genuinely useful and unique compared to those that apply to a typical still image – in the same way that panoramic formats work in still photography.

Anamorphic aspect ratios

While in stills photography we tend to use whole numbers when discussing the aspect ratio of any given format, such as 3×2, 4×3, 5×4, 10×8, in anamorphic cinema these things are measured using 1 as the height of the frame. So, popular aspect ratios these days include 2.35:1, 2.39:1 and 2.40:1, though the official standard according to the Society of Motion Picture and Television Engineers (SMPTE) specifies 2.39:1 for widescreen projection.

CinemaScope is 2.66:1 and belongs to 20th Century Fox, but there were a whole load of other formats devised by other studios that didn’t fancy paying Fox for the license to use 2.66:1. Hasselblad fans will recognize the CinemaScope proportions as they are approximately the same as the XPan format that lives on via the 65:24 ratio in the X1D II camera.

This is a 2.66:1 CinemaScope format image, created by using a 2x anamorphic lens while recording 4:3 full-sensor video in a Micro Four Thirds camera

In this digital age, filmmakers can use whatever format suits them, though there is some value in sticking to an established ratio just for familiarity and what it might mean to the audience. The movie La La Land, for example, is shown in CinemaScope to help invoke a sense of the age it portrays – audiences, not just of a certain age, pick up on these things subconsciously and it adds something to the picture.

This diagram shows how different popular projection formats compare. The 4:3 aspect ratio was popular in film and still is in digital sensor formats; 3:2 is what you get when you shoot full frame and with APS-C/Super 35 sensors; and 16:9 is the standard for most digital cameras in video mode and what we see most in popular video. Widescreen really starts at 2.35:1 and 2.39:1 with moderate anamorphic lenses, and 2.66:1 provides a really long and thin widescreen format.

In film-based cinema, the ends of a wide format might be cropped from the picture to meet the 2.39:1 requirement, especially when a 2x anamorphic lens is in use, but in digital video, a timeline of any proportions can be created to show a finished product in 3.5:1 if desired.

How the format is made

Normal, spherical lenses look all around themselves in equal measure – viewing at the same angle left/right as they do up/down. Anamorphic lenses capture an elongated horizontal field of view. To achieve this, the lens squeezes the image horizontally to fit within the constraints of the sensor’s dimensions.

This anamorphic image was captured using an anamorphic lens on an iPhone 11 Pro, shown here desqueezed (L) and squeezed (R).
Photo by Dale Baskin


That squeezed view has to fit onto a relatively square sensor, such as a 4:3 Micro Four Thirds chip, so the anamorphic element group in the lens squeezes/distorts the horizontal view so that it will fit into the available sensor space. To do this, a cylindrical element is used that has the shape of a section cut from a tube – it is bent in only one plane rather than being convex all round as a normal lens would be.

This is clearly a piece of paper and not a glass lens element, but it gives you an idea of the shape of the anamorphic cylinder element that creates the wider horizontal field of view without changing the vertical field of view.

That cylindrical lens is the shape produced when you bend a sheet of paper – bowed in the horizontal aspect but still flat in the vertical aspect – allowing it to capture a wider field of view horizontally than it does vertically. As in a Hall of Mirrors, this distorted surface creates a distorted image on the sensor or film. When projected for the audience, that distorted image is passed through another anamorphic lens to distort the view once again, but this time in reverse – un-distorting it so that it looks normal. Historically, in anamorphic cinema, both the camera and the projector were fitted with anamorphic lenses.

In the digital world an anamorphic lens is needed only to record the image, as software can be used to stretch the recorded image and make the subjects look geometrically correct again.

This picture was taken with a 1.33x anamorphic lens in stills mode on the Lumix GH5. The recorded image measured 5184 pixels wide and 3456 high, as shown in the Image Size window of Photoshop
To find the width that the image needs to be for the subject to look normal, you multiply the recorded width by the anamorphic factor – in this case 1.33x
With the width and height dimensions unlinked you just enter the new width dimensions. In this case 5184 x 1.33 = 6895 pixels. Hit ‘OK’ and the image stretches to the right anamorphic format

In still photography, de-squeezing a picture is pretty straightforward. You simply multiply the horizontal pixel count by the squeeze factor of the lens. So, if your original image measures 4000×3000 pixels, for example, you multiply 4000 pixels by the squeeze factor to get the width the final image should be. If the lens had a 1.33x factor, we multiply 4000 x 1.33 to get 5320 pixels. In the Resize dialogue of your editing software, unlink the horizontal and vertical resolution figures so the aspect ratio can change, and then replace the 4000 with 5320 for the horizontal dimension, keeping the 3000 pixel (vertical) dimension unchanged.
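In practice, the de-squeeze is a one-line horizontal resize. Here’s a minimal sketch using Python and the Pillow library (the file names and the 1.33x factor are placeholders; substitute your own image and your lens’s marked squeeze factor):

```python
from PIL import Image

SQUEEZE_FACTOR = 1.33  # as marked on the anamorphic lens

img = Image.open("squeezed.jpg")               # e.g. 4000 x 3000 as captured
new_width = round(img.width * SQUEEZE_FACTOR)  # 4000 x 1.33 = 5320
# Stretch horizontally only; the vertical dimension stays untouched.
desqueezed = img.resize((new_width, img.height), Image.LANCZOS)
desqueezed.save("desqueezed.jpg")              # 5320 x 3000 - subjects look normal
```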

Why not just crop?

You would think it would be easier just to crop a normal picture to make a letterbox format than going to all the bother of getting special lenses – and you’d be right. The issue though is that when you crop you create a lower resolution image – whether on film or on a digital sensor – and either waste film or pixels in doing so. Anamorphic lenses create an image that fills the film frame/sensor area so all those pixels you paid for are used.

This is a frame from a 4K video recorded with the Sirui 35mm 1.33x anamorphic lens. It uses all eight million of the sensor’s pixels. In contrast, cropping a 16:9 video frame to this 2.35:1 format would give us an image with roughly 6MP of data.

Shooting video using a 4K camera produces frames that are each about 8MP. Once you crop that 4K image to an anamorphic format, such as 2.39:1 for example, you end up with footage containing far fewer pixels. 4K frames shot in 16:9 (1.78:1) are 3840 x 2160 pixels, but when that frame is cropped to 2.39:1 it becomes 3840 x 1606, which is only 6.2MP. Using an anamorphic lens allows you to record using the full 4K area of the sensor, so the resulting 2.39:1 footage retains all 8 million captured pixels instead of just 6.2MP.
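The arithmetic is easy to check for yourself. A quick Python sketch, using the UHD frame size and the 2.39:1 target ratio from the paragraph above:

```python
def crop_megapixels(width: int, height: int, target_ratio: float) -> float:
    """Megapixels left after cropping a frame to a wider aspect ratio."""
    cropped_height = int(width / target_ratio)  # 3840 / 2.39 = 1606
    return width * cropped_height / 1e6

print(f"16:9 UHD frame: {3840 * 2160 / 1e6:.1f}MP")                     # ~8.3MP
print(f"Cropped to 2.39:1: {crop_megapixels(3840, 2160, 2.39):.1f}MP")  # ~6.2MP
```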

This is a still image recorded on the GH5 through the Sirui 35mm 1.33x anamorphic lens. The top image represents what the view looked like, and the second image is how the image looks once the lens has squeezed the wide aspect onto the 4:3 sensor. In software, I de-squeezed the 4:3 captured frame to 16:9 so that the subject would look normal.

Some cameras, like the Panasonic Lumix GH5 and GH5S, offer a specific Anamorphic mode that allows the whole 4:3 sensor area to be used to record the footage. In this mode, the GH5 can create 6K footage in which each frame contains nearly the full resolution of the 20MP sensor – around 18 million pixels. When that image is de-squeezed to produce the anamorphic final result, those 18 million pixels will still be present.

If you were to use a 1.33x anamorphic lens like the Sirui 35mm F1.8 the footage de-squeezes to a 16:9 format, but one that contains 18MP instead of the 14MP you’d get by simply cropping the full frame to 16:9. Even then, cropping this 16:9 image to 2.35:1 will deliver a higher resolution frame than shooting with a 16:9 area of the sensor in 4K – 10MP instead of 8MP.

Here’s the Cooke 32mm T2.3 Anamorphic/i 2x lens on the Lumix GH5 – well, it’s more like the camera is on the lens rather than the other way round. With the camera in its Anamorphic Mode the 4:3 sensor-captured image de-squeezes to make a 2.66:1 CinemaScope format picture.

The difference is more pronounced when using lenses with a greater than 1.33x anamorphic squeeze factor. A 2x lens, such as the Cooke 32mm Anamorphic/i, would create a 2.66:1 output from the full area of a 4:3 sensor or a 3.5:1 final result from a 16:9 area. So, if you were cropping 4K footage to match those aspect ratios, you’d end up with 3840 x 1444 pixels (5.5MP) for the 2.66:1 format or 3840 x 1098 pixels (4.2MP) for the 3.5:1 format. Both of those represent a significant drop in resolution from the original 8MP of 4K footage – which is all preserved when using an anamorphic lens.

Resolution isn’t the only benefit

Retaining decent resolution isn’t the only reason to shoot with an anamorphic lens: these lenses have specific characteristics many people find attractive.

The almost trademark blue streak extending across the frame of an anamorphic picture comes from point light sources reflecting in the surface of the anamorphic cylinder and spreading out across the scene.

The most commonly recognized characteristic is a blue streak that shoots across the frame when a point light source is aimed at the camera – a car headlight for example. This is caused by direct light reflecting off the anamorphic cylinder and then spreading out left and right across the frame.

Obviously, these blue lines are more prevalent in lenses that have the anamorphic cylinder at the front of the construction, and much less obvious in those that place the cylinder at the rear. The new Arri/Zeiss anamorphics spread the cylinder effect throughout the lens construction, rather than having a specific group of elements to do the job, which allows a degree of control over how dominant the blue streaks will be. In more regular anamorphics the blue streak effect can be played up with reflective coatings inside the forward elements to enhance the color of the streak and how easily it can be ‘activated’.

The bulbous anamorphic cylinder can be a magnet for light and can reduce contrast when even off-center lights are pointed towards the lens.

As well as this specific type of flare, light falling on the front element will create an overall flare that in turn can give anamorphic footage a low contrast atmosphere even when contrast is quite high. This again depends on the design of the lens. Older lenses tend to flare more easily while newer designs aim for more contrast and allow filters to be used when lower contrast is desirable.

The oval shape of out-of-focus highlights is usually demonstrated in night scenes with distant car lights, but this characteristic is also visible during daylight hours. Here you can see the light between the trees – which would usually appear round – takes on an upright oval shape.

The other immediately recognizable characteristic of anamorphic lenses is the elongated shape of out-of-focus highlights. These highlights – a street light in the distance, for example – would reproduce as bright discs in pictures taken with a normal spherical lens, but when shot using an anamorphic lens they appear as ovals. In fact, all out-of-focus details are reproduced with an elongated shape that exaggerates the degree to which things are out of focus. This in turn only makes the focused subject stand out more.

The appearance of an extra-shallow depth of field is further enhanced by the altered angles of view we get with an anamorphic lens. A lens with a 1.33x anamorphic effect will have its marked focal length widened by the anamorphic factor – so a 100mm 1.33x lens would deliver the angle of view of a 75mm (100 divided by 1.33 = 75). With more dramatic anamorphic lenses the effect is more pronounced, so a 1.8x lens would give that 100mm the view of a 56mm. The final look is of a 56mm lens that exhibits depth-of-field characteristics similar to those we would expect from a 100mm lens.
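A tiny sketch of that relationship (the function name is mine; the division is exactly the one described above):

```python
def effective_focal_length(marked_mm: float, anamorphic_factor: float) -> float:
    """Horizontal angle-of-view equivalent of an anamorphic lens."""
    return marked_mm / anamorphic_factor

print(effective_focal_length(100, 1.33))  # ~75mm-equivalent horizontal view
print(effective_focal_length(100, 1.8))   # ~56mm-equivalent horizontal view
```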

Above you can see how the same scene is reproduced differently by a normal spherical lens and an anamorphic lens of the same focal length. I used the Lumix X-Vario 12-35mm set to 35mm to compare with the new Sirui 35mm 1.33x anamorphic lens.

The camera-to-subject distance remained the same, as did the F2.8 aperture, but there is a slight difference in the degree to which the background appears out-of-focus. As you can see, the subject appears much smaller in the anamorphic images due to the extra width of the view, so naturally, a photographer would normally get closer to make the subject fill the frame, and thus increase the shallow depth-of-field effect simply by using a closer focus distance to achieve the same subject magnification.

What is also clear from these images is that the anamorphic lens delivers a considerably wider view for the same marked focal length. This comparison also shows the shape of out-of-focus highlights from the same scene rendered quite differently.

Is it worth the effort?

That’s a matter of opinion, of course, but those wanting to make the most of all the tools available to influence the audience will say ‘yes’. The look is special and it can add something very substantial to the atmosphere of a film. As mentioned earlier, though, an anamorphic lens can’t turn a poor film into a good one, or compensate for bad lighting, primitive camera work or wooden acting – it is only one of the many elements that can make a movie an award winner or a rotten tomato.

Street lights just out of the frame (and a high ISO setting) contribute to a nice soft contrast in this scene, even though the actual scene was filled with deep shadows. The look and feel of the shot are different enough from what we would expect of a regular spherical lens that we can tell there is a certain something else about it. The highlights and background details look a bit different, and there is a wide feel but without the usual distortion of a close perspective.

Anamorphic photography also isn’t suited to all subject types, and while not a hard-and-fast rule, it tends to work best with drama rather than documentary. The widescreen says ‘now I’m going to tell you a story’ and can prepare the audience for all the exaggeration that makes a story moving, dramatic and emotional, while more regular formats might be better for presenting strictly factual information.

There are in-between cross-over areas though that still work well, such as those old wildlife films that present factual information with a deep Hollywood voice-over and in which all the lions in the family have a name and roam the grasslands to the sound of a full studio orchestra.

There’s also a sense of cinema about a still shown in anamorphic format

In stills photography, what an anamorphic lens will give us is something a bit different. ‘Different’ is something I value, though obviously ‘good different’ rather than the other. ‘Different’ makes our work stand out from the rest, and as there aren’t many stills photographers using anamorphic lenses ‘different’ is what you will get.

There’s also a sense of cinema about a still shown in anamorphic format, and with the built-in characteristics of an anamorphic lens that inherent atmosphere will feel stronger, making it possible to present movie-stills filled with an implied storyline – without actually having to go to the bother of shooting the movie.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on How does anamorphic photography work?

Posted in Uncategorized

 

Oakland Museum of California showcases the work of Dorothea Lange in a free online exhibition

12 Aug

The Oakland Museum of California has put together a digital archive of photographs captured by Dorothea Lange, showcasing some of the best works from the 20th-century documentary photographer and photojournalist.

The extensive archive is split into four categories: The Depression, World War II at Home, Post-War Projects and Early Work/Personal Work. Each of the categories provides a synopsis of Lange’s work during the specified timeframe and further divides her images into themed galleries, which show not only the images Lange captured, but also supplementary material, such as notes to Lange from the United States Department of Agriculture, contact sheets of Lange’s images, maps of her travel routes and more.

It’s a fascinating, insightful and sometimes heartbreaking journey through the life and work of one of the most iconic 20th century American photographers. The online exhibition is entirely free to view, so set aside a few hours and head on over to the Oakland Museum of California website.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Oakland Museum of California showcases the work of Dorothea Lange in a free online exhibition

Posted in Uncategorized

 

Adobe reveals how its CAI digital content attribution system will work

05 Aug

During its Adobe MAX 2019 event, Adobe announced its Content Authenticity Initiative (CAI), the first mission of which is to develop a new standard for content attribution. ‘We will provide a layer of robust, tamper-evident attribution and history data built upon XMP, Schema.org and other metadata standards that goes far beyond common uses today,’ the company explains in a new white paper about the initiative.

The idea behind Adobe’s CAI is that there’s no single, simple, and permanent way to attach attribution data to an image, making it hard for viewers to see who owns the image and the context surrounding its subject matter. This paves the way for image theft, as well as the spread of misinformation and disinformation, a growing problem on the modern Internet.

Adobe’s new industry standard for digital content attribution, which was announced in collaboration with Twitter and The New York Times, will potentially change this, adding a level of trust in content that may otherwise be modified or presented with an inauthentic context on social media and elsewhere.

Adobe said in November 2019 that it had a technical team:

…exploring a high-level framework architecture based on our vision of attribution, and we are inviting input and feedback from industry partners to help shape the final solution. The goal of the Initiative is for each member to bring its deep technical and business knowledge to the solution. Success will mean building a growing ecosystem of members who are contributing to a long-term solution, adoption of the framework and supporting consumers to understand who and what to trust.

The newly published white paper, titled ‘The Content Authenticity Initiative: Setting the Standard for Digital Content Attribution’, explains how this new digital content attribution system will work.

The team cites a number of ‘guiding principles’ in the initiative, including the ability for their specifications to fit in with existing workflows, interoperability for ‘various types of target users,’ respect for ‘common privacy concerns,’ an avoidance of unreasonable ‘technical complexity and cost’ and more. Adobe expects a variety of users will utilize its content attribution system, including content creators, publishers and consumers, the latter of which may include lawyers, fact-checkers and law enforcement.

The team provides examples of the potential uses for its authenticity system in various professions. For photojournalists, for example, the workflow may include capturing content at a press event using a ‘CAI-enabled capture device,’ then importing the files into a photo editing application that has ‘CAI functionality enabled.’

Having preserved those details during editing, the photojournalist can then pass on the images to their editor, triggering a series of content verifications and distribution to publications, social media managers and social platforms, all of which will, ideally, support displaying not only the CAI information but also any alterations made to the content (cropping, compression, etc).

The idea is that at all times during its distribution across the Internet, anyone will be able to view the details about the image’s origination, including who created it, what publication originally published the image, when the photo was captured, what modifications may have been made to the image and more.

The white paper goes on to detail other potential creation-to-distribution pipelines for creative professionals and human rights activists.

What about the system itself? The researchers explain that:

The proposed system is based on a simple structure for storing and accessing cryptographically verifiable metadata created by an entity we refer to as an actor. An actor can be a human or non-human (hardware or software) that is participating in the CAI ecosystem. For example: a camera (capture device), image editing software, or the person using such tools.

The CAI embraces existing standards. A core philosophy is to enable rapid, wide adoption by creating only the minimum required novel technology and relying on prior, proven techniques wherever possible. This includes standards for encoding, hashing, signing, compression and metadata.

Each process during the creator’s workflow, such as capturing the image and then editing it, produces ‘assertions’ as part of the CAI system. Typically, according to the white paper, these assertions are JSON-based data structures that reference declarations made by the actor, which can refer to both humans and machines, including hardware like cameras and software like Photoshop.

The researchers go on to explain that:

Assertions are cryptographically hashed and their hashes are gathered together into a claim. A claim is a digitally signed data structure that represents a set of assertions along with one or more cryptographic hashes on the data of an asset. The signature ensures the integrity of the claim and makes the system tamper-evident. A claim can be either directly or indirectly embedded into an asset as it moves through the life of the asset.

For every lifecycle milestone for the image – when it was created, published, and so on – the authenticity system will create a new set of assertions and a claim related to it, with each claim daisy-chaining off the previous claim to create something like a digital paper trail for the work.
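The white paper describes the pattern at this level of abstraction without publishing a concrete schema, so the following Python sketch is purely illustrative: the assertion contents are hypothetical, SHA-256 stands in for whatever hash the final specification adopts, and an HMAC stands in for the real digital signature.

```python
import hashlib
import hmac
import json

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical assertions: JSON-based declarations made by "actors"
# (a capture device, editing software, or a person).
assertions = [
    {"action": "captured", "actor": "CAI-enabled camera"},
    {"action": "cropped", "actor": "photo editing software"},
]

# Each assertion is hashed, and the hashes are gathered into a claim
# along with a hash of the asset (image) data itself.
with open("photo.jpg", "rb") as f:
    asset_bytes = f.read()

claim = {
    "assertion_hashes": [
        sha256_hex(json.dumps(a, sort_keys=True).encode()) for a in assertions
    ],
    "asset_hash": sha256_hex(asset_bytes),
}

# The real system digitally signs the claim; an HMAC stands in here just to
# show how tampering with the claim or asset becomes detectable.
signature = hmac.new(
    b"actor-signing-key",  # placeholder for the actor's private key
    json.dumps(claim, sort_keys=True).encode(),
    hashlib.sha256,
).hexdigest()
print(signature)
```

Each subsequent claim would then reference the hash of the previous one, producing the daisy-chained paper trail described above.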

Of course, there are potential issues with Adobe’s vision for content authentication, the most obvious being whether the industry is willing to adopt this system as a new standard. The CAI digital content attribution system will only succeed if major hardware and software companies implement the standard into their products. Beyond that, social media platforms would need to join the effort to ensure these permanent attribution and modification details are accessible to users.

Adobe’s system will also have to achieve its highest goal, which is to be tamper-proof – something that is yet to be demonstrated. Work under this initiative is still underway; interested consumers can find all of the technical details in the white paper linked above.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Adobe reveals how its CAI digital content attribution system will work

Posted in Uncategorized

 

Canon EOS R5 and R6 overheating claims tested: cameras work as promised – but that’s not enough

04 Aug
Testing conducted in Seattle by our Technical Editor Richard Butler. Real-world production experiences by Jordan Drake: the director and editor of many of our ‘DPRTV’ videos.

If you have any interest in cameras, you may have witnessed the heated discussions lately around the new Canon EOS R5 and R6’s tendency to overheat when capturing video internally. The Internet tends to amplify the most extreme version of any story or phenomenon, which might have led you to the impression that the cameras are unusable.

Jordan’s EOS R5 experience

We shot for 10 hours at a variety of locations, which I thought would give the camera ample opportunity to cool down. I planned to shoot the episode in the 4K HQ mode, with occasional 4K/120p and 8K shots peppered throughout. I quickly realized that setting up a shot and menu-diving would reduce the amount of record time I had for HQ, so I found myself spending far less time previewing the shot before rolling, adding a layer of stress.

Eventually, I realized I couldn’t record all the talking points in 4K HQ, and settled on using 4K HQ for wide shots and standard, line-skipped 4K for closeups. This made shooting sustainable, though I found myself avoiding trying to capture any spontaneous establishing shots or cutaways, lest I drop the dreaded overheating clock a bit lower. While our host Chris took it in his stride, I can only imagine how frustrating it would be for the talent to not know if the camera will last until the end of a take.

I also found myself heavily rationing the 4K/120p, as it really chews up your remaining shooting minutes. I spent two minutes capturing the seagull footage in the episode: beforehand, the camera said it would shoot 15 minutes of 4K HQ, but when I returned I had only five minutes remaining!

If the quality difference between 4K HQ and standard 4K capture were not so dramatic, this would bother me less. However, once you start viewing and editing the gorgeous 4K HQ footage, it makes it that much harder to go back to inferior line skipped 4K, and that’s a type of disappointment I don’t want to be dealing with on a shoot.

After extensive testing of both cameras, our conclusions with regard to internal recording are:

  • Both the EOS R5 and R6 appear capable of working as promised
  • Lack of dependability makes them a poor choice for much professional video work

We tested a pair of R5s and an R6 in a variety of warm conditions and found they consistently performed in line with the limitations that Canon acknowledged at launch. However, the practical implications are that the cameras are prone to overheating if you shoot for extended periods, and if you have crew or talent waiting to re-start shooting, they may take too long to recover.

It should be noted that Canon did not design either the EOS R5 or R6 to be professional video tools, nor does it primarily market them as such. But based on our testing and real-world usage, we would caution against using them as substitutes for such tools.

So why is YouTube saying the sky is falling?

Our testing suggests that the cameras perform in exactly the way that Canon said they would. However, there is an important caveat that Canon’s figures don’t address: although the cameras can repeatedly deliver the amount of video promised, they may not always do so in real-world usage.

Even set to the mode designed to limit pre-recording temperature build-up, the clock is essentially running from the moment you turn the camera on. Video recording is the most processor-intensive (and hence most heat generating) thing you can do, but any use of the camera will start to warm it up, and start chipping away at your recording times. Consequently, any time spent setting up a shot, setting white balance, setting focus or waiting for your talent to get ready (or shooting still images) will all cut into your available recording time, and you won’t reliably get the full amount Canon advises.

Not only does this make the R5 a poor fit for many professional video shoots, it also means that you can’t depend on the cameras when shooting video alongside stills at, say, a wedding – a situation the EOS R5 is clearly intended for.

Even when left in direct sunshine, the cameras continued to record for the duration Canon promised. However, this is only true when you’re not using the camera for anything else.

The one piece of good news is that the camera’s estimates appear to be on the conservative side: every time the camera said it would deliver X minutes of footage, it delivered what it’d promised. You can also record for much longer if you can use an external recorder but again, this probably isn’t going to suit photographers or video crews looking for a self-contained, do-everything device.

Click here if you want to see our test methods and results.

EOS R5 suggestions:

  • Expect to shoot line-skipped 30p for the bulk of your footage
  • Only use 8K or oversampled HQ 4K for occasional B-Roll
  • 4K/120 and 8K will cut into your shooting time quickest of all
  • Be aware of your setup time and cumulative usage (including stills shooting)

EOS R6 suggestions:

  • Don’t expect to be able to shoot for extended periods
  • Be aware of the need for extensive cooling periods between bursts of shooting

Analysis: Why hadn’t Canon thought about this?

It’s easy to fall into the trap of thinking this means Canon didn’t put enough thought into thermal management for these cameras. Our testing suggests this isn’t the case, but that the cameras’ specs are rather over-ambitious.

Jordan’s EOS R6 experience

I had done some testing prior to my shoot and was comfortable that overheating wouldn’t be a problem if I stuck to 4K/24p. Unfortunately, my experience on a warm day was quite different from that room-temperature test. There’s no line-skipped 4K mode on the R6, so if the camera overheats, you’re back to 1080p, which will be a jarring transition for viewers watching on larger screens.

While I was able to record much longer with the R6 before encountering the overheat warning, once it appears, the camera takes far longer to cool down than the R5. Our regular drives in an air-conditioned car allowed Chris and Levi’s R5 to function throughout the day, but at one point I was left sitting in the car, babysitting a hot R6, while they went out to shoot. During a one-hour lunch, the R5 returned to normal, but the R6 still showed a twenty-minute warning.

This was hugely disappointing as, rolling shutter aside, the R6 video quality is excellent, and I’d be perfectly happy using it over the R5. However, the longer cool down times would probably lead me to use the R5, dropping to line-skipped 4K from time to time.

While I enjoyed most aspects of using these two cameras, I have no intention of using either of them as a primary video camera. They would be great for grabbing occasional, very high quality video clips, but I’d never want to rely on them for paid work.

With the exception of specialist video models, most cameras that shoot 4K are prone to overheating, regardless of the brand. Some companies let you extend the recording time by ignoring overheat warnings (and risk ‘low-temperature burns’ if you handhold the camera), while others simply stop when they get too hot. This should make it clear that shooting 4K for an extended period is difficult. For instance, Sony says the a7 III will shoot around 29 minutes of 4K video with the temperature warnings set to ‘Std,’ while the Fujifilm X-T4 promises 30 minutes of 4K/30 and 20 minutes of 4K/60.

The cumulative heat is constantly counting against you

8K is four times as much data as natively-sampled 4K and seventeen times more than the 1080 footage that older cameras used to capture so effortlessly. Perfect 2:1 oversampled 4K (downsampled 8K) requires this same amount of data, which is still 1.7x more data than is used to create 4K oversampled video from a 24MP sensor. Data means processing, which means heat.

The fact that the EOS R5 can shoot two hours of 4K/30p footage (in line-skipping mode) when sitting in direct sunshine suggests it’s pretty good at dissipating heat. But it seems trying to do so with 1.7x more data than the a7 III and X-T4 is a step too far: it’ll match them for promised recording duration but only just. This leaves it much more sensitive to any other use when not recording.

The EOS R6 is a slightly different matter. It can shoot 40 minutes of 4K taken from 5.1K capture, which is a pretty good performance and may be enough that you won’t often hit its temperature limits. However, even after a 30-minute cooling period, it has only recovered enough to deliver around half of its maximum record time, whereas the EOS R5 recovered nearly its full capability. The metal rear plate of the R5 clearly allows it to manage heat better than the R6 can.

And, as Jordan’s experiences show: if you don’t have time to let the cameras cool, that cumulative heat is constantly counting against you.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Canon EOS R5 and R6 overheating claims tested: cameras work as promised – but that’s not enough

Posted in Uncategorized

 

My new evaluation criteria for my portfolio work…

05 Jun

From the inimitable “Burns Auto Parts Blog”

 

 

So here’s my challenge to you: look at your work on your site. Do you love it – all of it? Does it make you smile/get you excited/make you want to do more of it? Be honest – don’t look at it from its technical side, and definitely do not ask “Do I think buyers will want this?” If you do, then look at your marketing.

If you don’t, then get off your creative butt and start making the work that you make out of love and that weird compulsion that makes you do this and not be a 9-5 “normal” person.

 

 

Words of wisdom indeed.  I have been working on a major portfolio overhaul, with just this in mind.  You know the saying… Show what you wanna shoot!

 

 



F/1.0

 
Comments Off on My new evaluation criteria for my portfolio work…

Posted in Photography

 

How DSLR Lenses Work: DSLR Lenses Explained

24 May

A camera lens is arguably the most important part of a photographer’s set-up, to the point where most professional photographers would rather shoot with an okay camera body and a top-quality lens than the other way around. However, if you are just entering the world of DSLR lenses, at first glance they can be a Continue Reading

The post How DSLR Lenses Work: DSLR Lenses Explained appeared first on Photodoto.


Photodoto

 
Comments Off on How DSLR Lenses Work: DSLR Lenses Explained

Posted in Photography

 

Filmmaker uses COVID-19 work lull to make and sell $10 3D-printed camera battery cases

07 May

A documentary filmmaker from Utah has designed and built a series of battery holder magazines that he says help solve the problem of knowing which batteries are fresh and which are depleted. The 3D-printed magazines hold three or four batteries for common cameras, and allow batteries to be inserted contacts-up when dead and contacts-down when fresh.

Tim Irwin, who is printing the magazines in his basement, had been meaning to come up with a solution to this problem for a while, and had tried downloaded plans for 3D-printed magazines in the past, but found they always broke. ‘I originally found files on Thingiverse that worked for a bit. But all the designs I tried from there ended up breaking because of a weak point in the print,’ he explains. ‘When the travel restrictions around COVID-19 hit, every one of my gigs was cancelled or postponed, so it seemed like the perfect time to dive into this side mission. I designed my own from scratch and refined it over a long period of time until I was happy with the product. I’m always looking for ways to make my kit more efficient, quick, and organized. The Battery Mag was born out of that.’

Tim has designed the magazine so that when fresh batteries are loaded with their contacts down, they are isolated from each other and from anything else the magazine might come into contact with, so the risk of shorting is avoided. And with dead batteries loaded with the contacts facing up, it is easy to see at a glance which battery to reach for next in fast-moving situations.

Tim, who owns Functional Films, makes commercial video documentaries and says he is usually on the road shooting about 140 days a year. That has all stopped due to the coronavirus outbreak, so this is how he is filling his time.

The Battery Mags are available for Panasonic DMW-BLF19, Canon LP-E6/N, Pentax D-LI90, Sony NP-FZ100 and Sony NP-FW50 batteries, and he says a unit for Panasonic S cameras is also in production. The magazines are $9, $10 and $14 each, and can be ordered via the Battery Mag website.

For examples of Tim’s work see his Instagram page.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Filmmaker uses COVID-19 work lull to make and sell $10 3D-printed camera battery cases

Posted in Uncategorized