
Instagram starts warning users about wildlife abuse when they search certain hashtags

07 Dec

Instagram has announced a new wildlife protection measure following a New York Times report on how some traffickers are using the platform as part of the illicit animal trade. In a blog post published earlier this week, Instagram said that it will start presenting a content advisory screen to users who search for hashtags that are “associated with harmful behavior to animals or the environment.”

This advisory, shown below, links to both the posts and a page where additional information on the matter is provided. That page, which discusses both environmental considerations and wild animal interactions, further links to TRAFFIC, the World Wildlife Fund and World Animal Protection.

In addition to encouraging its users not to damage the environment in order to get the perfect shot, Instagram says:

We also encourage you to be mindful of your interactions with wild animals, and consider whether an animal has been smuggled, poached or abused for the sake of tourism. For example, be wary when paying for photo opportunities with exotic animals, as these photos and videos may put endangered animals at risk.

Users who come across a video or photo they believe to be violating Instagram’s guidelines on this matter are urged to report it. The company explicitly states that it does not allow endangered animals to be sold via its platform, nor does it allow content featuring animal abuse.

Articles: Digital Photography Review (dpreview.com)

 

 

What you need to know about Sony’s a7R III

26 Oct

Introduction

The a7R III is Sony’s latest high resolution camera, which carries over many of the improvements found on the company’s flagship a9. The 42.4MP sensor itself hasn’t changed from the Mark II, but virtually everything else wired into it has. This allows for faster burst shooting, improved autofocus and higher quality 4K video. Some important user interface and ergonomic changes, inspired by the a9, have also made the transition.

Same sensor, better performance

The a7R III uses the same full-frame 42.4MP BSI CMOS sensor as its predecessor, though improved readout circuitry lowers the already low read noise, improving dynamic range.

Sony promises a slight improvement in rolling shutter, but not to the level of the a9, so the a7R III didn’t earn the ‘Anti-distortion shutter’ designation.

While the hybrid autofocus continues to offer 399 phase-detect points, there are now 425 contrast-detect points, up from 25 on the a7R II.

From a9 (hardware)

Several items found on the a9’s body have thankfully migrated to the a7R III. They include its brilliant 3.69M-dot OLED EVF with incredible contrast and resolution, a joystick for selecting an AF point, an AF-On button and a flash sync port (the a7R III’s flash sync speed is 1/250 sec). Also added are the a9’s ‘C3’ button to the left of the ‘Menu’ button on the camera’s back, and the ability to assign a button to ‘Protect/Rate’ in Playback. This should make it much easier to quickly select images from bursts in camera. Bursts can even be grouped during playback for faster image viewing and selection.

The rear thumb dial on the camera’s back plate has also been updated from the Mark II: it’s heftier, with better haptic feedback and less accidental input – just like the one on the a9.

On the memory card front, the Mark III has two SD card slots instead of the Mark II’s one. One slot supports UHS-II media while the other sticks to UHS-I.

From a9 (firmware)

There are some nice improvements on the software side, as well. Sony says that autofocus is up to two times faster than the Mark II’s. Low light performance is now rated down to -3 EV with an F2 lens, meaning the a7R III should offer similar low light AF performance to the a9 – a stop better than the Mark II. AF algorithms have been refined, with more ‘tenacious’ subject tracking and improved Eye AF. We’re hoping this means that Eye AF is more prone to stick to your original subject (per the a9), rather than randomly jump between detected faces as with the a7R II. Eye AF is still laggy when shooting bursts, though, much like on the a7R II.

Also new is the helpful Touchpad AF feature, which lets you use the LCD to move the focus point while your eye is to the viewfinder. Movement can be absolute (you’re picking a point on the frame) or relative (to the current focus point). You can also restrict the active area to certain parts of the screen. Our first impression is that Touchpad AF seems rather over-sensitive, jumping almost uncontrollably around the screen.

While Sony didn’t make a big deal about it, the a7R III should also inherit the a9’s improved JPEG color and noise reduction. Today, Sony’s JPEG engine renders some of the finest detail we’ve seen amongst cameras, even at high ISO. But JPEG color still remains a point of contention.

From a9: Battery!

For both stills and video shooters, perhaps the biggest news is that Sony has found room for the larger NP-FZ100 battery used in the a9. This required a complete redesign of the body, including a slightly modified grip, but it means a huge boost in battery life. If you’re using the LCD, expect 650 shots per charge (which is the ‘official’ CIPA number), and 530 shots with the EVF. Compare that to the 290 shot CIPA rating the Mark II received. Color us impressed.

An optional battery grip, the same VG-C3EM model as the a9 uses, doubles battery life, so you’ll get up to 1300 shots.

Entirely new

There are a couple of things that are a7R III ‘originals’. The first is a redesigned low vibration shutter mechanism, which allows 10 fps bursts without the risk of ‘shutter shock.’ It also allows for the 1/250 sec flash sync mentioned earlier. With the proper strobes, you can even get up to 10 fps shots with flash – something even an a9 won’t do (it’s capped at 5 fps with flash, since that’s its maximum mechanical shutter rate).

The camera has two USB ports. The first is a USB 3.1 Type-C connector (found on modern smartphones and newer Apple laptops), which allows tethering and battery charging. A more traditional micro USB jack is also available, which supports existing remotes and external battery packs.

Responding to user feedback, Sony has added the ability to enter the menus while the camera is writing to a memory card. YES!

Video

Better processing means improved detail and lower noise in both full-frame and Super 35 crop mode 4K. The real standout footage, as before, should be the Super 35 4K, since it’s oversampled.

The AF algorithms in video have also been improved and are more resistant to refocusing off to the background. That’s a huge improvement over the Mark II, and means many casual users can leave the camera in complete auto AF area mode (‘Wide’) with Face Detection on and expect precisely focused 4K footage.

If you’re looking for a simple ‘tap-to-track-subject’ mode a la most other manufacturers, you’re still out of luck. The camera unfortunately retains the outdated ‘Center Lock-On AF’ mode, which you’ll have to turn on to enable ‘tap-to-track’ functionality. Once it’s on, you can tap anywhere on the screen and it’ll put a box around your subject and track it. In our experience it doesn’t work as well as the Lock-on AF modes for stills, and it’s unfortunate that you have to engage this mode at all to get functionality you’d expect by default. Furthermore, you’ll have to remember to turn ‘Center Lock-on’ off when you switch back to stills shooting, since it’s not a mode you’ll ever want there and it can be triggered accidentally by a touch of the touchscreen.

New video functionality

This one is kind of huge: there are now separate function button configurations for stills and movie modes. By default the movie mode functions are set to ‘As in Stills mode’ but this can be edited, per button, to ensure you have access to the settings you need for both situations. We’ve been asking for this for a long time, as video needs often differ drastically from stills needs, so this is a welcome change. We’d still like to see totally separate settings banks for video vs. stills – where each mode remembers your last used settings – but this is a start in the right direction.

The a7R III now supports S-Log 3 / S-Gamut 3, which offer even flatter profiles to make use of the camera’s full dynamic range. Also new is support for Hybrid Log Gamma (HLG), which allows you to view wide dynamic range capture on HDR displays without any post-processing required. On such displays HDR capture appears less ‘flat’, since they have a wide range of tones they can reproduce: the ‘flat’ log capture is automatically expanded to the full capability of the display, so your high contrast capture looks high contrast on-screen, without blown highlights or blocked shadows.

HDR display of HDR capture will become increasingly important in the stills world, as it is already in the video world, so we’re glad to see Sony taking this new movement seriously in even their prosumer cameras.

Multi-shot mode

This mode, similar to that on Olympus and Pentax cameras, shoots four uncompressed Raws, which must be later processed in Sony’s Imaging Edge software to combine them into a .ARQ file, which can then be adjusted. Both Olympus and Pentax do this in-camera.

The benefits of Multi-shot mode are an increase in color resolution (since each pixel gets its own red, green and blue value) and a reduction in noise and softness, since no demosaicing is needed and four images have been captured in place of one. The noise benefit can be had by stacking images from any camera, for a nearly ~2 EV noise or dynamic range improvement, but skipping demosaicing is specific to multi-shot modes like this one, which move the sensor in the precise steps needed to cancel out the effects of the Bayer color filter. You won’t realize these gains unless you post-process, though.
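
The stacking arithmetic above can be sketched in a few lines (a toy illustration with made-up numbers, not Sony’s actual pipeline): averaging four noisy frames of the same scene cuts random noise by a factor of sqrt(4) = 2.

```python
import random
import statistics

# Toy model: four exposures of an idealized, perfectly constant scene,
# each with independent random noise added. All numbers are made up.
random.seed(0)

TRUE_VALUE = 100.0   # "correct" pixel value of the flat scene
SIGMA = 10.0         # per-frame noise level
N_PIXELS = 20000

frames = [[TRUE_VALUE + random.gauss(0, SIGMA) for _ in range(N_PIXELS)]
          for _ in range(4)]

# Combining the four exposures: average each pixel across frames.
stacked = [sum(col) / 4 for col in zip(*frames)]

single_noise = statistics.pstdev(frames[0])   # ~10
stacked_noise = statistics.pstdev(stacked)    # ~5, i.e. sqrt(4) lower

print(single_noise / stacked_noise)           # ~2.0
```

The same averaging works with hand-stacked frames from any camera; what it can’t do is remove the demosaicing step, which is where the sensor-shift modes have their unique advantage.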

There’s at least a 1 second delay between shots while the camera waits for the sensor to settle. This delay means that this feature will not work well with moving subjects. You can change the delay to anywhere between 1 and 30s.

What’s missing

There are a couple of things that we would’ve liked to have seen on the a7R III. They include lossless compressed Raw, more use of the touch panel (for adjusting settings, as an example), in-camera Raw conversion and support for downloadable PlayMemories apps.

The lack of PlayMemories apps may be of particular concern to landscape photographers using such apps for timelapse or gradient filters, and for those who use apps like ‘Sync to Smartphone’ to automatically download all JPEGs from camera to their phones and online photo storage services. This is a trend, starting with the a9, we’d really like to see Sony reverse.

Overall, though, the a7R III is an impressive package, and one that we’re eager to spend more time with.


 

 

Landscape Photography: It’s All About the Light

16 Oct

There are many tools that photographers use for creating compelling landscape photography, but some fail to realize that light is the most important element. We only shoot in those magic hours when the sun’s rays hit our subject at an angle to create a warm glow.


What many people don’t realize is that there are lots of different types of light that can affect the quality of your landscape images. How you approach this light will make a huge difference in the quality of your photographic portfolio. Let’s get started and talk about a few of our favorite examples of beautiful light.

That magic time for landscapes is, of course, sunrise and sunset. But beyond that, what other types of light will make or break your images?

Reflected Light

[Image: reflected light in the Zion Narrows]

This picture was taken on one of our expeditions to Zion National Park in Utah. Part of the beauty and excitement of this trip is strapping on your water shoes, grabbing your hiking stick and wading through the river to get some amazing shots.

Reflected light, which can also be called bounced or diffused light, occurs when direct sunlight is reflected off an adjacent surface. The canyons of the Southwest are perfect for this type of light, as the color of the canyon is bounced back and reflected, giving a warm glow to the walls. The quality of this light is soft, even, and beautiful.

Overcast Light

[Image: overcast light at Morro Bay]

Morro Bay on the Central Coast of California has many faces depending on the weather. It’s just as striking in the fog as it is on a beautiful sunny day.

This quality of light is found on overcast and foggy days and is very soft and bluish. The color of this light comes from the whole sky, which acts like one big softbox and in the right situation can be very dramatic.

Backlight

[Image: morning light at Big Sur]

This image was taken in Big Sur, one of our favorite shooting locations. It boasts incredible sunsets, especially in the winter.

A typical backlit picture will have a rim of the sun’s rays around the subject, or you will be able to see the sun as a bright spot in the photograph. If you are using a small aperture, you will be able to get a “sun star” or sun flare effect like this one.

Direct Light

[Image: direct light at White Sands]

Because of the reflection of light off the sand, White Sands, New Mexico is an unparalleled photography location.

Direct sunlight is usually found approximately one to two hours after sunrise and one to two hours before sunset. It can be hot and unforgiving while casting strong shadows. This light works great for black and white but can sometimes be overly intense for color photography.

Morning and Evening Horizontal Light

This light is warm and horizontal and is caught during sunrise and sunset. It is horizontal because the sun’s rays are cast at an angle as the sun is rising and setting. This is the prime light for photography due to its combination of low contrast and warm tones. Objects lit directly by this light may seem to glow, as if illuminated from within, with details emerging clearly. Learn to use this light on a regular basis and you will be amazed at the results.

[Image: morning light in the Canadian Rockies]

The Canadian Rockies in the fall never fail to impress us. The crisp mountain air and the deciduous larch trees make this an amazing photographic location.

Open Shade

In landscape photography, open shade consists of areas not lit by direct sunlight. This is very soft light and is common in forested areas. The best part about this type of light is you can shoot all day and still have the benefit of this soft, dreamy light.

[Image: redwood forest on the Big Sur coast]

This redwood forest is one of our favorite stops on California’s Big Sur coast.

Combination Light – Direct and Diffused

Here is an example of combination light, both direct and diffused. This was shot at Mt. Whitney, the highest mountain in the contiguous United States, in the Eastern Sierra. This image depicts a highly unusual phenomenon: rays of morning horizontal sunlight were shining from behind us while we were shooting, and only a portion of the mountain was shaded or diffused by the clouds overhead, creating a spotlight effect.

[Image: spotlight effect on Mt. Whitney]

This shot was a result of several hours of “waiting for the light” and we were greatly surprised and rewarded for our efforts.

Manmade Light

You don’t really think of manmade light in landscape photography, but here is a great example!

This image was captured on the Big Sur coast at dusk. There were rows of cars waiting to get through a construction site. As the cars were let through, we captured the row of car lights with a long exposure and the camera mounted on a tripod.

[Image: car light trails on the Big Sur highway]

Photography Exercise

Try shooting the same subject in the exact same location before sunrise and after sunset. Notice the differences in the light? Are the color and tone different? Do the details look different in the light areas and in the shadows? Comment below and let me know how you do. Enjoy!

The post Landscape Photography: It’s All About the Light by Holly Higbee-Jansen appeared first on Digital Photography School.



 

 

Congress is considering a copyright small claims bill you should know about

07 Oct
Photo by Dennis Skley

A bill has reached Congress that aims to establish a cheaper route for those seeking settlement of small claims in copyright infringement cases. Put forward by a bipartisan group of representatives, the Copyright Alternative in Small Claims Enforcement Act of 2017 (CASE) intends to provide a more viable alternative to federal courts for those making relatively small claims, in cases where the cost of pursuing compensation deters individual photographers and small to medium sized business owners.

The current system can cost professional photographers wishing to file a claim for unauthorized use of an image almost a year’s earnings, according to a report by Copyright Defence, and copyright lawyers are unwilling to take on a case in which damages would be less than $30,000.

Copyright Defence says that the average claim made by photographers is $3,000 or less, making the pursuit of offenders impractical and letting infringers off scot-free.

The new bill proposes that a small-claims style panel be set up within the Copyright Office that would allow these low-value, high-volume disputes to be heard. Such a panel would benefit not only photographers and artists, but also musicians, filmmakers and anyone who produces creative work.

Brought to Congress by Hakeem Jeffries, a Democrat, and Tom Marino, a Republican, the bill is supported by the American Society of Media Photographers, American Photographic Artists, the National Press Photographers Association, Professional Photographers of America and the North American Nature Photography Association, among others. The bill was first proposed by a collection of visual artists groups in February 2016.

Press release from Hakeem Jeffries:

Reps. Jeffries, Marino Lead Bipartisan Effort to Help Musicians and Artists Protect Their Creative Work

WASHINGTON, DC – A bipartisan solution to help artists, photographers, filmmakers, musicians, songwriters, authors and other creators protect their life’s work from unauthorized reproduction has been introduced today by two key members of the House Judiciary Committee — U.S. Representative Hakeem Jeffries (NY-08), a Democrat, and U.S. Representative Tom Marino (PA-10), a Republican.

The Copyright Alternative in Small-Claims Enforcement (CASE) Act of 2017 will create a Copyright Claims Board (“CCB”) in order to provide a simple, quick and less expensive forum for copyright owners to enforce their intellectual property. The majority of the copyright owners that are affected by piracy and theft are independent creators with small copyright infringement claims. The CCB will establish an alternative forum to the Federal District Court for copyright owners to protect their work from infringement.

A broad coalition of legislators has co-sponsored the bill, including Democratic Congresswoman Judy Chu (CA-27), Republican Congressman Doug Collins (GA-9), Democratic Congressman Ted Lieu (CA-33) and Republican Congressman Lamar Smith (TX-21).

Rep. Jeffries said: “The establishment of the Copyright Claims Board is critical for the creative middle class who deserve to benefit from the fruits of their labor. Copyright enforcement is essential to ensure that these artists, writers, musicians and other creators are able to commercialize their creative work in order to earn a livelihood. The CASE Act will enable creators to enforce copyright protected content in a fair, timely and affordable manner. This legislation is a strong step in the right direction.”

Representative Marino said: “Creators, solo entrepreneurs, photographers, and artists often struggle to enforce their copyright in a timely and cost efficient manner. This can hinder creativity and prevent these professionals from being able to sustain a profitable livelihood. The CASE Act provides a boost to copyright holders and allows a forum for timely resolutions. This is a positive step in the right direction.”

Representative Collins said: “America’s economic leadership depends on its commitment to protecting intellectual property, and I’m proud to work with my friend Congressman Hakeem Jeffries to provide another tool to make this possible. A copyright small claims system would offer small creators a simple, effective forum for defending their property rights against infringement. We’re working to modernize the Copyright Office to meet the needs of today and tomorrow—including music licensing structures—and this bill is a critical step in strengthening intellectual property protections for creators who find themselves disadvantaged by existing policies.”

Representative Lieu said: “More than 2 million hardworking artists in the United States rely on the U.S. Copyright Office to protect their livelihoods. For too long, our legal system skewed in favor of low-volume, high-value industries. But for many independent artists, whose claims of infringement often total a few thousand dollars, it is far too expensive to sue in federal court – essentially forcing creators to forfeit their rights. The Small Claims Board is an important step toward ensuring that digital photographers, graphic artists, illustrators, and others have a way to resolve disputes quickly and affordably. I commend my colleagues on both sides of the aisle for supporting this crucial effort.”

Representative Smith said: “Our founders enshrined copyright protection for creators’ works in the Constitution. The Copyright Alternative in Small-Claims Enforcement Act offers creators an efficient and cost-effective process to protect their creations. I look forward to working with the authors of the bill to protect the intellectual property of all innovators.”

Representative Chu said: “Creators like artists, photographers, and songwriters contribute over a trillion dollars to our economy each year. But intellectual property theft makes it difficult for creators to earn a living. This is especially true for small and individual creators who depend on licensing and copyright, but lack the resources to adequately challenge copyright infringement claims in federal court. I’m proud to support the CASE Act because it proposes a common sense solution that will make it easier for creators to protect their intellectual property and continue to share their works and grow our economy.”

Participation in the CCB will be voluntary, and respondents will have the ability to opt out. The CCB will be housed within the U.S. Copyright Office, and its jurisdiction limited to civil copyright cases with a cap of $30,000 in damages. A panel of three Copyright Claims Officers will be designated to adjudicate and settle copyright claims. The simplified proceedings do not require the parties to appear in-person and will permit them to proceed pro se – i.e., without an attorney.

The bill is supported by the Authors Guild, American Society of Media Photographers, American Photographic Artists, National Press Photographers Association, Professional Photographers of America, North American Nature Photography Association, Songwriters Guild of America, Nashville Songwriters Association International, National Music Publishers Association, Digital Media Licensing Association, Graphic Artists Guild, Creative Future, and the Copyright Alliance.


 

 

Nine things you should know about the Google Pixel 2

07 Oct


With all the hype surrounding the release of the Google Pixel 2 and Pixel 2 XL and their “world’s highest rated smartphone camera,” it’s easy to miss the forest for the trees. What’s important about this new phone? Where did Google leave us wanting more? How is this phone’s camera better than its predecessor? And why should photographers care about the technology baked into Google’s new flagship?

After covering the launch in detail and spending some time with the Pixel 2 in San Francisco, we’re setting out to answer those questions (and a few others) for you.

Dual Pixel AF

The new Pixel phones sport a very clever feature found on higher-end Canon cameras: split left- and right-looking pixels behind each microlens on the camera sensor. This allows the camera to sample left and right perspectives behind the lens, which can then be used to focus the camera faster on the subject (it’s essentially a form of phase-detect AF).

It’s officially called dual pixel autofocus, and it has the potential to offer a number of advantages over the ‘focus pixels’ Apple phones use: every pixel can be dedicated to focus without any impact to image quality (see this illustration). We’ve been impressed with its implementation on the Samsung Galaxy S7 and on Canon cameras. So we’re expecting fast autofocus for stills, even in low light, as well as very smooth autofocus in video with little to no hunting. Given how good the Pixel 2’s stabilized 4K video is, you might even make some professional-looking clips from these new phones.
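
As a rough sketch of the principle (our own toy model, not Google’s or Canon’s implementation): when the subject is out of focus, the image seen by the left-looking pixels and the image seen by the right-looking pixels shift in opposite directions, and a simple matching search recovers both the direction and the size of the focus correction in one step.

```python
import math

# Toy 1-D "scene": a single small highlight in the frame.
N = 200
scene = [math.exp(-((i - 100) / 5.0) ** 2) for i in range(N)]

def rolled(sig, k):
    """Circularly shift a signal k samples to the right."""
    return [sig[(i - k) % N] for i in range(N)]

defocus = 7                        # the unknown quantity the AF must recover
left = rolled(scene, -defocus)     # left-looking pixels see the image shifted one way
right = rolled(scene, defocus)     # right-looking pixels see it shifted the other way

# Slide one sub-image over the other and pick the offset that matches best.
def mismatch(k):
    return sum((l - r) ** 2 for l, r in zip(left, rolled(right, k)))

best = min(range(-20, 21), key=mismatch)
print(best)   # -14: magnitude is 2 * defocus, sign gives the focus direction
```

Contrast-detect AF, by comparison, has to hunt back and forth because a single image tells it neither how far off focus is nor in which direction.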

Dual pixel + machine learning driven portraits

The split pixels have another function: the left-looking and right-looking pixels underneath each microlens essentially sample two different perspectives that are slightly shifted from one another. Google then builds a rudimentary depth map using this set of separated images and some help from its machine learning algorithms.

Clever. However, the stereo disparity between the two images is likely to be very small compared to a dual camera setup, which is likely to make it difficult for the Pixel 2 cameras to distinguish background from subject for more distant subjects. This might explain the poor results in DXO’s comparison, but better results in the image above, where Allison is much closer to the camera.
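
A back-of-the-envelope calculation shows why, using the classic stereo relation disparity = baseline × focal length / depth. All the numbers below are made-up stand-ins, not actual Pixel 2 specs:

```python
# Rough illustration of why a tiny stereo baseline struggles at distance:
# disparity shrinks linearly as the subject moves away, and a sub-pixel
# disparity is very hard to turn into a reliable depth estimate.
def disparity_pixels(baseline_mm, focal_mm, depth_m, pixel_pitch_um):
    # Classic pinhole-stereo relation: disparity = baseline * focal / depth
    disparity_mm = baseline_mm * focal_mm / (depth_m * 1000.0)
    return disparity_mm / (pixel_pitch_um / 1000.0)

# A dual-camera-style baseline vs. a split-pixel-style baseline, subject at 2 m:
print(disparity_pixels(10.0, 4.0, 2.0, 1.4))  # ~14 px: easy to separate
print(disparity_pixels(1.0, 4.0, 2.0, 1.4))   # ~1.4 px: barely measurable
```

With the effective baseline of split pixels limited to a fraction of the lens aperture, the machine learning step is doing a lot of the heavy lifting for distant subjects.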

On the plus side, Portrait mode now renders full resolution 12MP files (you only got 5MP files on the original Pixels), and the ‘lens blur’ Google uses is generally more pleasing than Apple’s more Gaussian blur. Out of focus highlights are rendered as more defined circles compared to Apple’s results. This comes at a cost though: the blurring algorithm is computationally intensive so you’ll generally wait a few seconds before seeing the result (and you can’t see it in real time as you can with Apple).

Hardier hardware

Unsurprisingly, if you’ve been following the rumor mill, the hardware specs on the new Pixel 2 phones didn’t particularly impress beyond what we’ve seen from other phones. They’re nice devices, and both are far more durable with IP67 ratings (a huge step up from the poor IP53 ratings of the previous Pixel phones, which were prone to quick wear and tear), but hardware-wise there’s not too much to be excited about.

We’ve lost the headphone jack but gained stereo speakers in the front. The XL has less of a bezel, but it’s still not as bezel-less as Samsung phones. No dual-cameras. RAM and processor are what you get in other Android phones. You can invoke the Assistant with a squeeze, but… well…

Nothing really stands out. But wait, there’s more to the story.

AI First

If there’s one point Google CEO Sundar Pichai continuously makes in his presentations, it’s that we’re moving from a ‘Mobile First’ to an ‘AI First’ world. He’s referring to the move away from thinking of mobile devices simply as pocketable computation devices but, instead, intelligent devices that can adapt to our needs and make our lives easier. And Google is a leader here, thanks to the intelligence it acquires from its search services and apps like Maps and Photos.

AI is increasingly being used in many services to make them better, but often transparently. CEO Pichai recently cited an example from the Fitness app: every time he opens it, he navigates to a different page. But rather than have the app team change the default page, or add an option to do so, he figures AI should just learn your preference transparently.

What’s that mean for photography and videography? We’re purely speculating here, but, imagine a camera learning your taste in photography by the way you edit photos. Or the photos you take. Or the filters you apply. Or the photos you ‘like’. How about learning your taste in music so when Google Assistant auto-builds videos from your library of photos and videos, they’re cut to music you like?

The possibilities are endless, and we’re likely to see lots of cool things make their way into the new Pixel phones, like…

Google Lens

Sundar Pichai first talked about Google Lens at the I/O Developer Conference earlier this year. It marries machine vision and AI, and is now available for the first time in the Photos app and within Google Assistant on the new Pixel phones. Google’s machine vision algorithms can analyze what the camera sees, and use AI to do cool things like identify what type of flower you’re pointing your camera at.

This sort of intelligence is applicable to photography as well: Pichai talked about how AutoML has improved Google’s ability to automatically identify objects in a scene. Anything from a fence to a motorbike to types of food to your face: Google is getting increasingly better at identifying these objects and understanding what they are – automatically using reinforcement learning.

And once you understand what an object is, you can do all sorts of cool things. Remove it. Re-light it. Identify it so you can easily search for it without ever keywording your photos. The Photos app can already pull up pictures of planes, birthdays, food, wine, you name it. We look forward to seeing how the inclusion of Google Lens in the new phones makes Photos and Assistant better.

Maybe intelligent object recognition could even fix flare issues by understanding what flare is… though this may not be necessary for the new phone…

Goodbye ugly lens flare

Thankfully, the nasty flare issues that plagued the first-gen Pixel phones appear to be remedied by lifting the camera module above the glass backing, which has also been reduced and streamlined to fit flush with the rest of the phone.

The camera unit is raised ever-so-slightly from the back, but that’s a compromise we’re willing to accept if it means the camera isn’t behind a piece of uncoated glass – a recipe for flare disaster. The only flare we’ve seen so far in our limited hands-on time is what DXO witnessed in their report: the lens element reflections in corners you sometimes see even in professional lenses. That’s something we’ll gladly put up with (and that some of us even like).

If flare bugged you on the previous Pixel phones (it certainly bugged me), consider it a non-issue on the new phones.

Incredibly smooth video

When the original Pixel launched, Google claimed its camera beat other cameras with optical image stabilization (OIS) despite lacking OIS. It claimed its software-based stabilization approach allowed it to get better with time as algorithms got better. Omitting OIS was also crucial to keeping the camera small such that it fit within the slim body.

Google is singing a different tune this year, including both OIS and electronic image stabilization (EIS) in its larger camera unit that extends ever-so-slightly above the back glass. And the results appear to be quite impressive. The original Pixels already had very good stabilization in video (even 4K), but combining OIS + EIS appears to have made the video results even smoother.

For low light photography, OIS should help steady the camera for longer shutter speeds. You should also get better macro results and better document scanning. Hey, that’s worth something.

Equally important as what the new phones offer is what they don’t offer…

Color management? HEIF?

Notably absent was any talk about proper color management on the new phones. The previous Pixels had beautiful OLED displays, but colors were wildly inaccurate and often too saturated due to lack of any color management or proper calibrated display modes.

iPhones have some of the most color accurate screens out there. Their wide gamut screens now cover most of DCI-P3 but, more importantly, iOS can automatically switch the screen’s gamut between properly calibrated DCI-P3 and standard gamut (sRGB) modes on-the-fly based on content.

This means you view photos and movies as they were intended. It also means when you send an image from your iPhone to be printed (using a service that at least understands color management, like Apple’s print services), the print comes back looking similar, though perhaps a bit dimmer.*

The Samsung Galaxy S8 also has calibrated DCI-P3 and sRGB modes, though you have to manually switch between them. Google made no mention of calibrated display modes or proper color management for the new Pixel phones, though Android Oreo does at least support color management (though, like Windows, it leaves implementation up to apps). But without a proper display profile, we’re not sure how one will get accurate colors on the Pixel 2 phones.


*That’s only because prints aren’t generally illuminated as much as bright backlit LCDs that these days reach anywhere from 6 to 10 times the brightness prints are generally viewed at.

HDR display?

Sadly there was no mention of 10-bit images or HDR display of photos or videos (using the HDR10 or Dolby Vision standards) at Google’s press event. This leaves much to be desired.

The iPhone X will play back HDR video content using multiple streaming services, but more importantly for photographers it will display photos in HDR mode as well. Remember, this has little to do with HDR capture but, instead, the proper display of photos on displays—like OLED—that can reproduce a wider range of tones.

To put it bluntly: photos taken on an iPhone X and viewed on an iPhone X will look more brilliant and have more pop than anything else you’re likely to have seen before thanks to the support for HDR display and accurate color. It’s a big deal, and Google seems to have missed the boat entirely here.

HDR displays require less of the tonemapping traditional HDR capture algorithms employ (though HDR capture is still usually beneficial, since it preserves highlights and decreases noise in shadows). Instead of brightening shadows and darkening bright skies after capture, as HDR algorithms like the Pixel 2’s are known to do, leaving many of these tones alone is the way to go with high dynamic range displays like OLED.

In other words, an image with brighter highlights and darker shadows may in fact be better suited for HDR displays like the Pixel 2’s, as long as there’s still color information present in the shadows and highlights of the (ideally 10-bit) image. Unfortunately, Google made no mention of a proper camera-to-display workflow for HDR capture and display.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Nine things you should know about the Google Pixel 2

Posted in Uncategorized

 

Why should you care about the Sony RX10 IV? Phase detection autofocus, that’s why

16 Sep

The Sony RX10 IV is a fixed lens camera with a 1″-type sensor and 24-600mm equivalent lens that can shoot 4K video or stills at 24 fps, but that’s not what we think is interesting about it. The addition of phase detection autofocus is pivotal to all of those features. If you have a little over a minute to spare, we’ll tell you why. And for bonus points, we shot this video entirely hand-held with an RX10 IV and continuous AF turned on.

Sony RX10 IV impressions, sample images and more

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Why should you care about the Sony RX10 IV? Phase detection autofocus, that’s why

Posted in Uncategorized

 

Netflix acquires rights to Kodachrome: a movie about the final days of the iconic film

16 Sep
Photo courtesy Toronto International Film Festival (TIFF)

Netflix has acquired the rights to Kodachrome, an upcoming Jason Sudeikis movie about the last days of the Kodachrome film era. The news was first reported by Deadline, which claims that Netflix paid $4 million for the rights and plans a widespread theatrical release that could cover theaters in major regions around the world—including the US, UK, Canada, and Japan.

Kodachrome the movie revolves around a father and son on a road trip to get to one of Kodak’s photo processing labs before it closes down forever. The screenplay was inspired by a New York Times article about the last lab in the world that was processing the now-iconic film stock; in the movie, the characters are racing against time to try and get four rolls developed before it’s too late.

True to the film’s theme, Kodachrome was shot on film, not digital, and features the acting talents of Jason Sudeikis, Ed Harris, and Elizabeth Olsen. Here’s hoping it comes to a theatre near you… and pays proper tribute to the analog legend.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Netflix acquires rights to Kodachrome: a movie about the final days of the iconic film

Posted in Uncategorized

 

Raw bit depth is about dynamic range, not the number of colors you get to capture

03 Sep
Shooting this image in 14-bit helped retain the full dynamic range captured by the sensor. Most of the time, with most cameras, 12-bit is enough.

Raw bit depth is often discussed as if more were always better and greater depth improved image quality, but that’s not really the case. In fact, if your camera doesn’t need greater bit depth then you’ll just end up using hard drive space to record noise.

In fairness, it does sound as if bit depth is about the subtlety of color you can capture. After all, a 12-bit Raw file can record each pixel brightness with 4096 steps of subtlety, whereas a 14-bit one can capture tonal information with 16,384 levels of precision. But, as it turns out, that’s not really what ends up mattering. Instead, bit depth is primarily about how much of your camera’s captured dynamic range can be retained.

Much of this comes down to one factor: unlike our perception of brightness, Raw files are linear, not logarithmic. Let me explain why this matters.

Half the values in your Raw file are devoted to the brightest stop of light you captured

The human visual system (which includes the brain’s processing of the signals it gets from the eyes) interprets light in a non-linear manner: double the brightness of a light source by, say, turning on a second, identical light, and the perceptual difference isn’t that things have got twice as bright. Similarly, we’re much better at distinguishing subtle differences in midtones than we are vast differences in bright ones. This is part of the way we’re able to cope with the high dynamic ranges in the scenes we encounter.

Digital sensors are different in this respect: double the light and you’ll get double the number of electrons released by the sensor, which results in double the value generated by the analogue-to-digital conversion process.

This diagram shows how the linear response of a digital sensor maps to the number of EV you can potentially capture. Note how the brightest stop of light takes up 1/2 of the available values of your Raw file.

Why does this matter? Because it means that half the values in your Raw file (the values from 2048 to 4095 in a 12-bit Raw file) are devoted to the brightest stop of light you captured. Which, with most typical tone curves, ends up translating to a series of near-indistinguishably bright tones in the final image. The next stop of light takes up the next 1024 values, and the third stop is recorded with the next 512, taking half of the remaining values each time.
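The arithmetic above can be sketched in a few lines of Python (purely illustrative, not anyone’s actual Raw-processing code): each stop below clipping gets half the values the previous one did.

```python
def values_per_stop(bits, stops):
    """For a linear bits-deep file, return (stop, value_count) pairs,
    counting down from clipping: each stop gets half the remaining values."""
    total = 2 ** bits
    return [(stop, total // 2 ** stop) for stop in range(1, stops + 1)]

# The brightest stop gets 2048 of a 12-bit file's 4096 values,
# the next 1024, then 512, 256, 128...
for stop, count in values_per_stop(bits=12, stops=5):
    print(f"stop {stop} below clipping: {count} raw values")
```

Note that the top three stops alone account for 2048 + 1024 + 512 = 3584 values, which is the 7/8ths of a 12-bit file mentioned below.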

In a typical out-of-camera JPEG rendering, the first ~3.5EV are captured above middle grey, and the first three of these stops of highlights have used up 7/8th of your available Raw values. The remaining Raw values are used to capture tones from just above middle grey all the way down to black.

Using the D750’s default JPEG tone curve as an example, you can see that around 3.5EV of the camera’s dynamic range is used for tones above middle grey. Half the Raw values are used to capture the tones that end up being JPEG values of roughly 240 upwards, and more than 7/8ths of the available values go to tones above middle grey.

Follow this logic onwards and you’ll see that the difference between 12 and 14-bit Raw has less to do with subtle transitions (after all, even in the example I describe, the tones around middle brightness would be encoded using 256 levels: the same number of steps used for the entire dynamic range of the image if saved as a JPEG or viewed on most 8-bit monitors). Instead it has much more to do with having enough Raw values left to encode shadow detail.

By the time you’ve created a JPEG, the brightest stop of your image is likely to be made up from the tones in this image. Half of your Raw file was used for storing just these near-white tones.

Since every additional ‘bit’ of data doubles the number of available Raw values, but the brightest stop of light takes up half of your Raw values, you can see that all of those additional values increase the capacity of your Raw file by 1EV. Which, assuming neither you nor your camera’s exposure calibration are completely mad, ends up meaning an extra stop in the shadows.*

A 14-bit Raw file won’t generally give you extra highlight capture; it’ll mean having sufficient Raw numbers left to capture detail in the shadows. And if your camera is swamped by noise before you get to 14EV (most are), all this extra data will effectively be used to record shadow noise.

In other words, 12 bits provide enough room to encode roughly 12 stops of dynamic range, while 14 bits give the extra space to capture up to around 14EV. Or to look at it from the opposite perspective: if your camera is overwhelmed by noise before you get to 12 stops of DR, you don’t benefit from more bit depth: all you’d be doing is capturing the shadow noise in your image in greater detail.
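A quick way to see why N bits pairs with roughly N stops is to count how many raw values are left to describe everything from a given stop down to black. This simplified sketch ignores noise and any black-level offset:

```python
def values_at_or_below(bits, stops_down):
    """Number of linear raw values available for all tones at or below
    a level 'stops_down' stops under clipping (minimum 1 = pure black)."""
    return max((2 ** bits) // (2 ** stops_down), 1)

# A 12-bit file has a single value left 12 stops down (no detail possible),
# while a 14-bit file still has 4 values to describe those deep shadows.
for bits in (12, 14):
    print(f"{bits}-bit file, 12 stops down: {values_at_or_below(bits, 12)} value(s) left")
```

Run the 14-bit case 14 stops down and it too bottoms out at a single value: that's where each format runs out of room.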

Bit depth in video

It’s a similar story in video. Because video capture is so data intensive, it’s not usually practical to try to save all the captured data, which usually means crushing everything down to just 8 or 10 bits.

Log gamma is a way of taking the linear data captured by the sensor and reformatting it so that each stop of captured light is given the same amount of values in the smaller file. This makes more sensible use of the file space and retains as much processing flexibility as possible.
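As a sketch of the idea (a pure-log curve chosen for illustration, not any manufacturer’s actual log gamma), here’s how log encoding hands each stop an equal slice of the output range:

```python
import math

def log_encode(linear, total_stops=12):
    """Map a linear sensor value in (0, 1] to [0, 1] so that each stop
    of light occupies an equal 1/total_stops slice of the encoded range."""
    if linear <= 0:
        return 0.0
    return max(0.0, 1.0 + math.log2(linear) / total_stops)

# Every halving of the light drops the encoded value by the same step
# (1/12 here), instead of taking half the remaining values as a linear file does.
for stop in range(4):
    print(f"{stop} stop(s) below clipping -> encoded {log_encode(0.5 ** stop):.3f}")
```

Quantize that encoded range to 10 bits and each stop gets the same number of codes, which is exactly the even allocation the article describes.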

And, even if you own, say, a Sony a7S (one of the few cameras we’ve encountered that has sufficiently large/clean pixels that it doesn’t have enough bit depth to capture its full dynamic range at base ISO), you need to remember that you only get the camera’s full DR at base ISO. As soon as you increase the ISO setting, you’ll amplify the brightest stop of captured data beyond clipping, such that you very quickly get to the stage where you’re losing 1EV of DR for every 1EV increase in ISO.

If your camera doesn’t capture more than 12 stops of DR, you probably shouldn’t clamor for 14-bit Raw

So, even though you started with a camera whose DR outstrips its bit depth, that stops being true as soon as you hike up the ISO: instead you just go back to encoding shadow noise with tremendous precision.

Consequently, if your camera doesn’t capture more than 12 stops of DR, you probably shouldn’t clamor for 14-bit Raw: it’s not going to increase the subtlety of gradation in your final images (especially not if you’re viewing them as 8-bit). All those extra bits would do is increase the amount of storage you’re using by around 16% with all of that space being devoted to an archive of noise.


Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Raw bit depth is about dynamic range, not the number of colors you get to capture

Posted in Uncategorized

 

Three Good Reasons To Learn More About Photography

28 Jul

Photography has become so popular, mainly because of the cameras in mobile phones, that it’s now more difficult for your photos to be noticed. But if you learn a little more about photography, your photos will be more likely to stand out from the crowd.

Your life is full of gadgets and equipment that can be challenging to learn to use really well. Learning to use your camera will make your photography so much more enjoyable. Photography is therapy. Picking up your camera, making time to take photos, can be a wonderful break from the busy pace of your daily life.


Committing even a small amount of time regularly to learn more about photography will help you enjoy the creative process of image making. It will help overcome frustrations you may have because you don’t understand your camera well enough. As you study you will find that your creative ideas and expression will come more naturally. And, as you know and understand more and begin to relax when you have your camera in your hands, you will find a personal groove and means of expression that will be unique to you.

So here are three really good reasons for you to learn more about photography.

#1. Create outstanding photos

Most of us love to share our photos and see the response family and friends have to them. Even more exciting is when strangers begin to show appreciation for our photographs. The desire to have your photos seen and enjoyed by others can be real motivation to enjoy photography. But getting your photographs noticed is not so easy.


This has become more of a challenge in recent years because pretty much everybody has some form of camera these days. Social media has made it extremely easy to share photos and have them seen by a potentially global audience. But how do you get your photos noticed when everyone else is sharing their photos in the same way?

Take some time to learn more. Learning about light, exposure, color, tone, composition and timing will help you produce more creative, more interesting, more noticeable photographs. And, if you think about it, you probably know something about these things already, because you see them all the time, but are not necessarily thinking about them.

You can’t see anything if there’s no light. Light is the essence of photography: with no light, you can have no photo. Learning to appreciate different types of light, and when one kind of light is better for making photos than another, will help you create more outstanding photographs. You see light all the time, and if you begin to understand it and appreciate how to expose your photographs well, you will create more compelling images. Knowing something of the limitations of your camera and how it captures tone and color will also help greatly in the creative process.


Compose and time your photos better

Learning composition rules and developing a real feel for them will also help your photographs be more impactful. Like with any creative expression, learning the rules will allow you to eventually implement them without really thinking about them. This is when I believe you will become most creative.

Certainly timing your photographs well takes research and practice. Learning to anticipate action and choose precisely the best time to make a photograph, the decisive moment, is a skill that will certainly enhance your photography and make it stand out.

#2. Become intimate with your equipment

Learning how to use your camera well and becoming confident will result in a more enjoyable and more creative photography experience. I have met (and taught) many people who own very nice cameras but are not confident in using them. If you don’t have a good understanding of your camera you will most likely become somewhat frustrated when you pick it up to use it, or later when you are looking at disappointing photos.


Becoming familiar with your camera and how to use it well takes time and commitment to study. Because each camera model is different, with the controls in different places, you need to do some research and hands-on practice to use your camera with confidence.

Essentially all cameras are the same. They function the same way, with light hitting the sensor (or film) to create photographs. Whether you use a camera in any of the automatic modes, or prefer to use it in Manual Mode, the process of creating photos is the same, but the amount of creative control differs greatly.

Setting your exposure manually gives you far more control over the end result. Learning to do this takes a bit more dedication but will ultimately result in you making more unique, creative photographs. If your camera is always set to one of the automatic modes then the camera is making some (or all) of the most creative choices. Cameras are smart, but they are not creative – you are.


Learning to take control of the camera will help you enjoy the creative process of photography far more than if you have to stop and think about the basics of what to do each time you pick up your camera.

#3. Photography can be therapeutic

Having creative drive, wanting to make good photos and have others enjoy them, will hopefully lead you to want to learn more about using your camera well. Doing that will free you up to enjoy your whole photography experience and you can then experience photography as a therapy.


Expressing your creativity with a camera you understand and love is very therapeutic. Taking time out from your busy day, even just for 10 or 15 minutes, to take a few photographs can be enjoyable and relaxing. Indulging in longer photography sessions on weekends or during vacations can be terrifically therapeutic.

I find when I pick my camera up to shoot for pleasure (it’s different shooting for work, when I have a client to please), I can easily become absorbed in making photographs, and nothing else matters! Being able to really zone in on what I am doing helps me forget the worries and stresses I may be experiencing in life and just enjoy the process of being creative.

Narrowing the attention of your thoughts to the creative processes of photography, meditating on photography, brings a whole other dimension to the experience. Being aware of and intentionally seeking opportunities where you can use your camera creatively can help you relax differently than other activities you may enjoy. Watching the news on TV, checking social media, or going to a movie are all things that add a change of pace to your daily life. But a lot of what you do to relax does not involve being creative. Being creative with your camera adds a whole new dimension to life and can be most therapeutic.


Conclusion

Having the desire and drive to want your photos to be noticed when you share them is a good reason to learn more about photography. Overcoming the frustration you may feel because you haven’t taken the time to learn how to use your camera is another solid reason to invest some time, and maybe even some money, in learning how your camera functions and how you can control it better (preferably in Manual Mode).

Once you are on the path to learning more about your camera and about photography, knowing that it can be wonderfully therapeutic should encourage you to follow some course of study to make the most of your camera – even your phone camera!

What other good reasons do you have for learning more about photography? Please share in the comments below.

The post Three Good Reasons To Learn More About Photography by Kevin Landwer-Johan appeared first on Digital Photography School.


Digital Photography School

 
Comments Off on Three Good Reasons To Learn More About Photography

Posted in Photography

 

Photokina’s new manager talks about the future of the trade show

18 Jul

Back in May, Photokina, the biennial photo industry trade show in Cologne, Germany, announced that it would become an annual event and include products and technologies beyond its historical focus of cameras and photography.

Now recently appointed show manager Christoph Menke is providing some background on the decision to change the dates of future shows, among other changes, in a short Q&A session with the internal PR team of Koelnmesse, the company that organizes Photokina.

You can read the full Q&A below, in case you’re curious:

What made Koelnmesse decide to change the show cycle from an every-other-year show to a yearly show?

Today, professionals and consumers view the subject of imaging completely differently than they did 10 years ago. Now virtual reality, wearables, tablets, mobile and smart home security are an integral part of the imaging world. The same applies to imaging software for editing, sorting and storing images, or even for CGI & sharing solutions.

As an imaging platform, we need to embrace those new technologies. As a part of this embrace, we acknowledge the significantly shorter innovation cycles of those new technologies compared to established capture technologies. For instance, the software industry has always been characterized by short development cycles. To offer these industries a suitable exhibition platform, the answer can only be a shorter cycle.

Based on surveys, we know that our visitors prefer an annual photokina. The annual show cycle will also put a more regular spotlight on other segments of our show, such as photo equipment, photo accessories and photo studio segments, and the brands represented there. They will benefit from more frequent exposure to the buyers, consumers and international media attending our show.

Why is photokina going to move to May in 2019 and the following years?

The photokina dates for the next 2 years are Sept 26-29, 2018 and May 8-11, 2019 (Wednesday to Saturday). The switch to the May dates starting in 2019 is the result of conversations with key accounts from all segments. The feedback we received indicated that the May dates will provide an ideal time frame to fully take advantage of international demand before the start of the summer season.

The Show will be shortened from six to four days – what will be the upside of this change?

Based on attendee surveys we conducted, we know that four show days are sufficient to see all the imaging technologies and content. Within those four days we create a more compact, and thereby more intense, show experience that is appreciated by both exhibitors and visitors. The fact that our customers will no longer have to wait two years for the next photokina had a significant impact on the decision to shorten the show.

Will the annual show cycle also mean changes to the content and focus of this event?

The changes in content and focus are what led to the structural changes. New technologies are accelerating in the innovation cycles in the imaging world. The annual show cycle is photokina’s response to a rapidly changing market place. Our mission is to provide a platform that shows the imaging technologies of the future and promotes the exchange between developers, engineers, start-ups and manufacturers.

Take video for example: In times of the YouTube-revamped trend towards amateur videos and an increasing convergence of the technologies for photo & video (4K-Grabbing), the moving picture is as important as it was in the first hour of photokina – hence the name. One of the highlights for the next event will be an Imaging Lab at photokina.

What has been the reaction of your photokina customers to the date change?

So far the responses are mostly positive. Budgets and logistics are certainly issues which have to be dealt with and we expect a transition process to adjust to the yearly dates. We are confident that the date change will provide an improved photokina for exhibitors and attendees alike.

The latest editions of Photokina were noticeably smaller and less busy than previous shows which is not much of a surprise given the decline of the camera market. Let’s hope the changes mentioned by Christoph Menke will help Photokina remain as relevant and vibrant as it has been throughout most of its existence.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Photokina’s new manager talks about the future of the trade show

Posted in Uncategorized