
Posts Tagged ‘devices’

Google shares a deep dive into its new HDR+ with Bracketing technology found in its latest Pixel devices

26 Apr

Google has shared an article on its AI Blog that dives into the intricacies of the HDR capabilities of its most recent Pixel devices. In it, Google explains how its HDR+ with Bracketing technology works to capture the best image quality possible through clever capture and computational editing techniques.

To kick off the article, Google explains how its new ‘under the hood’ HDR+ with Bracketing technology — first launched on the Pixel 4a 5G and Pixel 5 back in October — ‘works by merging images taken with different exposure times to improve image quality (especially in shadows), resulting in more natural colors, improved details and texture, and reduced noise.’

Using bursts to improve image quality. HDR+ starts from a burst of full-resolution raw images (left). Depending on conditions, between 2 and 15 images are aligned and merged into a computational raw image (middle). The merged image has reduced noise and increased dynamic range, leading to a higher quality final result (right). Caption and image via Google.

Before diving into how HDR+ with Bracketing captures its images, Google explains why high dynamic range (HDR) scenes are difficult to capture, particularly on mobile devices. ‘Because of the physical constraints of image sensors combined with limited signal in the shadows […] We can correctly expose either the shadows or the highlights, but not both at the same time.’

Left: The result of merging 12 short-exposure frames in Night Sight mode. Right: A single frame whose exposure time is 12 times longer than an individual short exposure. The longer exposure has significantly less noise in the shadows but sacrifices the highlights. Caption and image via Google.

Google says one way to combat this is to capture two different exposures and combine them — something ‘Photographers sometimes [do to] work around these limitations.’ While this works fairly well with larger-sensor cameras, whose images can be merged on the more capable processors inside tablets and laptops, Google says it’s a challenge on mobile devices because it requires ‘Capturing additional long exposure frames while maintaining the fast, predictable capture experience of the Pixel camera’ and ‘Taking advantage of long exposure frames while avoiding ghosting artifacts caused by motion between frames.’

Google was able to mitigate these issues with its original HDR+ technology by prioritizing the highlights in an image and using burst photography to reduce noise in the shadows. Google explains the HDR+ method ‘works well for scenes with moderate dynamic range, but breaks down for HDR scenes.’ As for why, Google breaks down the two types of noise that creep into an image when capturing bursts of photos: shot noise and read noise.

Google explains the differences in detail:

‘One important type of noise is called shot noise, which depends only on the total amount of light captured — the sum of N frames, each with E seconds of exposure time has the same amount of shot noise as a single frame exposed for N × E seconds. If this were the only type of noise present in captured images, burst photography would be as efficient as taking longer exposures. Unfortunately, a second type of noise, read noise, is introduced by the sensor every time a frame is captured. Read noise doesn’t depend on the amount of light captured but instead depends on the number of frames taken — that is, with each frame taken, an additional fixed amount of read noise is added.’


As visible in the above image, Google highlights ‘why using burst photography to reduce total noise isn’t as efficient as simply taking longer exposures: taking multiple frames can reduce the effect of shot noise, but will also increase read noise.’
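That trade-off is easy to sketch in a few lines of Python. The photon rate and read-noise figures below are illustrative assumptions, not measurements from any Pixel sensor; the point is only that shot-noise variance tracks total light while read-noise variance tracks frame count:

```python
import math

def snr(n_frames, exposure_s, photons_per_s=100.0, read_noise_e=3.0):
    """Signal-to-noise ratio of n_frames merged frames of exposure_s seconds.

    Shot-noise variance equals the total photon count (Poisson statistics),
    while every captured frame adds a fixed read-noise variance. The photon
    rate and read noise are illustrative, not real sensor values.
    """
    signal = n_frames * exposure_s * photons_per_s
    shot_var = signal                        # depends only on total light
    read_var = n_frames * read_noise_e ** 2  # grows with the frame count
    return signal / math.sqrt(shot_var + read_var)

# Same total light: a 12 x 1s burst vs. a single 12s exposure.
print(snr(12, 1.0))  # burst: lower SNR, read noise is added 12 times
print(snr(1, 12.0))  # single long exposure: higher SNR
```

With read noise set to zero the two strategies tie exactly, which is the ‘shot noise depends only on the total amount of light captured’ claim in Google’s quote.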

To address this shortcoming, Google explains how a ‘concentrated effort,’ building on recent ‘incremental improvements’ in exposure bracketing, allowed it to combine the burst photography component of HDR+ with the more traditional HDR method of exposure bracketing to get the best possible result in extreme high dynamic range scenes:

‘To start, adding bracketing to HDR+ required redesigning the capture strategy. Capturing is complicated by zero shutter lag (ZSL), which underpins the fast capture experience on Pixel. With ZSL, the frames displayed in the viewfinder before the shutter press are the frames we use for HDR+ burst merging. For bracketing, we capture an additional long exposure frame after the shutter press, which is not shown in the viewfinder. Note that holding the camera still for half a second after the shutter press to accommodate the long exposure can help improve image quality, even with a typical amount of handshake.’

Google explains how its Night Sight technology has also been improved through the use of its advanced bracketing technology. As visible in the illustration below, the original Night Sight mode captured 15 short exposure frames, which it merged to create the final image. Now, Night Sight with bracketing will capture 12 short and 3 long exposures before merging them, resulting in greater detail in the shadows.

Capture strategy for Night Sight. Top: The original Night Sight captured 15 short exposure frames. Bottom: Night Sight with bracketing captures 12 short and 3 long exposures. Caption and image via Google.

As for the merging process, Google says its technology chooses ‘one of the short frames as the reference frame to avoid potentially clipped highlights and motion blur.’ The remaining frames are then aligned with the reference frame before being merged.

To reduce ghosting artifacts caused by motion, Google says it’s designed a new spatial merge algorithm, similar to that used in its Super Res Zoom technology, ‘that decides per pixel whether image content should be merged or not.’ Unlike Super Res Zoom though, this new algorithm faces additional challenges due to the long exposure shots, which are more difficult to align with the reference frame due to blown out highlights, motion blur and different noise characteristics.

Left: Ghosting artifacts are visible around the silhouette of a moving person, when deghosting is disabled. Right: Robust merging produces a clean image. Caption and image via Google.
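A per-pixel merge decision of this general kind can be illustrated with a toy sketch. To be clear, this is not Google’s actual spatial merge algorithm — real pipelines operate on raw tiles with noise-aware similarity measures — and the function name and threshold are our own:

```python
def merge_with_deghosting(reference, aligned_long, threshold=0.1):
    """Per-pixel merge decision: blend the aligned long-exposure frame into
    the reference only where the two frames agree; where they disagree
    (likely motion), keep the reference pixel to avoid ghosting.

    An illustrative toy, not Google's actual spatial merge algorithm.
    """
    merged = []
    for ref_px, long_px in zip(reference, aligned_long):
        if abs(ref_px - long_px) < threshold:  # frames agree: merge them
            merged.append(0.5 * (ref_px + long_px))
        else:                                  # motion detected: keep reference
            merged.append(ref_px)
    return merged

# The middle pixel changed between frames (a moving subject), so it is not merged.
print(merge_with_deghosting([0.20, 0.20, 0.80], [0.22, 0.60, 0.82]))
```

Keeping the reference pixel wherever the frames disagree is what trades a little extra noise in moving regions for the clean, ghost-free silhouette shown in the comparison above.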

Google is confident it’s been able to overcome those challenges though, all while merging images even faster than before:

‘Despite those challenges, our algorithm is as robust to these issues as the original HDR+ and Super Res Zoom and doesn’t produce ghosting artifacts. At the same time, it merges images 40% faster than its predecessors. Because it merges RAW images early in the photographic pipeline, we were able to achieve all of those benefits while keeping the rest of processing and the signature HDR+ look unchanged. Furthermore, users who prefer to use computational RAW images can take advantage of those image quality and performance improvements.’

All of this is done behind the scenes without any need for the user to change settings. Google notes ‘depending on the dynamic range of the scene, and the presence of motion, HDR+ with bracketing chooses the best exposures to maximize image quality.’

Google’s HDR+ with Bracketing technology is found on its Pixel 4a 5G and Pixel 5 devices with the default camera app, Night Sight and Portrait modes. Pixel 4 and 4a devices also have it, but it’s limited to Night Sight mode. It’s also safe to assume this and further improvements will be available on Pixel devices going forward.

You can read Google’s entire blog post in detail on its AI blog at the link below:

HDR+ with Bracketing on Pixel Phones

Articles: Digital Photography Review (dpreview.com)

 

Apple unveils 5G iPhone 12 Pro, 12 Pro Max devices with larger screens, better cameras and more

14 Oct

Moments after revealing its iPhone 12 devices at today’s virtual event, Apple showed off the next generation of its flagship mobile devices, the iPhone 12 Pro and iPhone 12 Pro Max.

The two new models are constructed of surgical-grade stainless steel and use advanced physical vapor deposition (PVD) technology that results in a ‘spectacular luster.’ Despite having effectively the same physical dimensions as their predecessors, the iPhone 12 Pro and 12 Pro Max feature larger screens, 6.1” and 6.7”, respectively, as Apple has managed to further shrink the bezel.

The new iPhone 12 Pro models are based on Apple’s A14 Bionic chip, first seen in the company’s iPad Air refresh last month. The 5nm chipset features a 6-core CPU and a 4-core GPU that Apple claims is up to 50% faster than any other phone on the market.

As with the iPhone 12, both the Pro and Pro Max receive sub-6GHz and mmWave 5G connectivity, ensuring the devices should work with the array of 5G networks major carriers around the world are deploying. Apple has also brought its new ‘Ceramic Shield’ technology to its flagship devices, which should help reduce the likelihood of cracked screens.

As has been the case for most of Apple’s ‘Pro’ devices, the company put a huge emphasis on the camera capabilities of its latest flagships. The iPhone 12 Pro features a 12MP (13mm equivalent) ultrawide module, a 12MP F1.6 wide module and a 12MP (52mm equivalent) telephoto module. Apple says the wide module captures 27% more light than previous generations, aided by the larger aperture and seven-element lens. Optical image stabilization has also been improved, as has autofocus, thanks to the addition of a LiDAR scanner.

Apple pushed the photography boundaries further with the iPhone 12 Pro Max, fitting a 47% larger sensor that Apple says delivers an 87% improvement in low-light performance thanks to its larger 1.7µm pixels. Apple also swapped out the 12 Pro’s telephoto for a longer 2.5x (65mm equivalent) telephoto lens.

Apple has also added the ability to record HDR video, including the ability to shoot and edit in Dolby Vision HDR directly within the Photos app.

The iPhone 12 Pro will be available to pre-order on October 16, starting at $999. iPhone 12 Pro Max pre-orders will open on November 6 and pricing starts at $1,099. Both models are available in blue, gold, graphite and silver.

Apple ProRAW

In addition to the new hardware, Apple also revealed it will bring Apple ProRAW to its latest iPhone devices later this year. Apple says the new format will combine the benefits of its Deep Fusion and Smart HDR technology with the flexibility of a Raw file format when editing. The format can be captured with all of the onboard cameras.

To ensure third-party apps can also make the most of the new format, Apple will launch an API for both desktop and mobile apps for third-party developers to use. There’s no mention of when exactly we’ll see this update pushed to devices.

This story is developing. Refresh the page for the latest information.


Delkin Devices unveils new 2TB CFexpress Type B card with read speeds up to 1,730MB/s

11 Jun

Hot on the heels of ProGrade Digital’s new 1TB CFexpress Type B card, Delkin Devices has unveiled its new 2TB CFexpress Type B card, the highest-capacity CFexpress Type B card to date.

The new CFexpress Type B cards use a PCIe 3.0 interface and NVMe storage to achieve read and write speeds up to 1,730MB/s and 1,430MB/s, respectively, meaning it’s not only higher-capacity than ProGrade Digital’s ‘Gold label’ cards, but also faster. Delkin Devices says each of its cards undergoes ‘extensive testing to ensure full functionality and performance’ with the latest CFexpress compatible camera systems.

A compatibility chart from Delkin Devices showing which cameras the card has already been approved for use in and which camera models the cards are currently being tested with.

In addition to the standard lifetime warranty, these new cards also come with Delkin Devices’ 48HR Replacement Guarantee, which states that Delkin Devices ‘will happily replace any non-working card within 48 hours or less (not including weekends), prior to receiving your non-working card.’ If you have an authorized Delkin Devices retailer nearby, you can also pick up the replacement in person — just be sure to register your card.

No pricing or availability information has been given, but Delkin’s 1TB version of this card has a list price of $1,000 and is currently available for $700 at Adorama and $890 at B&H. Given that the press release is live, we expect it won’t take long before the 2TB model becomes available.

https://www.the-digital-picture.com/News/News-Post.aspx?News=34868


Lead engineer for computational imaging on Pixel devices leaves Google

14 May

According to a report by industry publication The Information, two key executives on Google’s Pixel team left the company earlier this year. One of them is the former General Manager of the Pixel Smartphones Business Unit, Mario Queiroz. According to his LinkedIn profile, he left Google at the end of January to take on the role of Executive Vice President at data security company Palo Alto Networks.

A few months earlier, and two months before the launch of the Pixel 4 devices in October 2019, he had already moved internally from the Pixel team into a role that directly reported to Google CEO Sundar Pichai.

From an imaging point of view, though, the second departure is more interesting: Marc Levoy, a Computer Science professor at Stanford University since 1990, joined Google as a Distinguished Engineer in 2014 and led the Pixel team that developed computational photography technologies for Pixel smartphones, including HDR+, Portrait Mode and Night Sight.

Since its inception, the Pixel smartphone series has excelled in the camera department, receiving positive camera reviews across the board. With Pixel phones using very similar camera hardware to their direct rivals, much of the Pixel’s camera success can likely be attributed to the innovative imaging software features mentioned above.

However, things look slightly different for the latest Pixel 4 generation that was launched in October 2019. While many of the software features and functions were updated and improved, the camera hardware looks a little old next to the top-end competition. Companies like Samsung, Huawei and Xiaomi offer larger sensors with higher resolutions and longer tele lenses, and combine those hardware features with computational imaging methods, achieving excellent results. The Pixel 4 is also one of very few high-end phones to not feature a dedicated ultra-wide camera.

The Pixel 4 camera is still excellent in many situations, but it’s hard to argue that Google has not, at least to a degree, lost the leadership role in mobile imaging that it had established with previous Pixel phone generations.

It looks like internally there has been some discontent with other aspects of the Pixel 4 hardware, too. The report from The Information also details some criticism from Google hardware lead Rick Osterloh on the Pixel 4 battery:

At a hardware team all-hands meeting in the fall, ahead of the October launch in New York, Osterloh informed staff about his own misgivings. He told them he did not agree with some of the decisions made about the phone, according to two people who were present at the meeting. In particular, he was disappointed in its battery power.

Battery and camera performance are likely only two out of a range of factors that caused Pixel 4 sales figures to decrease when compared to its predecessors. IDC estimates that Google shipped around 2 million Pixel 4 units in the first two quarters the phone was on sale, compared to 3.5 million Pixel 3 units and almost 3 million Pixel 3A devices.

These figures are also relatively small compared to the largest competitors: according to IDC, Apple sold a whopping 73.8 million iPhones in the fourth quarter of 2019, for example.

It’s not entirely clear, but likely, that the departures of Queiroz and Levoy are linked to the Pixel 4’s performance in the marketplace. What will it mean for future Pixel phones and their cameras? We will only know once we hold the Pixel 5 in our hands but we hope Google will continue to surprise us with new and innovative technologies that get the most out of smartphone cameras.


How to Repair Corrupted Videos Shot on Digital Cameras and Other Devices: A Step-by-Step Guide

18 Mar

The post How to Repair Corrupted Videos Shot on Digital Cameras and Other Devices: A Step-by-Step Guide appeared first on Digital Photography School. It was authored by Theodomentis Lucia.


Earlier today, I thought about playing some videos that I shot on my digital camera during my last vacation. Sadly, every time I tried to play them in VLC or Windows Media Player, I got a gray or green display on the screen instead. I realized that my videos were corrupt and set about finding a solution. To be honest, after some failed attempts, I was finally able to repair my corrupted videos. If you have encountered a similar situation, you may learn something from my experience.

Read on to find out how.

What could have caused your videos to become corrupt?

Before we head into the details on how to repair corrupted videos, it is important to discuss a few things in advance. You should know the major reasons why a video gets corrupt or damaged so that you can avoid it in the future.

These reasons include:

  • The transfer of videos from your digital camera to the computer was halted mid-transfer.
  • You may have restarted the system while the video was still playing in the background.
  • Sometimes a third-party tool, such as video editing software, can end up corrupting a file.
  • The location (drive or partition) where your video is stored could be corrupted.
  • The meta content or the header of the video might have been tampered with.
  • Forcefully changing the video extension or type can corrupt the file.
  • The audio and video components of the file might not be synced properly or could be missing.
  • You played the video with an unsupported media player, or there could be an issue with the video encoding.
  • Other logical issues related to video playback, picture, sound, etc. can also cause this problem.

How can you repair corrupted videos on Windows or Mac?

If your videos have been corrupted, then you need to look for the right tools to fix them. 

Since there are so many tools out there to repair corrupted videos, I asked a friend of mine, who is an expert in the field. He recommended Recoverit Video Repair, so I decided to give it a try as well. 

Primarily, Recoverit is a dedicated application for recovering lost or corrupt data of all kinds. However, it also has a dedicated video repair tool that can fix various issues related to a video file.

After getting to know these features, I wanted to give Recoverit a try and found its click-through process pretty easy. 

Once you have installed the Recoverit Video Repair application, you can follow these steps to fix your corrupt files.

Steps to repair corrupted videos and files

Step 1: Launch the Video Repair tool

If you have some videos to fix, launch the Recoverit application on your system and open the “Video Repair” tool from its home page. Also, attach your digital camera to the computer and move your damaged videos to the system.


Step 2: Add corrupt videos to repair

Once you launch the Recoverit Video Repair application, you can just click on the “add” button to load the corrupted videos.


This will further launch a browser window, letting you locate and load the videos that are damaged. If you want, you can load multiple videos and repair them at the same time.

Screenshot: adding the videos to repair.

Step 3: Start the repairing process

After you have added the corrupted videos to the application, the interface will let you know. You can view the details of the added videos and even remove them from here. 

If you are ready, then just click on the “Repair” button to commence the repairing process.


Step 4: Wait for the repairing process to be over

As soon as you click on the “Repair” button, the application starts fixing the loaded videos and displays the progress. Be patient and let the application complete the process.

Screenshot: waiting for the repair process.

Once the repairing process is completed, Recoverit will let you know by displaying the following prompt.


Step 5: View the repaired videos

You can now preview the results of the repair process by clicking on the play icon adjacent to the video.

Screenshot: viewing the repaired videos.

This opens a pop-up window with a video player that will let you play the repaired video. In this way, you can check the results of the application before saving the videos.


Step 6: Save the repaired videos

If you are satisfied with the results, then click on the “Save to Folder” button right next to the video. 

If you want, you can also click on the “Save All” button to save all the videos. This opens a browser window, letting you select a secure location to save the videos.


Step 7: Run an advanced video repair (optional)

If you are not satisfied with the standard video repair results, click on the Advanced Video Repair feature, which you can find at the bottom of the video player.

Screenshot: the Advanced Video Repair option.

To run an advanced repair, you need to load a sample video to the application. The sample video should be shot on the same device as the corrupted video and must be in the same format. 

After loading the sample video, you can start the advanced video repair process and view its results as well.


Conclusion

When using Recoverit, it’s easy to repair corrupted videos. Recoverit can fix all sorts of video issues, such as missing fragments, gray/green screens and videos that won’t load, and it supports a wide range of video formats, including MOV, AVI, FLV, 3GP, MP4, MKV and MTS. It can fix videos shot on a wide range of devices, including digital cameras, drones, camcorders and smartphones.

The video repair tool is available in three different purchase options: $29.95 a month, $39.95 a year, or $49.95 (lifetime purchase) for the Windows version. If you get it now, you can get 30% off Recoverit for Windows or Mac.

Download and Get 30% off Recoverit video repair now by using the exclusive coupon code: LENOP09

Disclaimer: Recoverit is a dPS paid partner.



Digital Photography School

 

Anker announces the first MFi certified LED flash cube for iPhone 11, 11 Pro devices

31 Dec

Chinese smartphone accessory manufacturer Anker has announced a new iPhone lighting accessory that connects to Apple mobile devices via the Lightning cable.

Last week, we reported, via 9to5Mac, that Apple could soon support Made for iPhone (MFi) lighting accessories. This new device from Anker confirms that report and comes as the first device that will work natively with Apple’s hardware and software.

The iPhone LED Flash, as it’s currently being called, will retail for $50 and work exclusively with iPhone 11 and iPhone 11 Pro models. According to Anker, the flash unit works with Apple’s stock camera app, as well as third-party camera apps, and is capable of firing off 10,000 shots per charge. When the battery is dead, it can recharge via a Lightning cable (although you can’t use the one that’s built-in, which is inconvenient).

Ignore the ‘December 27, 2019’ release date — it appears it won’t be out for another month.

Anker claims the light, which bears a striking resemblance to the Lume Cube, can help ‘illuminate objects at 2x the range and 4x the brightness’ compared to the LED flash modules onboard the latest iPhone models. It comes with a diffuser (also similar to Lume Cube) and features a standard 1/4”-20 tripod mount.

No definitive date is given for the launch, but 9to5Mac is reporting it will be available sometime in January. In the meantime, you can keep an eye on Anker’s website for more information. We have contacted Anker for a confirmation on the release date and will update the article accordingly if we receive a response.


Oppo demonstrates its under-display front camera in prototype devices

13 Dec

Back in June, Chinese smartphone maker Oppo announced a device with an under-display front camera at MWC Shanghai. This allows for a display design without a ‘notch’ or front camera ‘pinhole,’ but it also means that incoming light has to travel through the display before it hits the camera lens.

To make this possible, the display section above the camera is made of a highly transparent material and uses a redesigned pixel structure that is optimized for the transmittance of light. In addition, the camera comes with a bigger-than-usual sensor to further make up for any loss of light, and the white balance and HDR algorithms have been customized to reduce the transparent display’s impact.

This week, Oppo finally shared prototypes of devices with the new under-display camera with press and media, and thanks to a Twitter post by David Imel of Android Authority, we can see the new design fully in action.

The new under-display camera system is hardly visible on the front of the device and only becomes slightly visible when viewing the phone at certain angles. Of course, it’s way too early to make any judgments on image quality but in terms of usability, the new system appears to work just like any other smartphone front camera.

Just like their rear counterparts, front cameras have taken huge steps forward in terms of image quality over the last couple of years. However, manufacturers have not only been focused on image output but also on the integration into the device. We’ve seen notches and pinholes as well as motorized pop-up and flip-up front cameras. If Oppo’s concept catches on, 2020 might well be the year of the under-display front camera.


Sandmarc brings its anamorphic, tele, wide-angle and macro lenses to iPhone 11 devices

07 Dec

Smartphone accessory manufacturer Sandmarc has launched a new line of cases for Apple’s iPhone 11, 11 Pro and 11 Pro Max smartphones that enables its collection of lenses to work with the latest iOS devices. The new lineup works with Sandmarc’s anamorphic, telephoto, wide-angle and macro lenses.

The anamorphic lens is a 1.33x anamorphic lens that offers a 2.4:1 aspect ratio once the footage is de-squeezed from the 16:9 video the iPhone captures. The telephoto lens offers 2x magnification on the iPhone 11 and 4x magnification when paired with the telephoto camera module on the iPhone 11 Pro and 11 Pro Max.
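The de-squeeze arithmetic is easy to verify: stretching the iPhone’s 16:9 frame horizontally by the 1.33x squeeze factor gives roughly the quoted 2.4:1 ratio. The function below is just our own sanity check, not part of any Sandmarc software:

```python
def desqueezed_aspect(width=16.0, height=9.0, squeeze=1.33):
    """Aspect ratio after horizontally de-squeezing anamorphic footage."""
    return (width * squeeze) / height

print(round(desqueezed_aspect(), 2))  # 2.36, marketed as a ~2.4:1 ratio
```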

The macro lens will work with any of the camera modules on Apple’s latest iPhones, and a protective translucent lens hood will both protect the front element and diffuse light on the subject matter.

The wide-angle lens seems a bit unnecessary, considering all of the iPhone 11 models feature both a wide-angle and ultra-wide-angle lens. However, much like Moment’s new wide-angle lens, using Sandmarc’s wide-angle lens atop the standard wide-angle camera on the iPhone 11 devices means you can get ultra-wide-angle shots with Apple’s Night Mode, which is otherwise limited to the ‘standard’ wide-angle camera onboard the iPhone 11, 11 Pro and 11 Pro Max.

All of these lenses are compatible with Sandmarc’s collection of filters, including its hybrid filter, circular polarizer and ND filters. They are constructed of aluminum and feature multi-coated elements to reduce flare and ghosting. The anamorphic lens costs $159.99, the macro lens costs $89.99 and the telephoto and wide-angle lenses cost $99.99 each.

When you purchase a lens, you will have the option to choose an accompanying case for your iPhone 11, 11 Pro or 11 Pro Max that the lenses mount to (in addition to receiving a clip mount for more versatile shooting). If you already have a Sandmarc lens (or a whole kit), you can purchase just the cases as well. You can find all of the new cases, lenses and filters on Sandmarc’s website.


Google’s Pixel 4 Astrophotography mode is now available on Pixel 2, 3 and 3a devices

07 Nov

The Google Pixel 4 offers a range of new and innovative camera features. Some of them are exclusive to the latest Pixel device, mainly due to hardware requirements, but Google has promised to make some others available for older Pixel models.

This has now happened in the case of the Pixel 4 Astrophotography mode. The function had previously been made available for older Pixels via a community-driven development effort, but it’s now officially supported on older devices in the latest version 7.2 of the Google Camera app.


Users of the Pixel 2 and Pixel 3 series, including the Pixel 3a, are now able to use the feature after updating to the latest version of the app. The Astrophotography option builds on Google’s Night Sight technology and captures and combines several frames to achieve a clean exposure and great detail as well as limited noise levels when photographing the night sky.


Adobe announces Sensei-powered Photoshop Camera app for Android, iOS devices

04 Nov

Today at Adobe MAX, Adobe announced Photoshop Camera, a free AI-powered camera app for Android and iOS devices that will be publicly launched in 2020.

At the core of the camera app is Adobe’s AI technology, Sensei. When taking a photo, Sensei will recognize the subject matter and automatically suggest filters to match the content, both in real time and after capture. The filters, which are referred to as ‘lenses,’ are curated and created by various ‘well-known artists and influencers,’ but Adobe is also accepting sign-ups from artists interested in creating custom lenses.

In a blog post announcing the new app, Adobe says:

We built Photoshop Camera as a Sensei-first app on our journey to expand our focus to deliver creative tools, including Photoshop, for everyone. With Photoshop Camera you can capture, edit, and share stunning photos and moments – both natural and creative – using real-time Photoshop-grade magic right from the viewfinder, leaving you free to focus on storytelling with powerful tools and effects.

One of the most impressive features of the Photoshop Camera app is an auto-masking mode that can intelligently select various parts of the image depending on what the particular lens in use is trying to achieve with the subject matter. A few examples include masking out and replacing the sky in an image, as well as applying fake bokeh to a shot.

You can sign up for a limited preview version of the app on Adobe’s website. The final version is set for a 2020 release.
