
Posts Tagged ‘latest’

Google shares a deep dive into its new HDR+ with Bracketing technology found in its latest Pixel devices

26 Apr

Google has shared an article on its AI Blog that dives into the intricacies of the HDR capabilities of its most recent Pixel devices. In it, Google explains how its HDR+ with Bracketing technology works to capture the best image quality possible through clever capture and computational editing techniques.

To kick off the article, Google explains how its new ‘under the hood’ HDR+ with Bracketing technology — first launched on the Pixel 4a 5G and Pixel 5 back in October — ‘works by merging images taken with different exposure times to improve image quality (especially in shadows), resulting in more natural colors, improved details and texture, and reduced noise.’

Using bursts to improve image quality. HDR+ starts from a burst of full-resolution raw images (left). Depending on conditions, between 2 and 15 images are aligned and merged into a computational raw image (middle). The merged image has reduced noise and increased dynamic range, leading to a higher quality final result (right). Caption and image via Google.

Before diving into how the behind-the-scenes work is done to capture the HDR+ with Bracketing images, Google explains why high dynamic range (HDR) scenes are difficult to capture, particularly on mobile devices. ‘Because of the physical constraints of image sensors combined with limited signal in the shadows […] We can correctly expose either the shadows or the highlights, but not both at the same time.’

Left: The result of merging 12 short-exposure frames in Night Sight mode. Right: A single frame whose exposure time is 12 times longer than an individual short exposure. The longer exposure has significantly less noise in the shadows but sacrifices the highlights. Caption and image via Google.

Google says one way to combat this is to capture two different exposures and combine them — something ‘Photographers sometimes [do to] work around these limitations.’ While this works fairly well with larger-sensor cameras, where the merging can be handled later by the more capable processors inside tablets and laptops, Google says it’s a challenge to do on mobile devices because it requires ‘Capturing additional long exposure frames while maintaining the fast, predictable capture experience of the Pixel camera’ and ‘Taking advantage of long exposure frames while avoiding ghosting artifacts caused by motion between frames.’

Google was able to mitigate these issues with its original HDR+ technology by prioritizing the highlights in an image and using burst photography to reduce noise in the shadows. Google explains the HDR+ method ‘works well for scenes with moderate dynamic range, but breaks down for HDR scenes.’ As for why, Google breaks down the two different types of noise that creep into an image when capturing bursts of photos: shot noise and read noise.

Google explains the differences in detail:

‘One important type of noise is called shot noise, which depends only on the total amount of light captured — the sum of N frames, each with E seconds of exposure time has the same amount of shot noise as a single frame exposed for N × E seconds. If this were the only type of noise present in captured images, burst photography would be as efficient as taking longer exposures. Unfortunately, a second type of noise, read noise, is introduced by the sensor every time a frame is captured. Read noise doesn’t depend on the amount of light captured but instead depends on the number of frames taken — that is, with each frame taken, an additional fixed amount of read noise is added.’


This is why, as Google highlights, ‘using burst photography to reduce total noise isn’t as efficient as simply taking longer exposures: taking multiple frames can reduce the effect of shot noise, but will also increase read noise.’
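To make that trade-off concrete, here is a minimal numerical sketch (not Google's implementation) using a simplified model in which shot-noise variance equals the total collected signal and each frame readout adds a fixed amount of read-noise variance. The photon rate and read-noise figure below are hypothetical.

```python
import math

def merged_noise(num_frames, exposure_s, photons_per_s, read_noise_e=3.0):
    """Noise (standard deviation, in electrons) of a merged capture under a
    simplified model: shot-noise variance equals the total signal (Poisson),
    and every frame readout adds a fixed read-noise variance."""
    signal = num_frames * exposure_s * photons_per_s   # total electrons collected
    shot_var = signal                                  # Poisson: variance == mean
    read_var = num_frames * read_noise_e ** 2          # one read-noise penalty per frame
    return math.sqrt(shot_var + read_var)

photons_per_s = 50  # hypothetical photon rate in a deep shadow region
signal = 12 * (1 / 10) * photons_per_s                 # same total light in both cases

burst = merged_noise(num_frames=12, exposure_s=1 / 10, photons_per_s=photons_per_s)
single = merged_noise(num_frames=1, exposure_s=12 / 10, photons_per_s=photons_per_s)

print(f"SNR, 12-frame burst:       {signal / burst:.2f}")   # ~4.6
print(f"SNR, single long exposure: {signal / single:.2f}")  # ~7.2
```

Both captures collect the same total light and therefore have identical shot noise; in this toy model the burst only loses out because of the extra readouts.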

To address this shortcoming, Google explains how a ‘concentrated effort’ building on recent ‘incremental improvements’ in exposure bracketing allowed it to combine the burst photography component of HDR+ with the more traditional HDR method of exposure bracketing, producing the best possible result in extreme high dynamic range scenes:

‘To start, adding bracketing to HDR+ required redesigning the capture strategy. Capturing is complicated by zero shutter lag (ZSL), which underpins the fast capture experience on Pixel. With ZSL, the frames displayed in the viewfinder before the shutter press are the frames we use for HDR+ burst merging. For bracketing, we capture an additional long exposure frame after the shutter press, which is not shown in the viewfinder. Note that holding the camera still for half a second after the shutter press to accommodate the long exposure can help improve image quality, even with a typical amount of handshake.’

Google explains how its Night Sight technology has also been improved through the use of its advanced bracketing technology. As visible in the illustration below, the original Night Sight mode captured 15 short exposure frames, which it merged to create the final image. Now, Night Sight with bracketing will capture 12 short and 3 long exposures before merging them, resulting in greater detail in the shadows.

Capture strategy for Night Sight. Top: The original Night Sight captured 15 short exposure frames. Bottom: Night Sight with bracketing captures 12 short and 3 long exposures. Caption and image via Google.
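The capture plan itself can be pictured as a simple list of frames. The sketch below is purely illustrative: the frame counts come from the figure above, while the exposure times and the split between viewfinder (ZSL) frames and frames captured after the shutter press are assumptions, not values from Google's post.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    exposure_s: float
    after_shutter: bool  # True for frames captured after the shutter press

def bracketed_capture_plan(short_exp=1 / 15, long_exp=1 / 3):
    """Illustrative bracketing plan: 12 short frames plus 3 long frames.
    Exposure values and the ZSL/after-shutter split are hypothetical."""
    shorts = [Frame(short_exp, after_shutter=False) for _ in range(12)]
    longs = [Frame(long_exp, after_shutter=True) for _ in range(3)]
    return shorts + longs

plan = bracketed_capture_plan()
print(f"{len(plan)} frames total, "
      f"{sum(f.after_shutter for f in plan)} captured after the shutter press")
```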

As for the merging process, Google says its technology chooses ‘one of the short frames as the reference frame to avoid potentially clipped highlights and motion blur.’ The remaining frames are then aligned with the reference frame before being merged.

To reduce ghosting artifacts caused by motion, Google says it’s designed a new spatial merge algorithm, similar to the one used in its Super Res Zoom technology, ‘that decides per pixel whether image content should be merged or not.’ Unlike Super Res Zoom, though, this new algorithm faces additional challenges from the long exposure shots, which are more difficult to align with the reference frame because of blown-out highlights, motion blur and different noise characteristics.
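As a rough illustration of that per-pixel ‘merge or not’ decision (and not Google's actual algorithm), the sketch below blends an aligned long-exposure frame into the short reference only where the two agree within a noise-dependent threshold, falling back to the reference where motion is suspected.

```python
import numpy as np

def robust_merge(reference, aligned_long, noise_sigma, k=3.0):
    """Toy per-pixel deghosting merge (illustrative only).

    Assumes the long frame has already been aligned and scaled to the
    reference exposure. Pixels where it differs from the short reference
    by more than k * noise_sigma are assumed to contain motion and keep
    the reference value; elsewhere the two frames are averaged to reduce noise.
    """
    diff = np.abs(aligned_long - reference)
    agree = diff < k * noise_sigma                       # per-pixel merge decision
    return np.where(agree, 0.5 * (reference + aligned_long), reference)

rng = np.random.default_rng(0)
ref = rng.normal(100.0, 5.0, size=(4, 4))                # noisy short reference frame
long_frame = ref + rng.normal(0.0, 2.0, size=(4, 4))     # mostly consistent long frame
long_frame[0, 0] += 80.0                                 # simulate a moving object
merged = robust_merge(ref, long_frame, noise_sigma=5.0)
```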

Left: Ghosting artifacts are visible around the silhouette of a moving person, when deghosting is disabled. Right: Robust merging produces a clean image. Caption and image via Google.

Google is confident it’s been able to overcome those challenges though, all while merging images even faster than before:

‘Despite those challenges, our algorithm is as robust to these issues as the original HDR+ and Super Res Zoom and doesn’t produce ghosting artifacts. At the same time, it merges images 40% faster than its predecessors. Because it merges RAW images early in the photographic pipeline, we were able to achieve all of those benefits while keeping the rest of processing and the signature HDR+ look unchanged. Furthermore, users who prefer to use computational RAW images can take advantage of those image quality and performance improvements.’

All of this is done behind the scenes without any need for the user to change settings. Google notes ‘depending on the dynamic range of the scene, and the presence of motion, HDR+ with bracketing chooses the best exposures to maximize image quality.’
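Google doesn't publish that selection logic, but a hypothetical sketch of this kind of decision might look like the following; the thresholds and frame counts are invented purely for illustration.

```python
def choose_bracketing(dynamic_range_stops, motion_score):
    """Hypothetical exposure-selection heuristic (not Google's logic).

    A wider dynamic range favors adding long exposures for the shadows,
    while strong motion favors sticking to short frames to limit ghosting.
    """
    if dynamic_range_stops < 8 or motion_score > 0.7:
        return {"short_frames": 12, "long_frames": 0}    # plain HDR+ burst
    if dynamic_range_stops < 11:
        return {"short_frames": 12, "long_frames": 1}
    return {"short_frames": 12, "long_frames": 3}        # extreme HDR scene

print(choose_bracketing(dynamic_range_stops=12, motion_score=0.1))
```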

Google’s HDR+ with Bracketing technology is found on its Pixel 4a 5G and Pixel 5 devices with the default camera app, Night Sight and Portrait modes. Pixel 4 and 4a devices also have it, but it’s limited to Night Sight mode. It’s also safe to assume this and further improvements will be available on Pixel devices going forward.

You can read Google’s entire blog post in detail on its AI blog at the link below:

HDR+ with Bracketing on Pixel Phones

Articles: Digital Photography Review (dpreview.com)

 

Microsoft’s latest computer vision technology beats humans at captioning images

16 Oct
Seeing AI. Photo by Microsoft

Microsoft has expanded its existing efforts to improve life for the visually impaired by developing an AI system capable of automatically generating high-quality image captions — and, in ‘many cases,’ the company says its AI outperforms humans. This type of technology may one day be used to, among other things, automatically caption images shared online to aid those who are dependent on computer vision and text readers.

Computer vision plays an increasingly important role in modern systems; at its core, this technology enables a machine to view, interpret and ultimately comprehend the visual world around it. Computer vision is a key aspect of autonomous vehicles, and it has found use cases in everything from identifying the subjects or contents of photos for rapid sorting and organization to more technical use cases like medical imaging.

In a newly published study [PDF], Microsoft researchers detail VIsual VOcabulary (VIVO), a pre-training model that learns a ‘visual vocabulary’ from a dataset of paired image-tag data. The result is an AI system that can generate high-quality captions describing the objects in an image, including where those objects are located within the visual scene.

Test results found that at least in certain cases, the AI system offers new state-of-the-art outcomes while also exceeding the capabilities of humans tasked with captioning images. In describing their system, the researchers state in the newly published study:

VIVO pre-training aims to learn a joint representation of visual and text input. We feed to a multi-layer Transformer model an input consisting of image region features and a paired image-tag set. We then randomly mask one or more tags, and ask the model to predict these masked tags conditioned on the image region features and the other tags … Extensive experiments show that VIVO pre-training significantly improves the captioning performance on NOC. In addition, our model can precisely align the object mentions in a generated caption with the regions in the corresponding image.
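The masked-tag objective described in the quote is conceptually similar to masked language modeling. The sketch below is a simplified illustration written with PyTorch, not the paper's code: projected image region features and tag embeddings are fed to a small Transformer encoder, and the model is trained to predict a masked tag. All dimensions and the mask token are assumptions.

```python
import torch
import torch.nn as nn

class MaskedTagModel(nn.Module):
    """Simplified masked-tag pre-training sketch (not the VIVO implementation)."""

    def __init__(self, vocab_size=1000, dim=256, region_dim=2048):
        super().__init__()
        self.region_proj = nn.Linear(region_dim, dim)   # project image region features
        self.tag_embed = nn.Embedding(vocab_size, dim)  # embed image tags
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.tag_head = nn.Linear(dim, vocab_size)      # predict tags at each tag position

    def forward(self, region_feats, tag_ids):
        x = torch.cat([self.region_proj(region_feats), self.tag_embed(tag_ids)], dim=1)
        h = self.encoder(x)                                 # joint image-and-tag representation
        return self.tag_head(h[:, region_feats.size(1):])   # logits for the tag positions

# Toy batch: 2 images, 10 region features each, 5 tags each; mask the third tag.
model = MaskedTagModel()
regions = torch.randn(2, 10, 2048)
tags = torch.randint(1, 1000, (2, 5))
targets = tags.clone()
MASK_ID = 0                                  # assume index 0 is reserved for [MASK]
tags[:, 2] = MASK_ID
logits = model(regions, tags)
loss = nn.functional.cross_entropy(logits[:, 2, :], targets[:, 2])
loss.backward()
```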

Microsoft notes that alternative text captions for images are an important accessibility feature that is too often lacking on social media and websites. With these captions, individuals with vision impairments can have the captions read aloud by a screen reader, giving them insight into images they may otherwise be unable to see.

The company previously introduced a computer vision-based product designed specifically for the blind called Seeing AI, a camera app that audibly describes physical objects, reads printed text and currency, and recognizes and reports colors, among other things. The Seeing AI app can also read image captions — assuming captions were included with the image, of course.

Microsoft AI platform group software engineering manager Saqib Shaikh explained:

‘Ideally, everyone would include alt text for all images in documents, on the web, in social media – as this enables people who are blind to access the content and participate in the conversation. But, alas, people don’t. So, there are several apps that use image captioning as a way to fill in alt text when it’s missing.’

That’s where the expanded use of artificial intelligence comes in. Microsoft has announced plans to ship the technology to the market and make it available to consumers through a variety of its products in the near future. The new AI model is already available to Azure Cognitive Services Computer Vision customers, for example, and the company will soon add it to some of its consumer products, including Seeing AI, Word and Outlook for macOS and Windows, as well as PowerPoint for Windows, macOS and web users.
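For context, this style of automatic captioning is exposed to developers through the Computer Vision service's ‘describe’ endpoint. The sketch below shows a generic REST call using the requests library; the API version, parameter names and response fields shown here are assumptions that should be checked against Microsoft's current documentation.

```python
import requests

# Placeholders — substitute a real Azure resource endpoint and subscription key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<subscription-key>"

def describe_image(image_url, max_candidates=1):
    """Request automatic caption candidates for an image (illustrative sketch)."""
    resp = requests.post(
        f"{ENDPOINT}/vision/v3.2/describe",
        params={"maxCandidates": max_candidates, "language": "en"},
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"url": image_url},
    )
    resp.raise_for_status()
    captions = resp.json().get("description", {}).get("captions", [])
    return [(c.get("text"), c.get("confidence")) for c in captions]

# Example: print the top caption for a publicly accessible image URL.
# print(describe_image("https://example.com/photo.jpg"))
```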

Articles: Digital Photography Review (dpreview.com)

 

MS Optics reveals its latest lens, the Elnomaxim 55mm F1.2 for Leica M-mount cameras

31 Jul

Miyazaki san of MS Optics fame has released his latest M-mount lens, the Elnomaxim 55mm F1.2.

Bellamy Hunt over at Japan Camera Hunter is still working to translate the details of the lens, but what is known at this point is that it uses a Gauss-type optical design with an extremely simple formula. Specifically, the lens is Miyazaki san’s take on the Zeiss 50mm F2 Sonnar, originally designed for the Zeiss Contax I rangefinder.

The entirely manual lens features an aperture range of F1.2 through F16, a minimum focusing distance of one meter (3.3ft) and a 49mm front filter thread. It measures 50mm in diameter by 43mm long and weighs 180g (6.35oz).

Japan Camera Hunter has shared a few sample photos captured with the lens.


As is the case with most MS Optics lenses, this thing isn’t going to win any sharpness contests, but it has character.

The Elnomaxim 55mm F1.2 lens for M-mount comes in black chrome and silver chrome and is currently available to order from Japan Camera Hunter for $1,200. Units are being produced in small batches, so expect stock to come and go.

Articles: Digital Photography Review (dpreview.com)

 

Nikon is the latest camera company sued by DigiMedia Tech over alleged patent infringement

01 Jul

DigiMedia Tech, LLC, has filed a patent infringement lawsuit against yet another camera company, this time going after Nikon over its alleged infringement of three different US patents. This lawsuit follows similar infringement cases brought against Olympus, Fujifilm and JK Imaging, all of them also over the alleged infringement of digital camera technology patents.

DigiMedia Tech is a non-practicing entity (NPE) of IPInvestments Group, which received many US patents from Intellectual Ventures LLC in November 2019. Following the patent acquisition, DigiMedia Tech has filed lawsuits against several companies over their alleged infringement of these patents — in the latest one involving Nikon, the company claims infringement of US patents No. 6,914,635, No. 7,715,476 and No. 6,545,706.

The ‘635 patent was first filed in 2001 by Nokia Mobile Phones; it involves a microminiature zoom system designed for digital cameras. The ‘476 patent was first filed in 1999 and then again in 2005; it covers a ‘system, method and article of manufacture’ related to a digital camera’s ability to track a subject’s head. The third and final patent in the lawsuit, ‘706, was filed in 1999 and likewise covers head-tracking camera technology.

The infringement lawsuit specifically names Nikon’s Coolpix A1000 as a model that allegedly infringes the ‘635 patent and the Nikon P900RM ‘and similar products’ as allegedly infringing the ‘706 and ‘476 patents. Among other things, the DigiMedia Tech lawsuit wants Nikon to pay ‘damages in an amount to be determined at trial for Defendants’ infringement, which amount cannot be less than a reasonable royalty.’

It’s unclear how much this could amount to, financially speaking. Likewise, Nikon hasn’t yet commented on the infringement lawsuit.

DigiMedia Tech’s decision to sue Nikon isn’t surprising in light of its recent activity. On May 29, the NPE filed patent infringement lawsuits against Fujifilm and Olympus, alleging that both have used digital camera technologies in select camera models that infringe on its US patents. Following that, DigiMedia Tech filed the Nikon lawsuit referenced above, then a similar complaint against JK Imaging, the company behind Kodak PIXPRO cameras, on June 24 in California Central District Court.

A full list of DigiMedia Tech’s lawsuits, including related documents, can be found through the Unified Patents portal.

A summary of each of the lawsuits DigiMedia Tech, LLC currently has against a number of camera manufacturers.

The NPE practice of exploiting acquired patents has been heavily criticized for years. These companies oftentimes don’t actually practice the invention detailed by the patent and usually don’t sell processes or products related to them. These non-practicing entities instead enforce the patent rights against companies allegedly infringing them, doing so to obtain licensing payments or some other type of revenue, such as royalties or damages, on the acquired patents.

Though not all NPEs exploit acquired patents, there are those that do. Ones that operate aggressively and file large numbers of lawsuits in order to cast a wide net to see what they catch are colloquially referred to as ‘patent trolls.’

In 2011, the Hastings Science and Technology Law Journal published a lengthy paper titled ‘Indirect Exploitation of Intellectual Property Rights by Corporations and Investors’ that details NPEs and the ways they may be used. The discussion is extensive and useful for understanding the reasoning behind these lawsuits, stating in part that patent infringement lawsuits from NPEs may be, among other things, used by:

…a sponsoring entity against a competitor to achieve a corporate goal of the sponsor. A corporation or investor, by serving as the sponsor for an IP privateering engagement, can employ third-party IPRs as competitive tools. The privateer, a specialized form of non-practicing entity (NPE), asserts the IPRs against target companies selected by the sponsor. The sponsor’s benefits do not typically arise directly from the third party’s case against a target, but arise consequentially from the changed competitive environment brought about by the third party’s IPR assertion.

Of course, DigiMedia Tech’s own reasons for filing suits against these camera companies are unclear and it’s impossible to say whether there would be an indirect benefit for a competing company as a result of these allegations. As these cases are only days and weeks old, the outcome of each lawsuit is yet to be seen.

Articles: Digital Photography Review (dpreview.com)

 

DJI scores a victory in the latest round of a patent battle with Autel

29 May
DJI’s Mavic 2 Pro is one of the products that could potentially become unavailable in the U.S., as early as July, on account of Autel’s claims.

With close to an 80% share in the U.S. consumer drone market, DJI holds a substantial lead over its competitors. News arose, several weeks ago, that some of its best-selling models plus accessories could be banned from being sold and imported into the U.S., as early as July, on account of a preliminary Autel Robotics victory.

DJI and its law firm, Finnegan, recently responded to Autel’s claims and secured their own victory in the latest round of an ongoing, years-long patent battle. For perspective, DJI filed a patent infringement complaint back in 2016 against Autel, which currently holds a 0.8% share of the U.S. drone market, alleging that Autel copied the ‘look and feel’ of its Phantom UAVs with its X-Star Premium drone.

DJI filed a patent infringement complaint. They claim the Autel X-Star Premium copies the ‘look and feel’ of their Phantom series.

On August 30, 2018, Autel mounted its own offense by requesting the International Trade Commission (ITC) investigate DJI, pursuant to Section 337, for selling drones infringing on Autel’s US Patent No. 9,260,184. Months later, on October 2nd, the ITC set its investigation in motion based on Autel’s assertion of the following three patents:

  • ’174 patent – obstacle avoidance
  • ’184 patent – rotor blades
  • ’013 patent – batteries that clamp onto the drones

The ITC’s chief administrative law judge (CALJ) issued an initial determination (ID) favorable to DJI on March 2nd of this year, according to Finnegan. CALJ Bullock found that the ’174 patent claims, covering obstacle avoidance technology, were invalid under 35 U.S.C. § 101 and were not practiced by any domestic industry product. He also determined that many of the accused DJI products did not infringe the ‘184 patent involving rotor blades, and that the ‘013 patent claims, covering batteries that clamp onto the drone, were likewise invalid.


Finnegan’s post also claims a win for DJI before the Patent Trial and Appeal Board (PTAB). Between May 13th and 21st of this year, the PTAB found all three patents (‘174, ‘184, ‘013) asserted in Autel’s ITC proceeding unpatentable. ‘The Commission is currently deciding whether to review the CALJ’s ID. Ultimately the Commission may decide that Autel deserves no remedy at all, but at a minimum, the Commission is unlikely to enforce any exclusion order or cease-and-desist order based on the three invalid patents. DJI’s sales in the U.S., therefore, will not be affected by Autel’s claims,’ DJI’s legal team concludes.

The entire IP update from DJI’s law firm can be viewed here. Representatives from the drone manufacturer declined to comment.

Articles: Digital Photography Review (dpreview.com)

 

IBC 2020 latest show to be cancelled, go virtual as organizers fear ‘many unknowns’

19 May

The International Broadcasting Convention (IBC), scheduled to take place in Amsterdam this September, has become the latest industry exhibition to get canceled as the coronavirus pandemic continues to disrupt events around the world.

Organizers cited ‘many unknowns’ around the shape of social distancing restrictions and the measures that would have to be in place to make the show safe for visitors and exhibitors alike. ‘It has become clear that a return to (a new) normal is unlikely to be achieved by September,’ CEO Michael Crimp says in a statement on the show’s website.

Crimp says the decision to cancel now, while the show was still four months away, was to allow exhibitors to plan for the future and not spend money and time on the event only to have it canceled at a later date. He also says the show will use its digital platform, IBC365, to support the industry and hints that there will be some form of a virtual show on the channel this year, before the physical show returns in 2021.
For more information see the IBC Show website.

Press release:

IBC2020 Cancelled due to Covid-19

I hope you are safe and well, as we continue to adapt to the changing world in which we find ourselves. Following on from my previous statement I wanted to give you an update on the developments and situation at IBC.

As previously outlined, the IBC team has been focused on assessing and developing appropriate plans for IBC2020 this September at the RAI Amsterdam.

Within these plans it is crucial that IBC can deliver a safe and successful environment. However, as governments announce the route forward, it has become clear that a return to (a new) normal is unlikely to be achieved by September.

It has also become evident, through our dialogue with the IBC community, that an early decision is preferential for the industry so it can plan for the future.

Right now, despite the best work of the IBC team and our Dutch colleagues, there are still many unknowns. Therefore, we cannot guarantee that we will be able to deliver a safe and valuable event to the quality expected of IBC.

It is also evident that important aspects of a large-scale event such as IBC will be greatly altered by social distancing, travel restrictions, masks etc. so much so that the spirit of IBC will be compromised.

With that in mind and based on what we know at this point, it is with a heavy heart IBC has made the difficult decision to cancel the IBC2020 show. You may have seen IBC and the IABM surveys on this topic. Evidence gathered from these IBC stakeholders helps to confirm this decision.

Whilst this is hugely disappointing for us all, IBC will continue to play a vital role in supporting the industry to get back on track wherever we are able.

For more than 50 years, IBC has provided the central annual meeting place for the Media, Entertainment & Technology community. For example, over the coming months IBC will continue to engage with the industry through its digital platform IBC365. Details of our plans will follow soon.

Your views continue to help shape IBC. If you have suggestions, questions or concerns regarding this decision and announcement, please do not hesitate to contact us using our dedicated email address: statusupdates@ibc.org

My very best wishes to all of you during this time of unprecedented challenges and I look forward to welcoming you next year at IBC2021, in Amsterdam.

Kind regards,
Michael Crimp
CEO IBC

Articles: Digital Photography Review (dpreview.com)

 

AVID Media Composer 2020.4 update moves to 64-bit, delivering support for latest macOS and Mac Pro

05 May

AVID has released a new update for AVID Media Composer, its popular video editing software. The new version, AVID Media Composer 2020.4, includes numerous new features, but the biggest one for Mac users is that the software is finally 64-bit.

Apple’s macOS Mojave was the last version of Apple’s operating system to support 32-bit apps. Apple warned software developers for a couple of years that 32-bit apps would no longer be supported with macOS Catalina. As photographers and videographers alike have upgraded to macOS Catalina or purchased new computers such as Apple’s latest Mac Pro which ships with Catalina, they have had to deal with outdated software no longer being supported. Until AVID Media Composer 2020.4, that list of inoperable software included Media Composer.

In addition to being 64-bit, AVID Media Composer 2020.4 also includes a new Universal Media Engine (UME). This new UME speeds up the entire workflow, right from file ingest, by removing reliance on QuickTime. AVID promises that the new UME will be felt via improved performance during importing, playback, editing and exporting files.

Windows users can now create, edit, collaborate on and export Apple ProRes media natively. This includes full encoding and decoding support. However, per Cinema5D, it appears that this may not include support for ProRes RAW files. On AVID’s latest blog post detailing Media Composer 2020.4, there is no mention of ProRes RAW.

“Get a birds-eye view of an entire 128-track sequence with the Timeline Sequence Map, enabling faster navigation without scrolling or changing the view size.” Image and text credit: AVID

The updates in AVID Media Composer 2020.4 are not limited to under-the-hood improvements; the team has also worked on improving the user interface and making the software more efficient. Additions and improvements include a Timeline Sequence Map, which allows for a full view of a 128-track sequence, color coding for bin tabs, Titler+ improvements, bulk editing capabilities, multi-select tools, faster sound separation, new 9×16 and 1:1 aspect ratio mask margins, new color space support, additional workspace customization options and much more.

AVID Media Composer 2020.4 includes new bulk edit capabilities. Image credit: AVID

AVID Media Composer 2020.4 is a free update for all existing Media Composer users. If you are a first-time user, perhaps someone who is looking for video editing software for your new macOS Catalina machine, the software is available via a subscription model. You can subscribe on a per-month basis for $23.99/month or for a full year for $239.00. By opting for an annual subscription, you save just under $50. For more information on AVID Media Composer, click here.

Articles: Digital Photography Review (dpreview.com)

 

Apple’s new 13-inch MacBook Pro features faster CPU options, latest Intel Iris Graphics configurations and updated keyboard

04 May

Apple has announced its new 13-inch MacBook Pro, bringing modest performance improvements and one physical improvement that should make keyboard warriors around the world much happier.

We’ll get to the specs in a minute, but first the most important update—the keyboard. Apple has eschewed its troubled ‘butterfly’ keyboard for its Magic Keyboard with this new 13-inch MacBook Pro. For nearly five years, Apple’s ‘butterfly mechanism’ keyboard caused problems for MacBook owners, with individual keys sometimes becoming entirely non-functional and creating all sorts of issues while typing. This transition means the ‘butterfly’ keyboard is no longer present in any of Apple’s laptops.

The physical ‘esc’ key is a welcome change, too.

In addition to the changes underneath the keys, Apple has also added a physical ‘Escape’ (esc) key to the left of the Touch Bar. On previous models, the ‘esc’ key was digital, located within the Touch Bar, a design decision that could wreak havoc if the Touch Bar glitched out or broke.

The updated 13-inch model is powered by Intel quad-core chips, with the option to configure the MacBook Pro with Intel’s 10th-gen CPUs that offer base clock speeds of up to 2.3GHz and Turbo Boost speeds up to 4.1GHz. Apple has also added the option to configure the 13-inch MacBook Pro with up to 32GB of 3733MHz LPDDR4X RAM and has doubled the base model’s storage to 256GB (with optional upgrades up to 4TB).

If you upgrade to the $1,799 model, you’ll also get the latest Intel Iris Plus Graphics, which Apple claims will offer an 80% performance increase over the previous-generation dual-core MacBook Pros. In addition to improving overall graphics performance, models with Intel’s Iris Plus graphics will be able to power Apple’s Pro Display XDR at its full 6K resolution.

As for the laptop’s own display, the 13-inch screen supports P3 wide color gamut, offers a maximum 500 nits brightness and includes Apple’s ‘True Tone’ technology that automatically adjusts the screen’s white balance based on the ambient environment.

The base model starts at $1,299 and includes a 1.4GHz quad-core 8th-gen Intel Core i5 CPU, 8GB of RAM and 256GB of storage. If you’re planning on using this for editing photos or video, though, we’d suggest you jump up to the $1,799 model, which offers Intel’s 10th-gen CPUs with the newer Intel Iris Plus graphics and 16GB of RAM. Further upgrades can be configured for additional costs, as usual.

You can find out more information about the latest 13-inch MacBook Pro models and browse through the different configurations on Apple’s website.

Articles: Digital Photography Review (dpreview.com)

 

CIPA’s latest numbers show camera production, sales slashed by half in March

27 Apr

The coronavirus pandemic has hit the camera industry particularly hard with a dramatic downturn in both production and sales during March. Traditionally a period when sales of new products announced after the New Year begin to come on-line, this March saw production and shipments from Japanese companies drop to only 48% of levels reached in the same month last year.

Figures released by Japan’s Camera and Imaging Products Association (CIPA) show worldwide shipments were only 47.8% of last March’s volume, with shipments to Asia (excluding Japan and China) at just 39.8% of those shipped in March 2019. Shipments to ‘Other Areas’ (including the Middle East) were the healthiest, but still down to 68.2% of last year’s volume, and this region accounts for a very small proportion of sales. Shipments to the USA were at 44.7% and those to Europe 48.3%, while Japan managed 54.5%.

Production and shipment data for March 2020. Column 2 compares with February 2020, column 3 compares with March 2019 and column 4 compares Q1 2020 with Q1 2019.

It seems SLR cameras have fared far worse than mirrorless models, which may be partly down to the fact that there are fewer new SLR models around at the moment. Production of SLRs reached only 32.6% of the levels for last March, while mirrorless models reached 56%. China was the only region to receive more SLRs than mirrorless cameras, but that figure was still only half of what the country took last March.

The CIPA figures are reflected in the sales reported by Stackline, which showed online camera sales in the USA were down 64% in March. With many camera shops’ doors closed too, over-the-counter sales are also likely to be very poor. The market research company rated cameras no. 3 in its list of the 100 fastest declining product categories – with only briefcases and luggage doing worse. Unsurprisingly, disposable gloves were the fastest-growing product.

Last week Canon reported a drop in camera revenue of 27% for the first quarter of the year – a slightly smaller decline than the 31.1% drop in revenue recorded across the total Japanese camera market compared to the same period last year. The revenue drop for SLRs shipped from Japan was 40.2%, while that for mirrorless models was 25.8% for the months of January to March 2020.

Sales of lenses have held up a little better with the total volume produced in March dropping by 46.1% and those shipped falling by 44.8%. Production of full-frame lenses dropped by 34.5%, while those designed for smaller formats fell by 53% by volume. In better news, the value of smaller-format lenses shipped to the USA was up by 1.5% over the value shipped to the region in February – and I’ll take that as a positive.

Articles: Digital Photography Review (dpreview.com)

 

Shoot now, focus later: multi-view E-mount lens patent is Sony’s latest foray into light field photography

23 Apr

According to Sony Alpha Rumors, Sony has filed a patent for an interchangeable E-mount lens that will allow users to adjust focus after the shot has been recorded. The lens appears to contain a number of lenses arranged next to each other to record multiple individual images on the camera’s sensor, which can be combined later, presumably to control focus and depth-of-field.

The site doesn’t tell us where the patent information was seen so we can’t read it for ourselves, but some diagrams are provided that we are told are part of the application.

The Light L16 light field camera from Light Labs Inc

Sony investigating light field technology is nothing new, as in the past it has filed patents for a light field sensor and has a partnership to supply sensors to Light Labs Inc, the manufacturer of the Light L16 camera that was announced in 2015. The draw of the technology is obvious as it can allow multiple focal lengths to be used for full-resolution zooming and/or focus and depth-of-field selection after the event.

We have seen a few attempts at harnessing the idea in commercial camera products in the past, including the Lytro Illum, Nokia’s 9 PureView and, to some extent, a number of other multi-lens and multi-sensor smartphones. It is hard to tell from the available information exactly what these lenses would be used for in this patented design, and whether they would collect distance information, expand the range of tones that can be recorded in a single shot – or both.

Either way, such a lens will need a camera with an extremely powerful processor, or the ability to simply record the images for processing in software later – as with Sony’s Pixel Shift Multi Shooting mode, which requires images to be processed in the company’s Imaging Edge desktop application.
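One well-known way multi-view captures enable focus selection after the fact is synthetic refocusing by shift-and-add: each sub-image is shifted in proportion to its lens offset and a chosen focus depth, then the shifted images are averaged. The sketch below illustrates that general principle only; it is not drawn from Sony's patent, and the lens offsets and shift factor are made up.

```python
import numpy as np

def refocus(sub_images, offsets, shift_per_unit_offset):
    """Synthetic refocus by shift-and-add (illustrative principle only).

    sub_images: list of 2D arrays, one per sub-lens view
    offsets: (dx, dy) position of each sub-lens relative to the array center
    shift_per_unit_offset: scalar selecting the virtual focus plane
    """
    acc = np.zeros_like(sub_images[0], dtype=float)
    for img, (dx, dy) in zip(sub_images, offsets):
        shift_x = int(round(dx * shift_per_unit_offset))
        shift_y = int(round(dy * shift_per_unit_offset))
        acc += np.roll(img, shift=(shift_y, shift_x), axis=(0, 1))
    return acc / len(sub_images)

# Toy example: four views from a hypothetical 2x2 lens array, two focus settings.
views = [np.random.rand(64, 64) for _ in range(4)]
offsets = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
near_focus = refocus(views, offsets, shift_per_unit_offset=3)
far_focus = refocus(views, offsets, shift_per_unit_offset=0)
```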

As we have all noticed in the past, though, exciting patent applications don’t always result in a product that comes to market. If genuine, however, this does at least demonstrate that Sony is still pursuing ideas in this area.

Articles: Digital Photography Review (dpreview.com)

 