Posts Tagged ‘engineer’

Lead engineer for computational imaging on Pixel devices leaves Google

14 May

According to a report by industry publication The Information, two key executives in Google’s Pixel team left the company earlier this year. One of them is the former General Manager of the Pixel Smartphones Business Unit, Mario Queiroz. According to his LinkedIn profile, he left Google at the end of January to take on the role of Executive Vice President at data security company Palo Alto Networks.

A few months earlier, and two months before the launch of the Pixel 4 devices in October 2019, he had already moved internally from the Pixel team into a role that directly reported to Google CEO Sundar Pichai.

From an imaging point of view, though, the second executive to leave the Pixel team and the company is the more interesting one: Marc Levoy, a Computer Science professor at Stanford University since 1990, who since 2014, in his role as Distinguished Engineer at Google, had been leading the team that developed computational photography technologies for Pixel smartphones, including HDR+, Portrait Mode and Night Sight.

Since its inception the Pixel smartphone series has excelled in the camera department, receiving positive camera reviews across the board. With the Pixel phones using very similar camera hardware to their direct rivals, much of that success can likely be attributed to the innovative imaging software features mentioned above.

However, things look slightly different for the latest Pixel 4 generation that was launched in October 2019. While many of the software features and functions were updated and improved, the camera hardware looks a little old next to the top-end competition. Companies like Samsung, Huawei and Xiaomi offer larger sensors with higher resolutions and longer tele lenses, and combine those hardware features with computational imaging methods, achieving excellent results. The Pixel 4 is also one of very few high-end phones to not feature a dedicated ultra-wide camera.

The Pixel 4 camera is still excellent in many situations, but it’s hard to deny that Google has, at least to a degree, lost the leadership role in mobile imaging that it had established with previous Pixel phone generations.

It looks like internally there has been some discontent with other aspects of the Pixel 4 hardware, too. The report from The Information also details some criticism from Google hardware lead Rick Osterloh on the Pixel 4 battery:

At a hardware team all-hands meeting in the fall, ahead of the October launch in New York, Osterloh informed staff about his own misgivings. He told them he did not agree with some of the decisions made about the phone, according to two people who were present at the meeting. In particular, he was disappointed in its battery power.

Battery and camera performance are likely only two out of a range of factors that caused Pixel 4 sales figures to decrease when compared to its predecessors. IDC estimates that Google shipped around 2 million Pixel 4 units in the first two quarters the phone was on sale, compared to 3.5 million Pixel 3 units and almost 3 million Pixel 3A devices.

These figures are also relatively small when compared to the largest competitors. According to IDC, Apple sold a whopping 73.8 million iPhones in the fourth quarter of 2019, for example.

It’s not entirely clear, but likely, that the departures of Queiroz and Levoy are linked to the Pixel 4’s performance in the marketplace. What will it mean for future Pixel phones and their cameras? We will only know once we hold the Pixel 5 in our hands but we hope Google will continue to surprise us with new and innovative technologies that get the most out of smartphone cameras.

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Lead engineer for computational imaging on Pixel devices leaves Google

Posted in Uncategorized

 

‘Accurate autofocus on any subject in any environment’: Olympus engineer talks OM-D E-M1 Mark III AF

14 Apr
Olympus’ newly-announced OM-D E-M1 Mark III (left) alongside the OM-D E-M1X.

The recently-launched Olympus OM-D E-M1 Mark III features an advanced 121-point all cross-type autofocus system, and many other capabilities inherited from the flagship OM-D E-M1X including Live ND mode and various multi-shot features.

Modern mirrorless interchangeable lens cameras use one, or a combination of two main types of autofocus: contrast-detection and phase-detection. Contrast-detection autofocus works by driving focus until the contrast of a sampled area on the sensor is at its maximum – the presumed point of sharp focus. Contrast-detection is highly accurate, but can be slow, and relies on a certain amount of ‘trial and error’.
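To make that ‘trial and error’ concrete, here is a minimal Python sketch of a contrast-detection focus search. The measure_contrast callback is hypothetical, standing in for whatever sharpness metric a real camera computes over the sampled sensor area, and the hill-climb logic illustrates the general idea rather than any manufacturer’s actual implementation.

```python
def contrast_af(measure_contrast, start=0.0, step=0.05, min_step=0.005):
    """Hill-climb the focus position until contrast stops improving.

    measure_contrast(pos) is a hypothetical callback returning a scalar
    contrast value for the sampled AF area at lens position `pos`.
    """
    pos = start
    best = measure_contrast(pos)
    while abs(step) >= min_step:
        trial = pos + step
        value = measure_contrast(trial)
        if value > best:            # contrast still rising: keep stepping
            pos, best = trial, value
        else:                       # stepped past the peak: reverse, refine
            step = -step / 2.0
    return pos                      # presumed point of sharpest focus
```

Given a contrast curve that peaks at some lens position, the loop steps past the peak, reverses with a finer step and settles near the maximum; that repeated overshoot-and-reverse is exactly the ‘hunting’ that can make pure contrast detection slow.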

Phase-detection works more like human vision, using dedicated pixels to compare light coming from your subject from two slightly different perspectives at the same time. This lets the camera judge depth, allowing for faster focus acquisition without the ‘hunting’ characteristic of many purely contrast-detection autofocus systems.
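To show why that single measurement is enough, here is a rough NumPy sketch (ours, not any camera maker’s) of the underlying idea: estimate the lateral offset between two 1-D samples seen through opposite sides of the lens pupil. The sign of the shift tells the camera which way to drive focus and the magnitude tells it roughly how far, so no hunting is required.

```python
import numpy as np

def phase_disparity(left, right, max_shift=20):
    """Estimate the pixel shift that best aligns two 1-D AF samples.

    left, right: equal-length 1-D arrays (longer than max_shift) sampled
    through opposite sides of the lens pupil. Returns a signed integer shift.
    """
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        a = left[max(0, s):len(left) + min(0, s)]
        b = right[max(0, -s):len(right) + min(0, -s)]
        # mean-subtracted correlation, normalized by overlap length
        score = np.dot(a - a.mean(), b - b.mean()) / len(a)
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift  # sign = direction of defocus, magnitude = amount
```

In a real system that disparity is converted, via lens calibration data, into a single focus-drive command.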

Traditional phase-detection autofocus systems rely on pixels that are sensitive to vertical lines in a scene. Some more sophisticated systems use a ‘cross-type’ pixel arrangement, which can detect both horizontal and vertical detail, meaning that accurate focus can be achieved even with complex, non-linear subjects.

The 121-point autofocus system in the Olympus OM-D E-M1 Mark III and the E-M1X uses a combination of cross-type phase-detection and contrast-detection to ensure fast and accurate focus in a range of challenging environments.

In this interview, Tetsuo Kikuchi, manager of Imaging System Development at Olympus Corp, explains how the E-M1 Mark III’s autofocus system was developed, and what it means to design a camera for demanding professionals.


What are the user requirements for professional-grade autofocus systems?

The most common requests we receive from professionals are that the camera must focus on their desired subject as quickly and accurately as possible, and it must continue to focus on that subject in any situation.

Professionals often stress the importance of operability, too. Their work requires the ability to quickly and easily adjust autofocus settings as shooting situations and subjects change. We believe that in order to satisfy professionals, we have to be able to meet all of those requirements.

What was the main customer feedback about autofocus in previous models?

After releasing the OM-D E-M1, which was Olympus’ first on-chip PDAF mirrorless camera, we received many requests for more AF points. These requests came from professional photographers and ‘prothusiasts’, especially in the genres of sports, bird, wildlife, and aviation.

We collated performance feedback and took special note of comments regarding focus drift to the background. Then, we set out to develop the 121-point all-cross-type PDAF system to eradicate this issue, delivering highly accurate focusing with all AF points, enabling photographers to keep their subject in focus.

Olympus’ 121-point all-cross-type PDAF system covers the majority of the frame.

What was the main priority when developing autofocus for the E-M1X and E-M1 Mark III?

There were two main goals, actually: ‘Quick focusing on targeted subjects after engaging autofocus’ and ‘stable and highly accurate continuous focusing on targeted subjects in any environment’.

What was the concept behind the 121-point all-cross-type On-chip Phase Detection AF point array?

We developed our on-chip PDAF system to achieve our goal of accurate and continuous autofocus on any subject in any environment.

The AF sensor array layout on the image sensor – which is unique to Olympus – is designed to detect any vertical, horizontal, or diagonal line patterns on subjects, and to find a defocused subject in the foreground. Thanks to our AF sensor layout design, our on-chip PDAF can deliver the high-speed focusing of phase detection and the accurate subject detection of cross-type sensors that DSLR cameras have previously achieved with a dedicated AF sensor. Our system has the added advantage of wider autofocus area coverage.

What makes Olympus autofocus technologies different to or better than competitors?

Our PDAF system can detect vertical and horizontal line patterns equally, allowing the camera to detect and focus on subjects in the foreground. This is a merit of Olympus. Because the PDAF sensors are arrayed not in one high density line but discretely over the entire area of the image sensor, any potential negative effect on image quality is also reduced.

Olympus’ most recent firmware delivers accurate autofocus without the risk of ‘focus drift’ to the background.

OM-D E-M1 Mark III, M.Zuiko Digital ED 300mm F4 IS PRO. F4, 1/250sec, ISO 3200

How difficult is it to implement cross-type on-chip phase-detection autofocus technology?

The most difficult challenge we faced when developing our all cross-type on-chip PDAF system was in determining the optimal layout of the PDAF sensors: one that would achieve the highest level of focusing accuracy with horizontally-arrayed and vertically-arrayed AF sensors simultaneously. In principle, utilizing phase detection AF can cause measurement errors; minimizing such errors is required for highly accurate focusing.

Could you elaborate on the sources of these measurement errors?

The measurement errors are attributed to a combination of factors, but the degree of measurement error is specific to the PDAF sensor layout. Therefore, we needed to build a proprietary in-house method to evaluate the reliability of measured distance data. This was important when we commenced development of the OM-D E-M1 Mark II, which was our first camera model equipped with all cross-type PDAF sensors.

Using pre-production cameras, our R&D members worked closely with professional photographers to conduct shooting tests, and these tests were repeated many times to refine our method. As a result, we are able to accurately evaluate our PDAF reliability and deliver exceptional performance.

Measurement errors can also come from the lens. However, our cameras can automatically correct for such errors according to the lens’ known characteristics, thus eliminating any effect.

Concept rendering, showing how cross-type phase-detection autofocus pixels are arrayed on the sensor of Olympus’ OM-D E-M1 Mark III and E-M1X.

Is there an autofocus advantage to a smaller sensor compared to APS-C or full-frame?

In principle, there is no correlation between PDAF performance and sensor size. However, our on-chip PDAF strongly complements our compact system size (which is ideal for photography genres such as bird and wildlife) because this autofocus method allows for a small camera body and fast focusing on moving subjects.

Lens resolution can affect autofocus accuracy though, because high resolution lenses make it possible to more precisely detect focus position.

The OM-D E-M1X is one of Olympus’ OM-D models that utilizes its contrast-detection plus all-cross-type phase-detection AF system.

How will the computational and machine learning based approaches we’ve seen in Olympus cameras evolve?

To ensure our products deliver the highest levels of performance, continuous device evolution must be paired with ever-evolving computational photography technologies. We have been heavily investing our resources to meet this challenge. For example, the E-M1 Mark III boasts our Handheld High Res Shot Mode, a technology that can produce high resolution low noise images, similar to those of full frame cameras, but with a system that’s significantly smaller in size.

We have also achieved advanced subject detection AF with AI-based deep learning technology. Features such as these will be continuously improved. Looking to the future, we will develop new technologies to enable photographers to capture challenging images which are only possible using an Olympus camera and lens, negating the need for extra equipment, special shooting skills, or additional post-processing.

The OM-D E-M1X has 121-point contrast-detection plus all-cross-type phase-detection AF. It features Olympus’ Subject Detection AF with AI-based deep learning technology.

OM-D E-M1 Mark III, M.Zuiko Digital ED 300mm F4 IS PRO. F5.6, 1/1600sec, ISO 400

Smartphone cameras today perform dynamic and local adjustments to automatically create a pleasing image. Do you see Olympus cameras also adopting this ‘auto’ approach in the future?

Our goals do not include the development of technology that significantly limits users in their individual creativity and expression. Instead, we develop cameras that facilitate the creative process, helping photographers bring their image concepts to life. While it is important that we enable photographers to utilize their skills and knowledge, we also see value in improving camera features for shooting assistance.

Thus, we will strive to uphold a balance between expression and automation with new technology that can benefit photographers of all skill levels and genres.


Tetsuo Kikuchi is manager of Imaging System Development at Olympus Corp, in Tokyo.

This is sponsored content, created with the support of Olympus. What does this mean?

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on ‘Accurate autofocus on any subject in any environment’: Olympus engineer talks OM-D E-M1 Mark III AF

Posted in Uncategorized

 

We’re hiring! DPReview is looking for a Software Development Engineer and Senior Product Manager

13 Jun

We’re looking to add a Software Development Engineer and a Senior Product Manager to our team! Each role is uniquely positioned to help shape the future of the site. The Senior Product Manager will own DPReview’s product roadmap, working closely with our engineering and editorial teams. The Software Development Engineer will help build the next generation of web and mobile experiences for DPReview, shaping products from concept to delivery.

If you’re passionate about photography and ready to help build the future of DPReview, take a look at the full job descriptions linked below and learn how to apply.

Apply now: Senior Product Manager

Apply now: Software Development Engineer

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on We’re hiring! DPReview is looking for a Software Development Engineer and Senior Product Manager

Posted in Uncategorized

 

We’re hiring! DPReview seeks Senior Software Development Engineer

05 May

DPReview is hiring! We’re looking for a Senior Software Development Engineer to join our Seattle-based team. You will lead our engineering team and leverage our unique position in the industry to build modern solutions that deliver content, services and tools to a large and highly engaged community of passionate photographers. Bring your creativity, passion and talent to help us build the next generation of our web and mobile experiences. Find all the details below.

Find out more and apply for this role – Software Development Engineer, Digital Photography Review

Senior Software Development Engineer, Digital Photography Review

Digital Photography Review (DPReview.com) is seeking a talented, passionate, and creative engineer to help us build the future of the world’s most popular digital camera website. You will lead a small engineering team, leveraging our unique position in the industry to build modern solutions that deliver content, services, and tools to a large and highly engaged community of passionate photographers.

Your core focus in the first year will be to help re-think and build the next generation of mobile experiences for DPReview.com. This includes product comparison tools for photographic gear, community and social features focused on photography enthusiasts, machine learning-driven personalization mechanisms, and a big focus on improving CX.

DPReview has its own unique culture with a startup-like environment, but with all the benefits of being backed by industry leader Amazon. Engineers will have an opportunity to partner with our in-house product management and editorial teams to help shape projects from concept to delivery, but also will participate in and benefit from one of the strongest engineering communities in the technology world at Amazon.com.

If you’re looking for an opportunity to lead a small, lean team that’ll work across the stack on a variety of interesting problems for an enormous userbase, then this is it!

Basic Qualifications

  • Bachelor’s Degree in Computer Science or related field
  • 8+ years of professional software development experience
  • Experience mentoring junior engineers
  • Experience leading small teams of engineers
  • Strong data structure and algorithm knowledge required
  • Expertise with professional software engineering best practices for the full software development life cycle, including coding standards, code reviews, and code instrumentation

Preferred Qualifications

  • Experience with visual design and / or UX
  • Mobile HTML5, CSS, JavaScript and/or Android/iOS experience
  • Proficient in at least one object-oriented programming language such as Java, C++ or C#
  • Experience with REST and other web service models
  • Experience building complex, scalable, high-performance software systems that have been successfully delivered to customers

Find out more and apply for this role – Software Development Engineer, Digital Photography Review

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on We’re hiring! DPReview seeks Senior Software Development Engineer

Posted in Uncategorized

 

We’re hiring! DPReview seeks Software Development Engineer

23 Mar

DPReview is hiring! We’re looking for a Software Development Engineer to join our Seattle-based team. Bring your creativity, passion and talent to help us build the next generation of our web and mobile experiences. This role will help build shopping and comparison tools for photo gear as well as other special projects on the roadmap. Find all the details below.

Click here to find out more and to apply for this role – Software Development Engineer, Digital Photography Review

Software Development Engineer, Digital Photography Review

Digital Photography Review (DPReview.com) is seeking a talented, passionate, and creative engineer to help us build the future of the world’s most popular digital camera website. You will leverage our unique position in the industry to constantly strive for smarter and better ways to deliver the content, services, and tools that have made it such a success.

Your core focus will be to build the next generation of web and mobile experiences for DPReview.com. This includes shopping and comparison tools for photographic gear, community and social features focused on photography enthusiasts, and other special projects on the roadmap.

While a part of Amazon, DPReview has its own unique culture. It’s a startup-like environment backed by an industry leader. Engineers will have an opportunity to partner with our in-house product management and editorial teams to help shape projects from concept to delivery.

If you’re looking for an opportunity to be a part of a small, lean team that’ll work across the stack on a variety of interesting problems, then this is it!

Basic Qualifications

  • Bachelor’s Degree in Computer Science or related field
  • 4+ years of professional software development experience

Preferred Qualifications

  • Some design and / or UX experience a big plus
  • Proficient in at least one object-oriented programming language such as Java, C++ or C#
  • Strong problem solving skills and computer science fundamentals (data structures, algorithms)
  • Experience in common web technologies: HTML, CSS, JavaScript, AJAX
  • Experience with REST and other web service models

Click here to find out more and to apply for this role – Software Development Engineer, Digital Photography Review

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on We’re hiring! DPReview seeks Software Development Engineer

Posted in Uncategorized

 

Google software engineer shows what’s possible with smartphone cameras in low light

27 Apr
Image: Florian Kainz/Google

On a full-moon night last year, Google software engineer Florian Kainz took a photo of the Golden Gate Bridge, with the city of San Francisco in the background, using professional camera equipment: a Canon EOS-1D X and a Zeiss Otus 28mm F1.4 ZE lens.

When he showed the results to his colleagues at Google Gcam, a team that focuses on computational photography, they challenged him to re-take the same shot with a smartphone camera. Google’s HDR+ camera mode on the Google Nexus and Pixel phones is one of Gcam’s most interesting products. It allows for decent image quality at low light levels by shooting a burst of up to ten short exposures and averaging them into a single image, reducing blur while capturing enough total light for a good exposure.

However, Florian, being an engineer, wanted to find out what a smartphone camera can do when pushed to the current limits of the technology, so he wrote an Android camera app with manual control over exposure time, ISO and focus distance. When the shutter button is pressed the app waits a few seconds and then records up to 64 frames with the selected settings. The app saves DNG raw files which can then be downloaded for processing on a PC.

He used the app to capture several night scenes, including an image of the night sky, with a Nexus 6P smartphone, which is capable of shutter speeds up to 2 seconds at high ISOs. On each occasion he shot an additional burst of black frames after covering the camera lens with opaque adhesive tape. Back at the office the frames were combined in Photoshop. Individual images were, as you would expect, very noisy, but computing the mean of all 32 frames cleaned up most of the grain, and subtracting the mean of the 32 black frames removed faint grid-like patterns caused by local variations in the sensor’s black level.
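The core of that Photoshop workflow, mean-stacking the burst and subtracting the averaged black frames, is simple to express. Below is a minimal NumPy sketch of the idea (not Kainz’s actual code), assuming the raw frames have already been decoded into floating-point arrays.

```python
import numpy as np

def stack_burst(light_frames, black_frames):
    """Average a burst of exposures and subtract the averaged black frames.

    light_frames, black_frames: lists of 2-D float arrays, one per exposure.
    Averaging N independent frames reduces random noise by roughly sqrt(N);
    subtracting the mean black frame removes fixed-pattern black-level
    offsets that averaging alone cannot touch.
    """
    mean_light = np.mean(np.stack(light_frames), axis=0)
    mean_black = np.mean(np.stack(black_frames), axis=0)
    return np.clip(mean_light - mean_black, 0.0, None)
```

This sketch assumes the frames are already aligned; handheld bursts would additionally need the frames registered to each other before averaging, which is part of what makes HDR+ and a manual workflow like this more involved than the arithmetic suggests.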

The results are very impressive indeed. At 9 to 10MP the images are smaller than the output of most current DSLRs, but the photos are sharp across the frame, show little noise, and offer surprisingly good dynamic range. Getting to those results took a lot of post-processing work, but with smartphone processing power continuing to increase it should only be a matter of time before the sort of complex processing Florian did manually in Photoshop can be done on the device. You can see all the image results in full resolution and read Florian’s detailed description of his capture and editing workflow on the Google Research Blog.

 Image: Florian Kainz/Google

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on Google software engineer shows what’s possible with smartphone cameras in low light

Posted in Uncategorized

 

Engineer Prints: Now in COLOR!

17 Jun

Call it a quarter-life crisis … our Engineer Prints moved and got a makeover.

Our giant Engineer Prints have left the Photojojo site, but we’re still printing them over at Parabo Press!

In addition to the Parabo app, you can now order them right from the Parabo Press website where you can upload files up to 20MB (that’s biiig).

Take $10 off your order with the coupon EPS10.

Plus, they’re now available in FULL COLOR!
(…)
Read the rest of Engineer Prints: Now in COLOR! (29 words)


© laurel for Photojojo, 2016.

Photojojo

 
Comments Off on Engineer Prints: Now in COLOR!

Posted in Equipment

 

‘We want to make lenses that can be used forever’: Sony engineer discusses G Master lenses

10 Feb

‘We want to make lenses that can be used forever,’ says a senior engineer behind Sony’s new G Master lenses. At the launch of the ‘G Master’ range of high-end lenses, we spoke to Motoyuki Ohtake, Distinguished Engineer in Sony’s Lens Design Department, about the process and the philosophy behind the latest lenses.

Development of the series involved re-thinking several parts of the design and manufacturing process, he says.

Motoyuki Ohtake, Distinguished Engineer, Opto Design Department, Core Technology Division, Digital Imaging Business Group at Sony.

To understand how the lenses came about, he explained the usual process of lens development. ‘Sometimes we propose a new lens but often it comes from the product planning department [the marketing department that assesses potential requirements and demands]. We then make a series of rough designs; some are big, with high optical performance, others are more compact but maybe not so optically strong. We discuss which design to proceed with, based on what we think is the optimal balance of cost, performance and size to make the perfect product.’

After deciding which of the initial designs to pursue, there’s a great deal of collaboration between teams, he explains: ‘we work with the mechanical team, the lens motor team, the lens control team, the lens element team and maybe the equipment team who will have to prepare the manufacturing process.’ Each of these teams feeds its expertise into the design. ‘Maybe the optical team proposes a new lens design and the motor team tells us which motor is best. Or warns us if the focus will be too slow. They feed back about the mechanical aspects,’ he says.

The G Master series required many of these teams to re-think their parts of the process, from design to manufacture.

Re-thinking basic assumptions

‘For the G Master lenses we decided we would assess the spatial frequency at 50 lines per mm,’ says Ohtake: ‘Usually lens makers, including ourselves, evaluate lenses at 10 and 30 lpmm (or 10, 20 and 40 for Carl Zeiss-branded optics).’

‘At the start of the process we all agreed we should change the spatial frequency [to a more challenging target],’ he says: ‘but which is best to get good performance? We could design for 100 lpmm but the lens would become very bulky and long – which might not be a very practical lens. A balance of the size and the optical performance was very important.’

The target of 50 lpmm wasn’t dictated by the company’s 40MP camera or 4K video, he says. ‘All our FE lenses were designed for at least 40MP. Because we have an image sensor team within Sony, we get to see the sensor roadmap, so we’ve been designing for this all along with FE. With the G Master we’d like to make lenses that can be used forever.’
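For context, here is a rough back-of-the-envelope calculation (our numbers, not Sony’s) relating a 40MP full-frame sensor to spatial frequency, reading the 50 lpmm figure as 50 line pairs per mm. It assumes a 36 x 24mm sensor and ignores the Bayer filter and any anti-aliasing filter.

```python
# Illustrative only: what spatial frequency can a 40MP full-frame sensor
# theoretically resolve, and how does 50 lp/mm compare?
megapixels = 40e6
sensor_w_mm, sensor_h_mm = 36.0, 24.0            # assumed full-frame dimensions

pixels_wide = (megapixels * sensor_w_mm / sensor_h_mm) ** 0.5   # ~7746 px
pixel_pitch_mm = sensor_w_mm / pixels_wide                      # ~0.0046 mm
nyquist_lp_mm = 1.0 / (2.0 * pixel_pitch_mm)                    # ~108 lp/mm

print(f"pixel pitch ~ {pixel_pitch_mm * 1000:.2f} um, "
      f"Nyquist limit ~ {nyquist_lp_mm:.0f} lp/mm")
```

So 50 lp/mm sits well below such a sensor’s theoretical Nyquist limit of roughly 108 lp/mm, but because lens contrast falls steeply with spatial frequency it is a much tougher design target than the 10 and 30 lpmm frequencies lenses have traditionally been evaluated at.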

A focus on bokeh

But it’s not just the more stringent frequency assessment that was developed for the G Master lenses, Ohtake explains: ‘We had to discuss what good bokeh means. We have some designers from Minolta who understand that the spirit of the ‘G’ lenses was good bokeh in the background but we had no way to evaluate that.

‘We looked at what is considered good bokeh and how it affects not just the background rendering but also the transition from perfectly sharp to out-of-focus regions. We developed a way to evaluate bokeh and were able to make a simulation. This meant we didn’t have to build a lens to see how it performed, we could now computer model it before taking a design too far.’

This is a significant change, Sony says, as it means bokeh can be one of the primary design considerations, rather than being something that can only be adjusted later in the process, once the main aspects of the design have been settled upon.

Another piece of the puzzle – shape and smoothness

This analysis of the factors that affect bokeh showed that both the precision of the lens molding and the smoothness of the lens surface could have an effect.

‘Traditionally it was very hard to achieve both: current technology gives a roughness on the scale of 20-30nm on the aspheric surface. Improving this usually involved polishing, which can then lead to the lens element being slightly unevenly shaped.’

‘We developed a new way of making the lens element and a new molding process, including a new machine. Now we can get roughness down to around 10nm and get a more accurate shape to the aspherical surface.’

AF technologies

Ohtake wouldn’t budge when we asked which his favorite lens was, but immediately reached for the 85mm F1.4 when we took this group shot.

The first three G Master lenses use three different AF motor technologies between them – emphasizing Ohtake’s point that different technologies work better in different contexts.

The 24-70mm F2.8 uses a Direct Drive SSM system (piezoelectric element). ‘This is very fast, very quiet and very precise. We used a linear motor for the 24-70mm F4 but this lens has a heavier focus element, so direct drive was a better choice.’

The focus element in the 85mm F1.4 was even heavier, however. ‘For the 85mm we use a ring type focus motor. This is very good for heavy lens elements and our lens software team developed a good algorithm so that it works well with contrast-detection autofocus’ (a traditional weakness for ring-type designs).

Finally, the 70-200mm uses a combination of a linear actuator and a ring-type focus motor. ‘The focus group had become too heavy so we separated the two focusing lenses. One is very heavy, so we used a ring type motor for that one, then used a linear motor for the other. The ring type is used to quickly achieve approximate focus and the linear motor is used for the high precision aspect.’

Still better to correct optically

Discussing the idea that bokeh and sharpness have previously been in conflict, we asked Ohtake about other trade-offs. We’ve been told that the ability to correct lateral chromatic aberration in software makes lens design easier, since you don’t have to correct it optically, which can quickly complicate the lens design and detract from other parameters.

Not for G Master lenses, he explains. ‘Light doesn’t separate nicely into red, green and blue’ (the color channels that most cameras capture, and which can be adjusted, relative to one another, to correct lateral CA). It’s a continuum with each wavelength being displaced slightly differently. ‘To get the really high contrast we wanted in G Master, we had to suppress it in the lens.’
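As a rough illustration of the software approach Ohtake is contrasting with, here is a minimal NumPy sketch that ‘corrects’ lateral CA by radially rescaling the red and blue channels relative to green. The scale factors are invented for the example; real corrections come from lens profiles, and, as Ohtake points out, a single per-channel scale cannot fully capture a continuum of wavelengths each displaced slightly differently.

```python
import numpy as np

def correct_lateral_ca(rgb, scale_r=1.0005, scale_b=0.9995):
    """Nearest-neighbour radial rescale of the R and B channels.

    rgb: H x W x 3 float array. scale_r / scale_b are hypothetical,
    illustrative magnification factors relative to the green channel.
    """
    h, w, _ = rgb.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    out = rgb.copy()
    for ch, s in ((0, scale_r), (2, scale_b)):    # green is left untouched
        # sample each output pixel from its radially scaled source position
        sy = np.clip(cy + (yy - cy) / s, 0, h - 1).astype(int)
        sx = np.clip(cx + (xx - cx) / s, 0, w - 1).astype(int)
        out[..., ch] = rgb[sy, sx, ch]
    return out
```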

The future of APS-C

We also asked Ohtake about Sony’s APS-C lenses for E-mount. His team likes designing APS-C lenses, he says: ‘The focus elements are light, so it’s easier to design. We have all these focus motor technologies in-house and we’d like to try them in APS-C lenses if that’s what the Product Planning team says is required.’

Articles: Digital Photography Review (dpreview.com)

 
Comments Off on ‘We want to make lenses that can be used forever’: Sony engineer discusses G Master lenses

Posted in Uncategorized

 

Turn Old Family Pics Into Big Ol’ Engineer Prints

21 Jun

Dads are great. So great, in fact, that they deserve a really big present this Father’s Day.

Are you thinking what we’re thinking? Let’s say it together now:

Engineer Prints!

These hugenormous prints are perfect gifts, especially if you make ‘em out of old family pictures.

So we’re gonna 1) show you how to digitize old prints and 2) give you some fun ideas for making your Engineer Prints unique and wonderful, just like your fave guy.

Engineer Prints Say Happy Father’s Day!

(…)
Read the rest of Turn Old Family Pics Into Big Ol’ Engineer Prints (895 words)


© Taylor for Photojojo, 2015.


Photojojo

 
Comments Off on Turn Old Family Pics Into Big Ol’ Engineer Prints

Posted in Equipment