
Posts Tagged ‘TECH’

Canon shows off its latest CMOS sensor tech in new promo video

18 Apr

Canon isn’t only in the business of making DSLR, mirrorless and point-and-shoot cameras. It’s also in the business of making the CMOS sensors inside those cameras—arguably their most important component. And to showcase what it’s achieved with its latest lineup of CMOS sensors, Canon USA has created a short promotional video.

The video showcases a variety of sensors from across Canon’s product line, from the extreme low-light full-frame sensor the company showed off earlier this year to more industrial CMOS sensors made for surveillance and security purposes.

The video description from Canon USA:

This video showcases Canon’s variety of sensors. For several decades Canon has been developing and manufacturing advanced CMOS sensors with state-of-the-art technologies for exclusive use in Canon products. These sensors are a critical driving force behind many of our successful product lines, ranging from consumer products all the way up to high-end business and industrial solutions.

The video does seem a touch overly dramatic for what it is, and may even come across as a bit cheesy at times (why show new sensor tech inside a Canon EOS-1D that came out in 2001?). Nonetheless, it’s an interesting watch that gives a good overview of the work Canon has put into its CMOS sensors in recent years—technology that will hopefully make its way into the Canon DSLRs and mirrorless cameras of the future.


Tech Insights teardown confirms Galaxy S9 uses Samsung and Sony image sensors

12 Apr

The analysts at Tech Insights have torn down the Samsung Galaxy S9 to analyze the device’s camera sensors and, as usual, the summary of their findings makes for interesting reading for anyone interested in image sensor technology. The main takeaway from Tech Insights’ report is that Samsung is once again using different image sensors by region.

Depending on where you buy the Galaxy S9, your device will either come with a Samsung S5K2L3 or Sony IMX345 chip.

Both imagers use a 3-layer stacked structure, comprising a CMOS image sensor, image signal processor (ISP) and DRAM. The Sony IMX345 is very similar in structure to the IMX400, the world’s first 3-layer stacked imager that was introduced on the Sony Xperia XZ flagship a year ago.

The Samsung S5K2L3 ISOCELL Fast sensor is the Korean manufacturer’s first 3-layer stacked model. In contrast to Sony’s custom solution with the DRAM in the middle, Samsung has opted for connecting the DRAM chip face-to-back on the ISP. The assembly also includes a dummy silicon structure filling the unoccupied space next to the DRAM chip.

These structural differences almost certainly won’t translate into noticeable performance or image quality differences between Galaxy S9 smartphones, but they do show that Samsung is still far from its goal of dethroning Sony to become #1 in the global image sensor market—it’s hard to dethrone the competition when you’re still using their sensors.

For a lot more detail on the sensor structure and assembly, head over to Tech Insights, where you can also purchase more in-depth reports if you really want to dive deep.


Samsung explains the sensor tech behind the Galaxy S9’s super-slow-motion mode

05 Apr

Samsung published a couple of technical blog posts today, providing some detail on the stacked sensor technology used in the new Galaxy S9 and S9 Plus smartphones, and specifically how this tech is used to power the devices’ super-slow-motion mode.

This mode can record 960 frames per second at HD resolution for a duration of 0.2 seconds, which translates into roughly six seconds of playback time at 30 fps—32 times slower than standard video. The resulting videos can be reversed, exported as GIFs and edited in other ways.

To achieve these blistering frame rates, Samsung has adopted imaging technology similar to what we’ve previously seen in some Sony devices. The S9’s sensor achieves faster readout, higher bandwidth and faster video processing than previous Galaxy generations by using a three-layer stacked design that consists of the CMOS image sensor itself, a 4x faster readout circuit, and a dedicated DRAM memory chip for buffering.
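To get a feel for why that DRAM layer matters, here is a back-of-the-envelope sketch of the buffering arithmetic in Python. The per-frame size is a rough assumption for illustration; Samsung hasn’t published these internals.

```python
# Back-of-the-envelope numbers for the super-slow-motion buffer.
# BYTES_PER_FRAME is a hypothetical figure, not a Samsung spec.

CAPTURE_FPS = 960        # super-slow-motion capture rate
CAPTURE_SECONDS = 0.2    # maximum burst duration
PLAYBACK_FPS = 30        # standard playback rate
BYTES_PER_FRAME = 2.8e6  # assumed size of one HD frame in the buffer

frames = int(CAPTURE_FPS * CAPTURE_SECONDS)    # 192 frames per burst
playback_seconds = frames / PLAYBACK_FPS       # ~6.4 s, i.e. 32x slower

buffer_bytes = frames * BYTES_PER_FRAME                   # ~540 MB in 0.2 s
bandwidth_gbps = CAPTURE_FPS * BYTES_PER_FRAME * 8 / 1e9  # ~21.5 Gbit/s

print(f"{frames} frames -> {playback_seconds:.1f} s at {PLAYBACK_FPS} fps")
print(f"buffer: {buffer_bytes / 1e6:.0f} MB, sustained rate: {bandwidth_gbps:.1f} Gbit/s")
```

Sustaining tens of gigabits per second is far easier across a die-to-die connection inside the sensor stack than over the link to the application processor, which is exactly the obstacle Samsung’s engineers describe below.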

In addition to enabling slow-motion, the stacked sensor helps reduce rolling shutter effects in video mode and counters camera shake through frame-stacking methods.

“We were able to achieve a readout speed that is four times faster than conventional cameras thanks to a three-layer stacked image sensor that includes the CMOS image sensor itself, a fast readout circuit, and a dedicated dynamic random-access (DRAM) memory chip, which previously was not added to image sensors,” explained Dongsoo Kim. “Integrating DRAM allowed us to overcome obstacles such as speed limits between the sensor and application processor (AP) in a high-speed camera with 960fps features.”

You can see some of the Samsung super-slow-motion video results in the video below. Samsung’s article on the technology is available on its blog, where you’ll also find an interview with the team behind the new sensor.


GoPro licensing deal will let third parties use the company’s camera tech

24 Mar
Photo by Dose Media

GoPro has announced a new deal with manufacturing services company Jabil. The multi-year agreement, officially announced Thursday, will involve GoPro licensing its intellectual property and reference design to Jabil, which will use it to incorporate GoPro sensor modules and camera lenses into third-party products.

According to the GoPro announcement, the company will retain the right to approve any third-party services and products in which its technology is used. The agreement includes an equipment license, ultimately covering “a range of products and services each company offers,” among them “certain digital imaging and consumer products,” says GoPro.

This isn’t the first time Jabil and GoPro have worked together. Jabil Optics’ vice president Irv Stein calls the new deal “a natural extension” of the companies’ existing relationship, explaining that the GoPro tech will likely be used in the “enterprise” segment:

This agreement is a natural extension of our long-standing relationship with GoPro and our commitment to developing innovative technologies. Early market feedback indicates strong demand in the enterprise action camera segment for applications in smart homes, military, fire, police, rescue, and security.

Additional details about the agreement, including financial figures, weren’t disclosed. However, the expanded partnership comes at a time when GoPro faces ongoing financial troubles that have resulted in multiple rounds of layoffs over the past months. Licensing its technology may help GoPro weather its turbulent action camera sales.

Press Release

GoPro and Jabil Announce Global Technology and Equipment License

San Mateo, CA and St. Petersburg, FL, March 22, 2018 – GoPro, Inc. and Jabil Inc. today announced a global, multi-year technology and equipment license. With this agreement, Jabil will leverage GoPro’s cutting-edge reference design and IP to produce camera lens and sensor modules for incorporation into GoPro-approved third-party products and solutions. This agreement builds on GoPro and Jabil’s longstanding relationship.

“This collaborative approach with Jabil will enable innovative, GoPro enabled products and services from some of the most exciting hardware and software companies out there,” said Sandor Barna, GoPro’s chief technology officer. “Imagine a world where video conferencing, robotics, and even self-driving cars are powered by GoPro’s camera lenses and image sensors. Together, GoPro and Jabil can make this a reality.”

This agreement covers a range of products and services each company offers, including certain digital imaging and consumer products. GoPro and Jabil have a history of collaborating to bring high-quality, cutting-edge products to consumers, including GoPro’s line of HERO cameras, starting with HERO4.

“This agreement is a natural extension of our long-standing relationship with GoPro and our commitment to developing innovative technologies,” said Irv Stein, Jabil’s vice president of Jabil Optics. “Early market feedback indicates strong demand in the enterprise action camera segment for applications in smart homes, military, fire, police, rescue, and security.”


Vivo’s AI-powered ‘Super-HDR’ tech takes on Google’s HDR+

15 Mar

Google’s HDR+ mode is widely regarded as the current benchmark for computational imaging on smartphones, but Chinese manufacturer Vivo wants to unseat the champion. Earlier today, Vivo announced its AI-powered Super HDR feature—a direct competitor to the Google system found in Pixel devices.

Super HDR is designed to improve HDR performance while keeping a natural and “unprocessed” look. To achieve this, the system captures 12 exposures (Google uses 9) and merges them into a composite image, allowing for fine control over image processing.

Additionally, AI-powered scene detection algorithms identify different elements of a scene—people, sky, clouds, rocks, trees and so on—and adjust exposure for each of them individually. According to Vivo, the end result looks more natural than most images produced with simpler tone-mapping techniques.
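Vivo hasn’t published the details of its pipeline, so as a rough illustration of the two ideas described above (merging a bracketed burst, then adjusting detected regions independently), here is a minimal sketch. The function names and the weighting scheme are hypothetical:

```python
import numpy as np

def merge_exposures(frames, exposures):
    """Naive HDR merge: estimate scene radiance from a burst of
    differently exposed frames, trusting well-exposed pixels the most.
    `frames` is a list of float images scaled to [0, 1]; `exposures`
    holds each frame's relative exposure time. Purely illustrative."""
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for img, t in zip(frames, exposures):
        w = np.exp(-4.0 * (img - 0.5) ** 2)  # weight mid-tones highest
        num += w * (img / t)                 # per-frame radiance estimate
        den += w
    return num / np.maximum(den, 1e-6)

def tone_by_region(radiance, labels, gains):
    """Stand-in for the AI scene detection step: apply a per-region gain
    (`labels` is an integer mask, e.g. 0 = sky, 1 = person, 2 = trees)."""
    out = radiance.copy()
    for label, gain in gains.items():
        out[labels == label] *= gain
    return np.clip(out, 0.0, 1.0)
```

A real pipeline would align the burst first and use far more careful tone mapping, but the division of labor (merge for dynamic range, then segment-aware adjustment for a natural look) matches what Vivo describes.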

Looking at the provided sample images, the system appears to do an impressive job. That said, this kind of marketing imagery has to be taken with a pinch of salt; we’ll see what the system is really capable of when it’s available in a production device we can test.

Speaking of which, as of now, we don’t know which device Super HDR will be shipping on first, but there is a chance it might be implemented on the upcoming Vivo V9, which is expected to be announced on March 22nd. The V9 is currently rumored to feature a Snapdragon 660 chipset and 12+8MP dual-camera.


Google explains the tech behind the Pixel 2’s Motion Photos feature

15 Mar

Apple was the first mobile manufacturer to popularize still/video hybrid files with its Live Photos that were introduced on the iPhone 6s. Google then launched the Motion Stills app to improve and stabilize Apple’s Live Photos, and ported the system to the Android world soon after.

For the new Motion Photos feature on its latest Pixel 2 devices, Google built on Motion Stills, improving the technology with advanced stabilization that combines the devices’ software and hardware capabilities. As before, Motion Photos captures a full-resolution JPEG with an embedded 3-second video clip every time you hit the shutter.

However, on the Pixel 2, the video clip also contains motion metadata that is derived from the gyroscope and optical image stabilization sensors.

This data is used to optimize trimming and stabilization of the motion photo and, combined with software-based visual tracking, the new approach aligns the background more precisely than the previous, purely software-based Motion Stills system. As before, the final results can be shared with friends or on the web as video files or GIFs.
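Google’s actual pipeline is considerably more involved, but the core idea of metadata-driven stabilization (smooth the motion track recorded by the gyroscope, then warp each frame toward the smoothed track) can be sketched in a few lines. This toy version works on a single rotation axis and is purely illustrative:

```python
import numpy as np

def stabilization_corrections(gyro_angles, window=9):
    """Given a per-frame camera angle track derived from gyro metadata,
    return the correction to apply to each frame (e.g. as a rotation or
    warp) so the clip follows a smoothed trajectory. Real systems work
    with full 3-axis rotations plus OIS lens-shift data, and refine the
    result with visual tracking; this is a one-axis toy."""
    angles = np.asarray(gyro_angles, dtype=float)
    kernel = np.ones(window) / window
    padded = np.pad(angles, window // 2, mode="edge")  # cover clip edges
    smoothed = np.convolve(padded, kernel, mode="valid")
    return smoothed - angles  # per-frame compensation
```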

If you are interested in more technical details of the Motion Photos feature, head over to the Google Research Blog. A gallery of Motion Photo files is available here.


Google just made the tech behind its ‘portrait mode’ open source

14 Mar

Semantic image segmentation is the task of categorizing every pixel in an image and assigning it a semantic label, such as “road”, “sky”, “person” or “dog”. And now, Google has released its latest image segmentation model as open source, making it available to any developers whose apps could benefit from the technology.

The function can be used in many ways. One recent application in the world of smartphones is the portrait mode on Google’s latest Pixel 2 devices. Here, semantic image segmentation is used to help separate objects in the foreground from the image background. However, you could also imagine applications for optimizing auto exposure or color settings.

This kind of pixel-precise labeling requires higher localization accuracy than other object recognition technologies, but it can also deliver higher-quality results. The newly open-sourced model, DeepLab-v3+, is Google’s latest take on the problem, and developers are free to bake it into their own applications.

Modern semantic image segmentation systems built on top of convolutional neural networks (CNNs) have reached accuracy levels that were hard to imagine even five years ago, thanks to advances in methods, hardware, and datasets. We hope that publicly sharing our system with the community will make it easier for other groups in academia and industry to reproduce and further improve upon state-of-the-art systems, train models on new datasets, and envision new applications for this technology.
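As a concrete picture of what a segmentation model hands to downstream code, here is a minimal sketch. It assumes a model such as DeepLab-v3+ has already produced per-pixel class scores; nothing here is Google’s actual code:

```python
import numpy as np

def label_map(logits):
    """Collapse per-pixel class scores of shape (H, W, C), such as the
    output of a segmentation model, into an (H, W) map of class indices."""
    return np.argmax(logits, axis=-1)

def portrait_mask(labels, person_class):
    """Binary foreground mask for a portrait-mode-style effect: True
    where a pixel was labeled as the person class. Downstream code can
    then blur everything outside the mask (real portrait modes also use
    depth information for a progressive blur)."""
    return labels == person_class
```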

If you are interested in finding out more about DeepLab-v3+, head over to the Google Research Blog for more details.


DPReview on TWiT: tech trends in smartphone cameras

20 Feb

As part of our regular appearances on ‘The New Screen Savers’, a show on the TWiT Network (named after its flagship show, This Week in Tech), our Science Editor Rishi Sanyal joined host Leo Laporte and co-host Megan Morrone to talk about how smartphone cameras are revolutionizing photography. Watch the segment above, then catch the full episode here.

Rishi has also expounded on some of the topics from the segment below, with detailed examples that clarify several of the points. Have a read after the fold once you’ve watched the segment.

You can watch The New Screen Savers live every Saturday at 3pm Pacific Time (23:00 UTC), on demand through our articles, the TWiT website, or YouTube, as well as through most podcasting apps.


So who wins? iPhone X or Pixel 2?

Not so fast. Neither.

Each has its strengths, which we hope to tell you about in our video segment above and in our examples below. Google and Apple take different approaches, and each has its pros and cons, but there are common overlapping practices and themes as well. And that’s before we begin discussing video, where the iPhone’s 4K/60p HEVC video borders on professional quality while Google’s stabilization may make you want to chuck your gimbal.

Smartphones have to deal with the fact that their cameras, and therefore sensors, are tiny. And since we all (now) know that, generally speaking, it’s the amount of light you capture that determines image quality, smartphones have a serious disadvantage to deal with: they don’t capture enough light. But that’s where computational photography comes in. By combining machine learning, computer vision, and computer graphics with traditional optical processes, computational photography aims to enhance what is achievable with traditional methods.

Intelligent exposure and processing? Press. Here.

One of the defining characteristics of smartphone photography is the idea that you can get a great image with one button press, and nothing more. No exposure decision, no tapping on the screen to set your exposure, no exposure compensation, and no post-processing. Just take a look at what the Google Pixel 2 XL did with this huge dynamic range sunrise at Banff National Park in Canada:

Sunrise at Banff, with Mt. Rundle in the background. Shot on Pixel 2 with one button press. I also shot this with my Sony a7R II full-frame camera, but that required a 4-stop reverse graduated neutral density (‘Daryl Benson’) filter, and a dynamic range compensation mode (DRO Lv5) to get a usable image. While the resulting image from the Sony was head-and-shoulders above this one at 100%, I got this image from the Pixel 2 by just pointing and shooting.

Apple’s iPhones try to achieve similar results by combining multiple exposures if the scene has enough contrast to warrant it. But iPhones can’t match these results (yet), since they don’t average as many ‘samples’ as the Google Pixel 2. Sometimes Apple’s longer exposures blur subjects, and iPhones tend to overexpose and blow highlights for the sake of exposing the subject properly. Apple is also still fairly reluctant to enable HDR in its ‘Auto HDR’ mode.

The Pixel 2 was able to achieve the image above by first determining the focal plane exposure required to avoid blowing out large bright (non-specular) areas, an approach known as ETTR or ‘expose-to-the-right’. When you press the shutter button, the Pixel 2 goes back in time 9 frames, aligning and averaging them to give you a final image with quality similar to what you might expect from a sensor with 9x the surface area.

How does it do that? It’s constantly keeping the last 9 frames it shot in memory, so when you press the shutter it can grab them, break each into many square ’tiles’, align them all, and then average them. Breaking each image into small tiles allows for alignment despite photographer or subject movement by ignoring moving elements, discarding blurred elements in some shots, or re-aligning subjects that have moved from frame to frame. Averaging simulates the effects of shooting with a larger sensor by ‘evening out’ noise.
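Here is a deliberately naive sketch of that tile-based align-and-average idea, for flavor. Actual HDR+ uses coarse-to-fine alignment and robustness weights for moving content; this toy version only shows the shape of the algorithm:

```python
import numpy as np

def align_and_average(frames, tile=16, search=4):
    """Average a burst of 2-D float frames after tile-wise alignment.
    For each tile of the reference frame, search a small window in every
    other frame for the best-matching tile, then average the matches.
    Assumes frame height and width are multiples of `tile`."""
    ref = frames[0]
    h, w = ref.shape
    acc = ref.copy()
    for frame in frames[1:]:
        for y in range(0, h - tile + 1, tile):
            for x in range(0, w - tile + 1, tile):
                target = ref[y:y + tile, x:x + tile]
                best, best_err = target, np.inf
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy <= h - tile and 0 <= xx <= w - tile:
                            cand = frame[yy:yy + tile, xx:xx + tile]
                            err = np.sum((cand - target) ** 2)
                            if err < best_err:
                                best, best_err = cand, err
                acc[y:y + tile, x:x + tile] += best
    return acc / len(frames)
```

Averaging N aligned frames cuts photon shot noise by roughly the square root of N, which is the same improvement you would get from a sensor with N times the light-gathering area; that is why a 9-frame merge behaves like a 9x larger sensor.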

That’s what allows the Pixel 2 to capture such a wide dynamic range scene: it exposes for the bright regions, reduces noise in static elements of the scene through image averaging, and avoids blurring the moving (water) elements by making intelligent decisions about content that shifts from frame to frame. Sure, the moving elements are noisier (since fewer of the 9 frames can be dedicated to averaging them), but overall, do you see anything but a pleasing image?

Autofocus

Who focuses better? Google Pixel 2, hands down. Its dual pixel AF uses nearly the entire sensor for autofocus (binning the high-resolution sensor into a low-resolution mode to decrease noise), while also using HDR+ and its 9-frame image averaging to further decrease noise and provide a usable signal for AF calculations.

Google Pixel 2 can focus lightning fast even in indoor artificial light, which allowed me to snap this candid before it was over in a split second. The iPhone X captured a far less interesting moment seconds later when it finally achieved focus, missing the candid moment.

And despite the left and right perspectives ‘seen’ by the split pixels in the Pixel 2 sensor being separated by a stereo baseline of less than 1mm, an impressive depth map can be built, rendering an optically accurate lens blur. This isn’t just a matter of masking the foreground and blurring the background; it’s an actual progressive blur based on depth.
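To make the dual-pixel idea concrete, here is a toy block-matching sketch that estimates disparity between the two half-pixel views. With such a tiny baseline the true shifts are fractions of a pixel, so production systems fit sub-pixel minima and smooth the result heavily; this integer-only version is purely illustrative:

```python
import numpy as np

def dualpixel_disparity(left, right, max_shift=3, patch=8):
    """Estimate a coarse disparity map from the 'left' and 'right'
    half-pixel views of a dual-pixel sensor (2-D float arrays of equal
    size, dimensions divisible by `patch`). For each patch, test small
    horizontal shifts and keep the one that matches best; disparity is
    a proxy for distance from the focus plane."""
    h, w = left.shape
    disp = np.zeros((h // patch, w // patch))
    for i, y in enumerate(range(0, h, patch)):
        for j, x in enumerate(range(0, w, patch)):
            ref = left[y:y + patch, x:x + patch]
            errs = []
            for s in range(-max_shift, max_shift + 1):
                cols = np.clip(np.arange(x, x + patch) + s, 0, w - 1)
                errs.append(np.sum((right[y:y + patch, :][:, cols] - ref) ** 2))
            disp[i, j] = np.argmin(errs) - max_shift
    return disp
```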

That’s what allowed me to nail this candid image the instant after my wife and child whirled around to face the camera. Nearly all of my iPhone X images of this scene were either out of focus or captured a less interesting, non-candid moment because of the shutter lag required to focus. The iPhone X uses only approximately 3% of its pixels for its ‘Dual PDAF’ autofocus, as opposed to the Pixel 2’s use of its entire sensor combined with multi-frame noise reduction, not just for image capture but also for focus.

Portrait Lighting

While we’ve been praising the Pixel phones, Apple leads smartphone photography in a number of ways. First and foremost: color accuracy. Apple displays are all calibrated and profiled to display accurate colors, so no matter which Apple or color-managed device (or print) you’re viewing on, colors look the same. Android devices are still the Wild West in this regard, but Google is trying to solve this with a proper color management system (CMS) under the hood. It will be some time before all devices catch up, and even Google itself is struggling with its current display and CMS implementation.

But let’s talk about Portrait Lighting. Look at the iPhone X ‘Contour Lighting’ shot below (left) vs. what the natural lighting looked like (right, shot on a Google Pixel 2 with no special lighting features). While the Pixel 2 image is more natural, the iPhone X image is far more interesting, as if I’d lit my subject with a light on the spot.

Left: Apple iPhone X, ‘Contour Lighting’. Right: Google Pixel 2.

Apple builds a 3D map of a face using trained algorithms, then allows you to re-light your subject using modes such as ‘natural’, ‘studio’ and ‘contour’ lighting. The latter highlights points of the face like the nose, cheeks and chin that would’ve caught the light from an external light source aimed at the subject. This gives the image a dimensionality you could normally only achieve using external lighting solutions or a lot of post-processing.

Currently, the Pixel 2 has no such feature, so we get the flat lighting the scene actually had on the right. But, as you can imagine, it won’t be long before we see other phones and software packages taking advantage of—and even improving on—these computational approaches.

HDR and wide-gamut photography

And then we have HDR. Not the HDR you’re used to thinking about, which creates flat images from large dynamic range scenes. No, we’re talking about the ability of HDR displays (bright, contrasty OLEDs, for example) to show the wide range of tones and colors cameras can capture these days, rather than sacrificing global contrast just to increase and preserve local contrast, as traditional camera JPEGs do.

iPhone X is the first device ever to support the HDR display of HDR photos. That is: it can capture a wide dynamic range and color gamut but then also display them without clipping tones and colors on its class-leading OLED display, all in an effort to get closer to reproducing the range of tones and colors we see in the real world.


Have a look below at a Portrait Mode image I shot of my daughter that utilizes colors and luminances in the P3 color space. P3 is the color space Hollywood is now using for most of its movies (it’s similar, though shifted, to Adobe RGB). You’ll only see the extra colors if you have a P3-capable display and a color-managed OS/browser (macOS + Google Chrome, or the newest iPads and iPhones). On a P3 display, switch between ‘P3’ and ‘sRGB’ to see the colors you’re missing with sRGB-only capture.

Or, on any display, hover over ‘Colors in P3 out-of-gamut of sRGB’ to see (in grey) what you’re missing with a sRGB-only capture/display workflow.

Left to right: iPhone X Portrait Mode image in the P3 color space; the same image in the sRGB color space; colors in P3 that are out of gamut for sRGB, highlighted in grey.

Apple is not only taking advantage of the extra colors of the P3 color space, it’s also encoding its images in the High Efficiency Image Format (HEIF), an advanced format aimed at replacing JPEG. HEIF is more efficient, allows for 10-bit color encoding (to avoid banding while allowing for more colors), and supports HDR encoding so that a larger range of tones can be shown on HDR displays.
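If you want to check for yourself whether a color survives the trip from P3 to sRGB, the test is a pair of matrix transforms. The matrices below are the commonly published approximations, so treat this as a sketch rather than a reference implementation:

```python
import numpy as np

# Approximate linear RGB -> XYZ matrices (D65 white point), rounded.
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
P3_TO_XYZ = np.array([[0.4866, 0.2657, 0.1982],
                      [0.2290, 0.6917, 0.0793],
                      [0.0000, 0.0451, 1.0439]])

def p3_outside_srgb(p3_linear_rgb):
    """True if a linear Display P3 color has no sRGB equivalent, i.e.
    one of the 'extra' colors an sRGB-only workflow throws away.
    Input is a length-3 array of linear (not gamma-encoded) values."""
    xyz = P3_TO_XYZ @ np.asarray(p3_linear_rgb, dtype=float)
    srgb = np.linalg.solve(SRGB_TO_XYZ, xyz)  # invert sRGB -> XYZ
    return bool(np.any((srgb < 0.0) | (srgb > 1.0)))

print(p3_outside_srgb([1.0, 0.0, 0.0]))  # a saturated P3 red: True
```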

But will smartphones replace traditional cameras?

For many, yes, absolutely. You’ve seen the autofocus speeds of the Pixel 2, assisted by not only dual pixel AF but also laser AF. You’ve seen the results of HDR+ image stacking, which will only get better with time. We’ve seen dual lens units that give you the focal lengths of a camera body and two primes, and we’ve seen the ability to selectively blur backgrounds and isolate subjects like the pros do.

Below is a shot from the Pixel 2 vs. a shot from a $4,000 full-frame body and 55mm F1.8 lens combo—which is which?

Full Frame or Pixel 2? Pixel 2 or Full Frame?

Yes, a trained eye—mine included—can pick out which is the smartphone image. But when is the smartphone image good enough?

Smartphone cameras are not only catching up with traditional cameras, they’re actually exceeding them in many ways. Take for example…

Creative control…

The image below exemplifies an interesting use of computational blur. The camera has chosen to keep much of the subject—like the front speaker cone, which has significant depth to it—in focus, while blurring the rest of the scene significantly. In fact, if you look at the upper right front of the speaker cabinet, you’ll see a good portion of it in focus. Past a certain point, though, the cabinet transitions abruptly but smoothly into heavy blur.

The camera and software have chosen to keep a significant depth-of-focus around the focal plane before significantly blurring objects far enough away from it. That’s the beauty of computational approaches: while an F1.2 lens can usually only keep one eye in focus—much less the nose or the ear—computational approaches let you choose how much of the subject to keep in focus, even while blurring the rest of the scene to a degree at which traditional optics couldn’t keep much of your subject sharp.

B&W speakers at sunrise. Take a look at the depth-of-focus vs. depth-of-field in this image. If you look closely, the entire speaker cone and a large front portion of the black cabinet are in focus. There is then a sudden, yet gradual, transition to very shallow depth-of-field. That’s the beauty of computational approaches: one can choose an extended (say, F5.6-equivalent) depth-of-focus near the focal plane, but then gradually transition to a far shallower (say, F2.0) depth-of-field outside of it. This allows one to keep much of the subject in focus, but achieve the subject isolation of a much faster lens.
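That blur-vs-depth profile is easy to express as a function: zero blur inside an extended band around the focus plane, then a ramp up to the background blur of a fast lens. Every parameter here is in arbitrary illustrative units, not drawn from any real camera:

```python
def blur_radius(depth, focus_depth, dof=0.5, max_blur=12.0, ramp=2.0):
    """Map scene depth to a blur radius: sharp within `dof` of the focus
    plane (the extended depth-of-focus), then ramping over `ramp` units
    up to `max_blur` (the shallow depth-of-field look). A renderer would
    apply this radius per pixel using the depth map."""
    d = abs(depth - focus_depth)
    if d <= dof:
        return 0.0              # keep the whole subject sharp
    return min(max_blur, max_blur * (d - dof) / ramp)
```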

Surprise and delight…

Digital assistants. Love them or hate them, they will be a part of your future, and they’re another way in which smartphone photography augments and exceeds traditional photography approaches. My smartphone is always on me, and when I have my full-frame Sony a7R III with me, I often transfer JPEGs from it to my smartphone. Those images (and 720p video proxies) automatically upload to my Google Photos account. From there any image or video that has my or my daughter’s face in it automatically gets shared with my wife without my so much as lifting a finger.

Better yet? Often I get a notification that Google Assistant has pulled a cute animated GIF from my movie it thinks is interesting. And more often than not, the animations are adorable:

Splash splash! in Xcaret, Quintana Roo, Mexico. Animated GIF auto-generated from a movie shot on the Pixel 2.

Machine learning allowed Google Assistant to automatically guess that this clip from a much longer video was an interesting moment I might wish to revisit and preserve. And it was right. Just as it was right in picking the moment below, where my daughter is clapping in response to her cousin clapping at successfully feeding her… after which my wife claps as well.

Claps all around!

Google Assistant is impressive in its ability to pick out meaningful moments from photos and videos. Apple takes a similar approach in compiling ‘Memories’.

But animated GIFs aren’t the only way Google Assistant helps me curate and find the important moments in my life. It also auto-curates videos that pull together photos and clips from my videos—be it from my smartphone or media I’ve imported from my camera—into emotionally moving ‘Auto Awesome’ compilations:

At any time I can hand-select the photos and videos I want in a compilation, down to the portions of each video, using an editing interface far simpler than Final Cut Pro or Adobe Premiere. I can even edit the auto-compilations Google Assistant generates, choosing my favorite photos, clips and music. And did you notice that the video clips and photos are cut to the beat of the music?

This is a perfect example of where smartphone photography exceeds traditional cameras, especially for those of us time-starved souls who hardly have the time to download our assets to a hard drive (never mind back them up). And it’s a reminder that traditional cameras that don’t play well with automated services like Google and Apple Photos will be left behind by the simpler services that surprise and delight the majority of us.

The future is bright

This is just the beginning. The computational approaches Apple, Google, Samsung and many others are taking are revolutionizing what we can expect from devices we have in our pockets, devices we always have on us.

Are they going to defy physics and replace traditional cameras tomorrow? Not necessarily, not yet, but for many purposes and people, they will offer pros that are well-worth the cons. In some cases they offer more than we’ve come to expect of traditional cameras, which will have to continue to innovate—perhaps taking advantage of the very computational techniques smartphones and other innovative computational devices are leveraging—to stay ahead of the curve.

But as techniques like HDR+ and Portrait Mode and Portrait Lighting have shown us, we can’t just look at past technologies to predict what’s to come. Computational photography will make things you’ve never imagined a reality. And that’s incredibly exciting.

Hungry for more? We’ve updated our standard studio scene to allow you to compare the Pixel 2 and iPhone X against each other and other cameras in Daylight and Low Light, as well as updated our galleries. Follow the links below:


Skydio R1 4K camera drone boasts game-changing autonomous tech

16 Feb

California-based company Skydio has announced the R1, a drone described as a “self-flying camera” that autonomously follows and records a subject. Unlike some competing models, R1 was built specifically for autonomous flight; it is able to fly itself at up to 25mph / 40kph while maneuvering around obstacles thanks in part to Skydio’s Autonomy Engine.

While many drones depend on GPS for autonomous flight, Skydio’s R1 is different—it features 13 cameras that work with the Autonomy Engine to perceive and map the world around the UAV. Skydio packed an NVIDIA Jetson AI supercomputer into R1, and the little drone is using it to power intelligent features like real-time movement planning and complex environment navigation.

Here’s a quick intro video that explains how it’s all done:

Users launch the drone in Side, Follow or Orbit mode using the companion mobile app. No manual operation is necessary; in fact, Skydio claims that users can move through complex environments, such as dense woods, without interrupting the R1’s tracking and recording abilities. Skydio goes so far as to claim R1 is “the most advanced autonomous device—of any kind—available today.”


The R1, which is small enough to fit in a backpack, is made of carbon fiber and lightweight aluminum. The drone’s primary camera can record at Full HD (30 or 60 fps) and 4K (30 fps) with a 150-degree field of view; it is isolated from vibrations, stabilized with a 3-axis gimbal, and backed by 64GB of onboard storage. The remaining 12 cameras provide omnidirectional vision for navigation.

Skydio R1 is available now in the United States and Canada through the company’s website for $2,500 USD. Buyers are currently limited to one unit each, with orders shipping 2 to 3 weeks after being placed. To learn more, head over to the Skydio website.


Sony Xperia XA2 and XA2 Ultra put high-end camera tech in mid-range phones

10 Jan

Most mobile manufacturers tend to unveil new flagship smartphones at or around MWC in February or IFA in September, but occasionally interesting mid-rangers pop up at CES as well. That’s the case with the Sony Xperia XA2 and XA2 Ultra devices, which were just launched this morning at the Las Vegas show.

Powered by Qualcomm’s Snapdragon 630 chipset and sporting 1080p Full-HD displays, the new devices fit squarely in the mid-range bracket of the market, and yet they boast a lot of camera technology from the Japanese manufacturer’s high-end Xperia XZ flagship models, making them an appealing option for mobile photographers who can do without the most powerful processor or highest screen resolution.

Both models feature a 1/2.3-inch 23MP Exmor RS sensor in the rear camera. The imaging chip is paired with an F2.0 lens, and the camera offers a 24mm-equivalent focal length, phase-detection autofocus, an LED flash and 4K video recording. There’s also a 120 fps slow-motion mode; however, XA2 users will have to make do without the XZ models’ 960 fps ultra-slow-motion feature. In typical Sony fashion, optical image stabilization has been omitted as well.

While the main cameras are identical on the standard XA2 and the Ultra model, there is a difference at the front. The XA2 features an 8MP camera with a 120° field of view, while the Ultra model features an additional stabilized 16MP camera with an 80° field of view, allowing users to easily switch between solo and group selfies. Other differences between the two models are pretty much limited to screen size (5.3-inch on the XA2 vs 6-inch on the Ultra) and battery capacity (3,200 mAh on the XA2 vs 3,500 mAh on the Ultra).

The XA2 models will be available from February, and will retail at 350 euros (~$420 USD) for the standard XA2 and 450 euros (~$535 USD) for the Ultra—no word yet on official US pricing. To find out more about these phones or check out some image samples shot with the XA2 and XA2 Ultra, visit the Sony website.
