
Posts Tagged ‘Uses’

Samsung project uses Galaxy S10+ to capture 943km panorama of Portugal’s coast

14 Jun

Samsung Portugal has detailed a unique project that used its Galaxy S10+ smartphone and a boat trip to create a panorama covering 943 kilometers (585 miles) of coastline, from Moledo to Monte Gordo. The panorama, which is available to view on Samsung’s website, was created from images of the coastline captured from the water over an eight-day trip.

The Galaxy S10+ photography project boat trip involved José Gomes as skipper, Carlos Bernardo as ‘chronicler’ and André Carvalho as photographer. Entries from the trip’s logbook detailing each of the eight days can be found—written in Portuguese—here. The final result is intended to highlight the flagship smartphone’s triple-lens camera, which Samsung refers to as ‘pro-grade.’

This is one of many massive panorama projects; past examples include a 600,000 pixel panorama of Tokyo and NASA’s 360-degree panorama of Mars, published earlier this year.

Articles: Digital Photography Review (dpreview.com)

 

 

Healthy.io uses your smartphone’s camera for medical lab testing at home

25 Apr

Thanks to the attached mobile computing power, your smartphone camera can be used for more than just taking holiday snaps. A product from Israel-based company Healthy.io is a prime example: Dip.io uses a smartphone and a dipstick to perform urine tests that can detect ten indicators of disease, infection and pregnancy-related complications.

The system is very simple from the user's point of view: you capture a photo of the dipstick against a color board and Dip.io does the rest. The app uses machine learning to color correct the image, accounting for camera make and model, lighting conditions and a variety of additional variables, and then performs an instant analysis.
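The color-correction step can be pictured as fitting a transform that maps the colors the camera actually recorded for known reference patches back to their true values, then applying that same transform to the dipstick pads. Below is a minimal sketch of the idea; the patch values and the simple 3x3 linear model are illustrative assumptions, not Healthy.io's actual pipeline:

```python
import numpy as np

# Hypothetical calibration patches: what the phone recorded under the
# current lighting vs. the known reference values (rows = patches, RGB).
observed = np.array([[0.9, 0.5, 0.4],
                     [0.2, 0.7, 0.3],
                     [0.3, 0.2, 0.8]])
reference = np.array([[1.0, 0.5, 0.5],
                      [0.2, 0.8, 0.3],
                      [0.3, 0.2, 0.9]])

# Fit a 3x3 color-correction matrix M so that observed @ M ≈ reference.
# A real system would use many more patches than unknowns (least squares).
M, *_ = np.linalg.lstsq(observed, reference, rcond=None)

# Apply the same correction to a pixel sampled from a dipstick pad.
pad_pixel = np.array([0.6, 0.4, 0.3])
corrected = pad_pixel @ M
```

Once the image is mapped into a known color space this way, reading the test is a matter of comparing each pad's corrected color against the chart of expected reactions.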

In clinical trials undertaken in the process of receiving FDA approval, Dip.io matched the accuracy of professional laboratories. It does so at considerably lower cost and with less inconvenience to the patient, since the system removes the need for physician visits and lab referrals and does away with the waiting time for results.

The makers of the system say this could encourage more patients to undertake regular screenings, which could spare them dialysis or even a transplant by detecting signs of kidney problems early.

According to an analysis by the York Health Economics Consortium, in the UK alone the new technology could result in early diagnosis of more than 33,000 cases over five years and estimated savings of more than £670 million ($867 million). Healthy.io is currently running a pilot program with the pharmacy chain Boots UK. Women who suspect they have urinary tract infections can use the system to self-test and receive treatment from a pharmacist without seeing a doctor or visiting a lab. The results of the pilot are expected to be announced in May.


 

Kandao uses AI to convert 30fps 360-degree video into super-slow-motion footage

19 Apr

Kandao, maker of professional-grade 360-degree cameras and the Kandao Raw+ image stacking tool for Raw files, has launched another potentially very useful software feature. AI Slow-motion is designed to convert 360-degree video footage recorded at a regular 30 frames per second into 300 fps super-slow-motion clips.

The software uses artificial intelligence and machine learning methods to predict and generate intermediate frames for a smooth and detailed slow-motion output from existing 360/VR footage.

The company says that compared to optical flow or interpolation methods that are used in other applications, the AI-generated footage offers more accurate frame interpolation as well as fewer jagged edges and other artifacts. The software also requires less powerful hardware than comparable systems.
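For contrast, the simplest possible interpolation is a plain cross-fade between neighboring frames. Sketching it makes clear why a motion-predicting approach matters; the frame sizes and blending scheme below are illustrative, not Kandao's algorithm:

```python
import numpy as np

def cross_fade(frame_a, frame_b, factor):
    """Insert factor - 1 blended frames between two source frames.

    Blending averages the frames instead of predicting motion, which is
    what causes the ghosting that learned interpolators avoid.
    """
    out = [frame_a]
    for i in range(1, factor):
        t = i / factor
        out.append((1 - t) * frame_a + t * frame_b)
    return out

# Two dummy grayscale frames; 30 fps -> 300 fps is a factor of 10.
a = np.zeros((4, 4))
b = np.ones((4, 4))
slowmo = cross_fade(a, b, factor=10)   # 10 frames per source pair
```

A learned interpolator replaces the blend with predicted intermediate frames, so moving edges stay sharp instead of dissolving into each other.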

The feature will first be implemented in the Kandao QooCam Studio and Kandao Studio applications, allowing for an up to 10x slow-motion effect. For example, 360-degree video originally captured at 8K 30fps can be converted into 8K 240fps slow-motion, or 4K 60fps video into 4K 480fps footage, by selecting a factor of eight during the 360 stitching workflow in the software.

The bad news is that, although the algorithm behind the feature can work with any existing video, the technology will initially only work with footage from Kandao cameras. However, the company says it will make AI slow-motion available for other cameras in the future, which is good news for 360-degree videographers who would like to work with super-slow-motion without splashing out on ultra-powerful hardware.

Kandao camera users can now download QooCam Studio with AI slow-motion free of charge from the Kandao website. Kandao Studio V3.0 with AI slow-motion will be available on 23rd April.


 

Report: Huawei P30 Pro uses Sony image sensors and technology from Corephotonics

17 Apr

With its quad-camera (triple-camera plus ToF-sensor) the Huawei P30 Pro is, from an imaging perspective, definitely the most exciting new smartphone this year.

Analysts from French company System Plus Consulting have now had a closer look at the camera hardware, which was co-designed with Leica, and discussed their findings with EE Times. According to cost analysis expert Stéphane Elisabeth, all four image sensors are supplied by market leader Sony.

The primary camera module uses an RYYB color filter (Red, Yellow, Yellow, Blue) instead of the usual RGGB, which increases light sensitivity, while the wide-angle and tele camera units still rely on a conventional RGGB filter. The green channel is usually used to make up the luminance (detail) information in an image, so yellow filters, which let in red as well as green light, would give cleaner results than an RGGB sensor, at the cost of some ability to distinguish between colors.
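A back-of-the-envelope way to see the sensitivity gain: a yellow filter passes roughly the red plus green portions of the spectrum, so the two yellow sites in an RYYB quartet collect extra light compared with the green sites they replace, and green must then be recovered computationally. The numbers below are arbitrary illustrative values, not measured sensor data:

```python
# Toy per-channel light levels for a scene (arbitrary linear units).
r, g, b = 0.4, 0.6, 0.2

# Total light collected by one RGGB quartet vs. an RYYB quartet,
# assuming yellow ≈ red passband + green passband.
rggb_signal = r + g + g + b
ryyb_signal = r + (r + g) + (r + g) + b

gain = ryyb_signal / rggb_signal   # > 1: more photons per quartet

# Green must be reconstructed, e.g. G ≈ Y - R, which is where the
# color-separation cost mentioned above comes from.
recovered_g = (r + g) - r
```

In practice the recovery involves noise from both the yellow and red measurements, which is why the trade-off is better low-light sensitivity for slightly weaker color separation.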

Unlike some other devices, the time-of-flight (ToF) sensor is not only used for augmented reality applications but also to measure subject distance for autofocusing. Signals from all three cameras are processed to create a map of the scene and let the photographer focus on a specific object.

Arguably the most innovative element of the camera, though, is the periscope-style tele lens. It is placed horizontally inside the body, and a mirror angled at 45 degrees channels light into the optics and onto the sensor. This allows for an extended optical unit – generally a requirement for telephoto lenses – and the result is the first 5x tele zoom in a smartphone. Super-resolution and computational techniques allow for 10x digital zoom using the 5x tele unit, though image quality drops. The analysts also believe the entire camera unit was assembled by Chinese company Sunny Optical Technology using IP from Corephotonics in Israel. The latter is particularly interesting, as Corephotonics has just been acquired by Huawei rival Samsung.


 

NVIDIA Research project uses AI to instantly turn drawings into photorealistic images

21 Mar

NVIDIA Research has demonstrated GauGAN, a deep learning model that converts simple doodles into photorealistic images. The tool crafts images nearly instantaneously, and can intelligently adjust elements within images, such as adding reflections to a body of water when trees or mountains are placed near it.

The new tool is made possible by generative adversarial networks (GANs). With GauGAN, users select image elements like ‘snow’ and ‘sky,’ then draw lines to segment an image into different regions. The AI automatically generates the appropriate imagery for each element, such as a cloudy sky, grass, and trees.
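The user-facing input is just a label map. As a rough illustration, the sketch below stands in for the generator with flat per-class colors, where GauGAN would instead synthesize realistic texture for each class; the labels and colors are made up for the example:

```python
import numpy as np

# A doodle as a label map: each integer marks a semantic class.
SKY, GRASS, WATER = 0, 1, 2
doodle = np.zeros((6, 8), dtype=int)   # start with all sky
doodle[3:, :] = GRASS                  # lower half is grass
doodle[4:, 5:] = WATER                 # a pond in one corner

# Placeholder "generator": flat fill per class. GauGAN instead runs the
# label map through a GAN that synthesizes photorealistic texture.
palette = {SKY: (135, 206, 235), GRASS: (60, 179, 75), WATER: (30, 90, 190)}
render = np.zeros(doodle.shape + (3,), dtype=np.uint8)
for label, color in palette.items():
    render[doodle == label] = color
```

The interesting part of GauGAN is everything this placeholder omits: the network conditions each region's texture on its neighbors, which is how water picks up reflections of whatever is drawn beside it.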

As NVIDIA reveals in its demonstration video, GauGAN maintains a realistic image by dynamically adjusting parts of the render to match new elements. For example, transforming a grassy field to a snow-covered landscape will result in an automatic sky change, ensuring the two elements are compatible and realistic.

GauGAN was trained using millions of images of real environments. In addition to generating photorealistic landscapes, the tool allows users to apply style filters, including ones that give the appearance of sunset or a particular painting style. According to NVIDIA, the technology could be used to generate images of other environments, including buildings and people.

Bryan Catanzaro, NVIDIA's VP of applied deep learning research, explained:

This technology is not just stitching together pieces of other images, or cutting and pasting textures. It’s actually synthesizing new images, very similar to how an artist would draw something.

NVIDIA envisions a tool based on GauGAN could one day be used by architects and other professionals who need to quickly fill a scene or visualize an environment. Similar technology may one day be offered as a tool in image editing applications, enabling users to add or adjust elements in photos.

The company offers online demos of other AI-based tools on its AI Playground.


 

This website uses AI to generate portraits of people who don’t actually exist

16 Feb

A new website called This Person Does Not Exist went viral this week, and it has one simple function: displaying a portrait of a random person each time the page is refreshed. The website seems pointless at first glance, but there’s a secret behind its seemingly endless stream of images. According to a Facebook post detailing the website, the images are generated using a generative adversarial network (GAN).

In December, NVIDIA published research detailing the use of style-based GANs (StyleGAN) to generate very realistic portraits of people who don’t exist. The same technology is powering This Person Does Not Exist, which was created by Uber software engineer Phillip Wang to ‘raise some public awareness for this technology.’

In his Facebook post, Wang said:

Faces are most salient to our cognition, so I’ve decided to put that specific pretrained model up. Their research group have also included pretrained models for cats, cars, and bedrooms in their repository that you can immediately use.

Each time you refresh the site, the network will generate a new facial image from scratch from a 512 dimensional vector.
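Mechanically, each refresh amounts to sampling a fresh 512-dimensional latent vector and running it through the trained generator. The sketch below uses a random linear projection as a stand-in for StyleGAN's network, purely to show the shape of the process:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Stand-in for a trained generator: StyleGAN maps a 512-D latent vector
# to an image; a fixed random projection plays that role here.
W = rng.standard_normal((512, 64 * 64))

def generate(latent):
    """Map a latent vector to a toy 64x64 'image' (placeholder network)."""
    return (latent @ W).reshape(64, 64)

# Each page refresh = one fresh sample from the latent space.
z1, z2 = rng.standard_normal(512), rng.standard_normal(512)
face1, face2 = generate(z1), generate(z2)   # two different 'portraits'
```

Because the latent space is continuous, nearby vectors produce similar faces, which is what makes the stream of portraits effectively endless.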

Generative adversarial networks were first introduced in 2014 as a way to generate images from datasets, but the resulting content was less than realistic. The technology has improved drastically in only a few years, with major breakthroughs in 2017 and again last year with NVIDIA’s introduction of StyleGAN.

This Person Does Not Exist underscores the technology’s growing ability to produce life-like images that, in many cases, are indistinguishable from portraits of real people.

As described by NVIDIA last year, StyleGAN can be used to generate more than just portraits. In the video above, the researchers demonstrate the technology being used to generate images of rooms and vehicles, and to modify ‘fine styles’ in images, such as the color of objects. Results were, in most cases, indistinguishable from images of real settings.


 

Lexar’s new USB 3.0 flash drive uses your fingerprint to keep your photos safe

23 Oct

Lexar has announced a new flash drive that features a fingerprint reader to protect its content from unauthorised access. The JumpDrive Fingerprint F35 can record up to ten fingerprints to allow it to be shared between users, and it comes with the fingerprint software already loaded.

Lexar claims recognising a user’s fingerprint takes less than a second, and the drives have read speeds of up to 150MB/s. While normal drive use is compatible with Mac, PC and Linux, the fingerprint software is Windows-only.

The JumpDrive Fingerprint F35 will be available in capacities of 32GB, 64GB, 128GB and 256GB, priced at €29.99/$32.99 (32GB), €44.99/$49.99 (64GB) and €79.99/$89.99 (128GB). The 256GB version will arrive later this year and will cost €149.99/$169.99. Each of the drives comes with a three-year limited warranty.

For more information see the Lexar website.

Press release:

Lexar Announces New JumpDrive® Fingerprint F35 with an Added Touch of Security

New USB 3.0 Flash Drive Securely Protects Files Using 256-bit AES Encryption

Key messages:

  • Up to 10 fingerprint IDs allowed
  • Ultra-fast recognition – less than 1 second
  • Easy set-up, no software driver needed
  • Securely protects files using an advanced security solution with 256-bit AES encryption

Lexar, a leading global brand of flash memory solutions, today announced the new Lexar® JumpDrive® Fingerprint F35 USB 3.0 Flash Drive.

One of the most secure USB 3.0 flash drives available, the Lexar JumpDrive F35 uses ultra-fast fingerprint authentication that allows you to protect your data against unauthorized users in under one second, so there is no discernible impact on your workflow. The F35 can save up to 10 fingerprint IDs, making sure only you and your closest collaborators have access to your files. It also boasts easy set-up with no software driver required*, so you can quickly start transferring your files at speeds up to 150MB/s**. And for added peace of mind, it also features an advanced 256-bit AES security solution to protect your valuable files.

“The F35 combines reliable and secure data storage with biometric technology to prevent unauthorized access to your files – adding an extra layer of security for your drive by using fingerprint authentication. It is ideal for business professionals and photographers who require high-privacy protection to meet their needs,” said Joel Boquiren, Director of Global Marketing.

The Lexar JumpDrive Fingerprint F35 USB 3.0 Flash Drive is compatible with PC and Mac®* systems and comes with a three-year limited warranty***. It will be available in 32GB, 64GB, 128GB and 256GB capacities with read speeds of 150MB/s**. The new Lexar JumpDrive F35 is available now at MSRP of €29.99 (32GB), €44.99 (64GB), €79.99 (128GB), and the 256GB version will be €149.99, arriving in Q4 of this year. For more information visit www.lexar.com

Lexar will be exhibiting new product demonstrations at this year’s PhotoPlus Expo held at Javits Convention Center in New York City, New York, from October 25th – 27th.

*Fingerprint registration software only compatible with Windows XP, Vista, 7, 8, 10. Software required to create/edit accounts and adjust partition size. Regular flash drive use compatible with Windows, Linux and macOS.
**Up to 150MB/s read transfer, write speeds lower. Speeds based on internal testing. Actual performance may vary.
***http://www.lexar.com/support/warranties/

About Lexar
For more than 20 years, Lexar has been a trusted leading global brand of memory solutions. Our award-winning lineup includes memory cards, USB flash drives, card readers, and solid-state drives. With so many options, it’s easy to find the right Lexar solution to fit your needs. All Lexar product designs undergo extensive testing in the Lexar Quality Labs with more than 1,100 digital devices, to ensure performance, quality, compatibility, and reliability. Lexar products are available worldwide at major retail and e-tail stores. For more information or support, visit www.lexar.com.

About Longsys
Longsys, a leader in consumer NAND flash applications, is committed to supporting Lexar in its quest to reach new achievements in high-performance, quality, and reliability while maintaining its position as a leading global brand in memory cards, USB flash drives, readers, and storage drives for retail and OEM customers.


 

Why celebrity photographer Manfred Baumann uses ACDSee Photo Studio Ultimate 2019

22 Oct

Celebrity photographer Manfred Baumann has been using a pre-release version of ACDSee Photo Studio Ultimate 2019 for a while, and in this article he shares his impressions of using the software.


As a photographer, there are plenty of software programs out there that all want my attention (and my money). ACDSee is a name that will be familiar to many digital photographers, going right back to the 1990s. Designed originally as an image organization tool for digital photographs, ACDSee has evolved over more than 20 years to become a fully featured digital asset manager and editing platform. These days it’s basically a ‘one-stop shop’ for digital photographers.


Like most photographers, I prefer taking pictures to sitting in front of a computer. For that reason, the software I use has to be fast, uncomplicated and self-explanatory. A Raw Converter is like a digital darkroom for me – everything else is optional. I’ve been using ACDSee for years. The latest version, Photo Studio Ultimate 2019, competes directly with the world’s best Raw editors, offering in-depth editing features alongside advanced image cataloging and organizational tools.

One of my favorite things about Photo Studio Ultimate’s editing power is the option to use layers when working on my Raw files. New in the 2019 version is face detection and automatic face recognition, which allows you to find photos of clients, friends or relatives at the click of a button. I don’t think many people would have difficulty recognizing some of my portrait subjects, but face detection and recognition are useful features when I’m organizing images for my clients.


Photo Studio Ultimate also brings improvements to black and white editing, which let me individually adjust the contrast and brightness of different channels. I can even use the Edit Brush to paint these adjustments onto specific parts of an image. Monochrome editing is at the heart of a lot of my workflow, and the improved black and white mode features in Ultimate 2019 are really useful.

It is important to continue growing, and as an artist, you always want to make sure that viewers can recognize your signature in your photographs. I like to think that I catch what others might not have seen. My primary focus is using images to say something about the essence of the person I’m photographing, and it’s all about the imagery: quality before quantity. Quality can be recognized by the fact that a good image doesn’t go out of date.

I would say ACDSee is ideal for photographers who prefer to take photos outdoors or in the studio rather than constantly sitting in front of the computer. It is cost effective, fast, and offers more features than most of its competitors. With Photo Studio Ultimate I really don’t need to use additional software in my workflow; I can usually do everything I need to do without leaving the app.

I now do my image organization and cataloguing exclusively in ACDSee Photo Studio Ultimate. It’s the quickest and easiest way for me to work.

Learn more about ACDSee Photo Studio Ultimate 2019


Manfred Baumann lives and works in Europe and the USA with his wife, Nelly. Throughout a long and varied career he has photographed celebrities from the worlds of acting, sports, and fashion for some of the top publications in the world.

A passionate advocate for animal rights, images from Baumann’s ‘Mustangs’ project have been exhibited in the Natural History Museum, Vienna.

See more of Manfred’s work


This is sponsored content, created by ACD Systems. What does this mean?


 

LumaPod is an ultra-compact tripod that uses tension to keep your shots steady

13 Sep

Tripods are one of the few pieces of camera equipment that haven’t seen a lot of innovation over the years. LumaPod is hoping to change that though, with its all new tripod that’s currently available to back on Kickstarter.

Deemed the ‘world’s fastest tripod,’ the LumaPod is a compact tripod that uses patented tension technology to stabilize your shots without weighing a ton. It comes in two models — the Go85 and Go120 — for varying camera sizes and can also be used as a monopod and selfie stick.

Unlike traditional tripods, which rely on three legs attached to a centralized column and mounting point, the LumaPod is essentially two tripods in one, folding down into a single tube that looks something like the handle of a lightsaber. The base of the LumaPod is similar to a standard tripod in that it uses three rigid aluminum legs to keep it upright and steady. These low-profile legs serve as the attachment point for a telescoping column and three Kevlar cables that hold the central column in place using tension.

The Go85 LumaPod weighs just 400g/0.88lbs, measures in at 85cm/33.5in and can hold 1kg/2.2lbs of camera equipment. The larger Go120 weighs 690g/1.65lbs, measures in at 120cm/47.3in when closed, and can hold 2kg/4.4lbs of camera equipment.

The Go85 includes a collection of accessories designed to make the most of shooting with smartphones and GoPro cameras, while the Go120 includes a compact Z-plate for more versatile mounting of mirrorless and DSLR cameras. Other accessories are available as add-ons through the Kickstarter. These include a Bluetooth remote, quick release plate, travel sling, compact ball head, and more.

The lower legs of both LumaPods are modular: they can be hot-swapped for rubber feet, terrain levelers, dolly wheels and other accessories to fit your shooting needs. LumaPod claims it takes just four seconds to set up the tripod.

It remains to be seen just how stable this setup is, but it’s an interesting design that may very well work for smaller camera setups.

The Go85 starts at a pledge of €69 (approximately $80), while the Go120 starts at €85 (approximately $99). Both models are expected to ship in May 2019. To find out more and secure your pledge, head on over to the Kickstarter campaign.


 

Apollo app for iOS uses dual-cam depth map to create impressive lighting tricks

25 May
Apple’s dual-camera setup can create a depth map to simulate background blur – but now, someone’s figured out how to simulate lighting effects with an impressive level of control.

Apple’s dual camera devices (the 7 Plus, 8 Plus and X, to be precise) generate a depth map to create the effects of Portrait Mode and Portrait Lighting that we’ve all come to know well. Whether you love, hate or feel generally ‘meh’ toward fake background blur, things get interesting when Apple makes that depth map information available to third-party app developers. Enter Apollo: Immersive illumination, a $1.99 iOS app with an unusual name and a few interesting tricks up its sleeve.

Apollo uses the depth map not for background-blurring purposes, but to allow users to add realistic lighting effects to photos after they’re taken. Up to 20 light sources can be positioned throughout an image, with the ability to adjust intensity, color and distance. With the depth information provided, light sources interact with subjects in a three-dimensional fashion, and can even be positioned behind a subject to create a rim light.
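The core trick is that the depth map gives every pixel an approximate 3D position, so a virtual light's brightness can fall off with true 3D distance rather than just 2D screen distance, which is how a light placed at the subject's depth can leave the background dark. Here is a heavily simplified sketch; the inverse-square-style falloff and the toy scene are assumptions, not Apollo's actual shading model:

```python
import numpy as np

def relight(depth, light_x, light_y, light_z, intensity=1.0):
    """Toy depth-aware point light with inverse-square-style falloff."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Pixel coordinates plus depth give a rough 3D position per pixel.
    d2 = (xs - light_x) ** 2 + (ys - light_y) ** 2 + (depth - light_z) ** 2
    return intensity / (1.0 + d2)

# A subject (depth 1.0) in front of a distant background (depth 5.0).
depth = np.full((8, 8), 5.0)
depth[2:6, 2:6] = 1.0

# A light placed at the subject's depth lights it far more than the
# background, even for pixels that are close together on screen.
glow = relight(depth, light_x=4, light_y=4, light_z=1.0)
```

A rim light works the same way: placing the light's Z behind the subject means only pixels near the subject's silhouette end up close to it in 3D.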

It’s essentially an interactive version of Apple’s Portrait Lighting, which applies different light style effects to images. Apollo’s effects are highly customizable, and with so many parameters to play with it’s naturally quite a bit more complicated to use than Apple’s very simple lighting modes.

In use

We’ve been messing around with the Apollo app (for an admittedly short period of time), and have to say we’re impressed with what it’s capable of – but that doesn’t mean we don’t have a few requests for the next version.


Click through to see the images full-screen and see how many lights were used in the Apollo app.

It’s hard not to be a little taken aback the first time you drag a light source around your image and see how it interacts with your subject(s). You can adjust the color, brightness and spread of your source, all of which are fairly self-descriptive.

You can also change the ‘Distance’ of your light – its position in Z-space – meaning you can move the light closer to you, the photographer, or further away into the background of your scene.

Lastly, there are two global adjustments, ‘Shadows’ and ‘Effect Range.’ Shadows essentially controls overall image brightness, though it biases toward the darker tones. Effect Range adjusts the brightness of all of your lights simultaneously in the image, though keeping the brightness ratios between them constant as it does so.
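The 'Effect Range' behavior described above, scaling every light together while keeping their ratios fixed, is just a uniform multiplier. A quick sketch with hypothetical light values:

```python
# Hypothetical per-light brightness values.
lights = {"key": 1.0, "rim": 0.5, "fill": 0.25}

def apply_effect_range(lights, scale):
    """Scale all lights by the same factor, preserving their ratios."""
    return {name: value * scale for name, value in lights.items()}

dimmed = apply_effect_range(lights, 0.6)
# The key:rim ratio is unchanged (still 2:1) after scaling.
```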

Along the bottom are the parameters available for each light source you create (up to 20).

Overall, it’s an incredibly neat – and kind of addictive – first effort. But there are a few things that we’d like to see addressed in future versions.

Currently, every new ‘light’ you create starts out with a certain set of default parameters. This is alright, except for the fact that the default color is a yellowy tungsten sort of thing; it should really just begin as ‘white.’

Also, if I’ve already dialed in a ‘light’ and just want another one like it, it’d be nice to be able to duplicate an existing light instead of starting from scratch each time.

And once you’ve finished with your new creation, you can save it out as a JPEG – but there’s no way to save the lights themselves so that you can come back and tweak later. Each time you exit to tackle another image, the app asks you, ‘Close photo and discard all changes?’ Well, I’d rather not discard them, but if I have to, then I suppose that’s that.

Lastly, it doesn’t look like there’s any way to preserve the blurriness of the background once you’ve added your lights. It’d be great to be able to still take advantage of the depth map and progressive blurring while adding in your own lighting sources.

Wrapping up

Okay, so those are some fairly major requests on our part. But we make them because we’re really blown away by what the app already offers, and are excited to see how it evolves. It wasn’t so long ago you’d need a powerful workstation and some serious software skills to manipulate lighting in the same way that this app does with a few taps and drags.

If you have a dual camera iPhone and want to give the Apollo app a try, head on over to the App Store yourself and take it for a spin.
