
Posts Tagged ‘Researchers’

NVIDIA researchers develop AI that removes noise from images with incredible accuracy

10 Jul

A team of NVIDIA researchers, in partnership with researchers from Aalto University and Massachusetts Institute of Technology (MIT), has shared details of a new artificial intelligence (AI) program that can remove grain from images with such accuracy that it’s almost scary.

‘Using NVIDIA Tesla P100 GPUs with the cuDNN-accelerated TensorFlow deep learning framework, the team trained [its] system on 50,000 images in the ImageNet validation set,’ says NVIDIA in its announcement blog post.

What’s incredible about this particular AI is its ability to know what a clean image looks like without ever actually seeing a noise-free example. Rather than training the deep learning network on pairs of noisy and clean images so it can learn the difference, NVIDIA’s system is trained on pairs of noisy images of the same scene, each corrupted with a different noise pattern.
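To make that idea concrete, here is a minimal sketch of a noisy-to-noisy training step in PyTorch. This is not NVIDIA's code: the tiny network, the synthetic Gaussian noise and the training loop are illustrative assumptions. The point is simply that the loss compares the network's output against a second noisy version of the same scene rather than a clean reference.

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """A deliberately small stand-in for a real denoising network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def training_step(noisy_input, noisy_target):
    # Both tensors are independently corrupted versions of the same scene;
    # the clean image itself is never shown to the network.
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(noisy_input), noisy_target)
    loss.backward()
    opt.step()
    return loss.item()

# Toy example: two independent Gaussian-noise realizations of the same crops.
clean = torch.rand(4, 3, 64, 64)  # stands in for real image crops; never used as a target
loss = training_step(clean + 0.1 * torch.randn_like(clean),
                     clean + 0.1 * torch.randn_like(clean))
```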

‘It is possible to learn to restore signals without ever observing clean ones, at performance sometimes exceeding training using clean exemplars,’ say the researchers in a paper published on the findings. The paper goes so far as to say ‘[The neural network] is on par with state-of-the-art methods that make use of clean examples — using precisely the same training methodology, and often without appreciable drawbacks in training time or performance.’

In addition to being used on photographs, the researchers note the AI could also benefit scientific and medical fields. In particular, they detail how magnetic resonance imaging (MRI) scans, which are very susceptible to noise, could be dramatically improved by the program, leading to better diagnoses.

The team behind the AI will present their work at the International Conference on Machine Learning on July 12, 2018.

Articles: Digital Photography Review (dpreview.com)

 

Researchers develop method for revealing images on degraded daguerreotypes

28 Jun

Researchers at Western University in Canada have developed a method for recovering images from damaged daguerreotypes, including plates so degraded that no portion of the original image remains discernible to the eye. The method, the university explains, uses rapid-scanning micro-X-ray fluorescence imaging to analyze the silver-coated plate and map the mercury deposits that form the image.

“Mercury is the major element that contributes to the imagery captured in these photographs. Even though the surface is tarnished, those image particles remain intact,” explained study co-author Tsun-Kong (TK) Sham. “By looking at the mercury, we can retrieve the image in great detail.”

The X-ray beam used in this method measures as little as 10 x 10 microns (for comparison, a human hair is around 75 microns thick), which is why scanning a single daguerreotype plate takes about eight hours. The technique gives art conservators a way to reveal a daguerreotype’s image when cleaning the degraded plate is not possible.

Via: TechCrunch

Articles: Digital Photography Review (dpreview.com)

 

NVIDIA researchers can now turn 30fps video into 240fps slo-mo footage using AI

20 Jun

NVIDIA researchers have developed a new method for generating 240fps slow-motion video from 30fps footage, using artificial intelligence to interpolate the missing frames.

Detailed in a paper published on arXiv, the system was trained by processing more than 11,000 videos, all shot at 240fps, on NVIDIA Tesla V100 GPUs with the cuDNN-accelerated PyTorch deep learning framework. That archive of high-frame-rate footage taught the system to predict the in-between frames that are missing from videos shot at only 30fps.

This isn’t the first time something like this has been done; a post-production plug-in called Twixtor has offered frame interpolation for almost a decade. But it doesn’t come anywhere close to NVIDIA’s results in terms of quality and accuracy. Even in scenes with a great amount of detail, artifacts in the interpolated frames appear minimal.

The researchers also note that while some smartphones can shoot 240fps video, capturing everything at that frame rate isn’t necessarily worth the processing power and storage when a system like theirs can get you most of the way there. ‘While it is possible to take 240-frame-per-second videos with a cell phone, recording everything at high frame rates is impractical, as it requires large memories and is power-intensive for mobile devices,’ the researchers wrote in the paper.
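For a sense of what frame interpolation involves, here is a crude sketch using classical optical flow with OpenCV. This is not NVIDIA's network, which learns the flow and occlusion handling end to end; it simply estimates flow between two frames, warps each toward an in-between time and blends the results, roughly the approach plug-ins like Twixtor take.

```python
import cv2
import numpy as np

def interpolate_frame(frame0, frame1, t=0.5):
    """Synthesize a frame at time t in (0, 1) between two BGR frames."""
    g0 = cv2.cvtColor(frame0, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
    # Dense optical flow from frame0 to frame1 (per-pixel displacement in x and y).
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None, 0.5, 3, 15, 3, 5, 1.2, 0)

    h, w = g0.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Backward-warp each source frame toward time t (crude linear approximation
    # of the motion; real methods also handle occlusions).
    map0_x = (grid_x - t * flow[..., 0]).astype(np.float32)
    map0_y = (grid_y - t * flow[..., 1]).astype(np.float32)
    map1_x = (grid_x + (1 - t) * flow[..., 0]).astype(np.float32)
    map1_y = (grid_y + (1 - t) * flow[..., 1]).astype(np.float32)
    warped0 = cv2.remap(frame0, map0_x, map0_y, cv2.INTER_LINEAR)
    warped1 = cv2.remap(frame1, map1_x, map1_y, cv2.INTER_LINEAR)
    # Blend, weighting whichever frame is closer in time more heavily.
    return cv2.addWeighted(warped0, 1 - t, warped1, t, 0)
```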

The research and findings detailed in the paper will be presented at the annual Computer Vision and Pattern Recognition (CVPR) conference in Salt Lake City, Utah this week.

Articles: Digital Photography Review (dpreview.com)

 

Researchers use AI to brighten ultra-low light images without adding noise

15 May

Researchers with the University of Illinois Urbana–Champaign and Intel have developed a deep neural network that brightens ultra-low light images without adding noise and other artifacts. The network was trained on 5,094 pairs of raw images, each pairing a short-exposure low-light shot with a long-exposure reference of the same scene. The end result is a system that automatically brightens images at a much higher quality than traditional processing options.

The deep learning system was detailed in a newly published study that points out the limitations of alternative “denoising, deblurring, and enhancement techniques” on what the team calls “extreme conditions,” such as low-light images that are too dark to discern without processing.

Using traditional methods to process these images often results in high levels of noise that isn’t present when using the machine learning technique.
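To see why, consider what naive brightening does to a short exposure: scaling the pixel values multiplies the noise right along with the signal. The toy numbers below are purely illustrative; the study's network instead learns a direct mapping from the dark raw data to a clean, well-exposed result.

```python
import numpy as np

def naive_brighten(short_exposure, exposure_ratio):
    # Scale a linear raw frame by the exposure ratio (e.g. 300x to simulate
    # going from a 1/30s exposure to a 10s exposure), then clip to valid range.
    return np.clip(short_exposure * exposure_ratio, 0.0, 1.0)

# Simulated dim scene with sensor noise (values normalized to 0..1).
rng = np.random.default_rng(0)
scene = rng.random((256, 256)) * 0.003                 # very weak signal
noisy_short = scene + rng.normal(0.0, 0.001, scene.shape)
brightened = naive_brighten(noisy_short, 300)          # the noise is amplified 300x too
```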

The team used images captured with a Fujifilm X-T2 and Sony a7S II, and also demonstrated the system on photos taken with an iPhone X and Google Pixel 2 smartphone. High-resolution comparison images are available here, and a PDF of the full study can be found here.

This is the latest example of machine learning ‘AI’ being used to automatically enhance images—ideally speeding up post-processing tasks while reducing the user’s workload. Last year, for example, a system called Deep Image Prior was demonstrated using an image’s existing elements to intelligently repair damage, and Adobe and NVIDIA are both working on AI-powered Content Aware Fill.

Articles: Digital Photography Review (dpreview.com)

 

Researchers develop low-power HD streaming tech for wearable cameras

26 Apr
Dennis Wise/University of Washington

Wearable cameras, such as the type found in Snap Spectacles, are often limited to low-resolution video streaming due to their tiny batteries and small size. But now, researchers with the University of Washington in Seattle have developed a solution to that problem, one that involves offloading the processing burden to a nearby smartphone in order to stream high-definition content from the wearable camera.

The new low-power HD video streaming method utilizes backscatter technology and works by transmitting pixel intensity values via an antenna directly to the user’s smartphone. Unlike the wearable camera, which by its nature is small and lightweight with limited hardware resources, a smartphone offers way more processing power and a much larger battery.

When used as part of this new system, the phone receives the pixel information from the wearable camera, then processes it into a high-definition video for streaming. The prototype system was tested using a 720p HD YouTube video, which was successfully fed into the backscatter system and streamed at 10fps to a smartphone located 14ft / 4.2m away.

The wearable camera features only a small battery and uses between 1,000 and 10,000 times less power than existing streaming methods; however, the researchers plan to go a step further and develop a battery-free camera system with potential applications outside of smart glasses and body cameras.

Security systems, for example, could benefit from this technology, which would eliminate the need to either plug the cameras into a power source or frequently recharge internal batteries. Instead, the video data would be transmitted via antennas from the cameras to a central processing unit connected to a large battery or wired power source.

As study co-author Joshua Smith explained:

Just imagine you go to a football game five years from now. There could be tiny HD cameras everywhere recording the action: stuck on players’ helmets, everywhere across the stadium. And you don’t have to ever worry about changing their batteries.

If the idea of “tiny cameras everywhere” also sounds mildly disturbing and like a privacy nightmare to you, you’re not alone… but we digress.

The full paper detailing this technology is available here.

Articles: Digital Photography Review (dpreview.com)

 

Researchers let AI loose on 100 million Instagram photos to study style

16 Jun

Cornell University researchers have found the mother lode of data for studying clothing trends around the world: Instagram. They’ve applied machine learning to a set of 100 million photos uploaded to the image-sharing app, and while the results aren’t earth-shattering (red hats are big at Christmas!), they’ve paved the way for the anthropologists of the future.

The photos come from 44 cities across the world. A machine learning algorithm was trained to identify faces and articles of clothing, and after weeding out photos without a face or a visible torso, it went to work on 15 million images. While the findings were a bit basic this time around, the work represents a step toward mining a massive data set that may help anthropologists conduct broad studies of culture and fashion in a way that’s never been done before.

Read more about this research at MIT Technology Review.

Articles: Digital Photography Review (dpreview.com)

 

Researchers create method for photorealistic Prisma-style effects

28 Mar

Popular app Prisma applies painting styles to photographs using neural networks, turning a snapshot into an artwork in the style of ‘The Scream,’ for example. But what if you could transfer photorealistic effects from one photo to another? Researchers at Cornell and Adobe have successfully demonstrated a method that will translate a variety of styles from a reference photo to another image, including things like lighting, time of day and weather.

Input image on the left, reference style image in the center, output image on the right. It’s not incredibly realistic-looking, but more realistic than your average Prisma treatment.

Images via Fujun Luan

This could open up a whole new world of possibilities for ‘lazy’ photo editing. Say you snapped a photo of a rock formation in the middle of the day, but you’d rather it had the orange glow of golden hour. With this method, you could apply the textures and colors of a reference style image, i.e. some other rock formation at sunset, to your own image.

This photo-style-transfer method builds on the neural-style approach Prisma uses, but constrains how the transformation is allowed to alter the source image’s colors. Taking a content-aware approach and classifying features like sky and water in each image helps avoid mismatched textures and distortions.
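As a point of reference, a much simpler (and far less faithful) way to borrow a reference photo's color character is classic global color-statistics transfer in Lab space, sketched below. The Cornell/Adobe method goes well beyond this, constraining a neural style transfer and using semantic labels, but the goal of mapping one photo's look onto another is the same. The file names in the usage line are placeholders.

```python
import cv2
import numpy as np

def color_transfer(source_bgr, reference_bgr):
    """Shift the source image's color statistics toward those of the reference."""
    src = cv2.cvtColor(source_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    # Match the mean and standard deviation of each Lab channel to the reference.
    src_mean, src_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1)) + 1e-6
    ref_mean, ref_std = ref.mean(axis=(0, 1)), ref.std(axis=(0, 1))
    out = (src - src_mean) / src_std * ref_std + ref_mean
    out = np.clip(out, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_LAB2BGR)

# Usage: apply golden-hour color from one rock formation to a midday shot.
# result = color_transfer(cv2.imread("midday.jpg"), cv2.imread("sunset.jpg"))
```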

Advanced photographers would likely be wary of making such drastic edits to their photos. However, the technology might appeal to someone who wants to apply the effects of professional lighting to a badly lit photo of an interior, for example.

What do you think? Could this technology be useful to you? Let us know in the comments.

Articles: Digital Photography Review (dpreview.com)

 

MIT researchers use ordinary cameras to create extraordinary interactive videos

03 Aug

Augmented reality has been in the news plenty lately, but researchers from MIT have put an interesting twist on the popular technology. Using new algorithms and as little as a few seconds of footage from an ordinary camera, they’ve created Interactive Dynamic Video, or IDV, in which objects respond in a surprisingly realistic way as they’re poked, prodded and manipulated.

IDV records the tiny vibrations of an object in motion during a short video clip, and then uses that information to allow users to interact with the object virtually. The potential applications include things like monitoring the structural integrity of bridges or buildings. The technology could also provide filmmakers with a cheaper, less time-intensive alternative to 3D modeling. Case in point: this little green monster running around a playground.

See the video below to learn more about the research and its applications.

Articles: Digital Photography Review (dpreview.com)

 

New infrared image of Orion Nebula surprises ESO researchers

12 Jul

ESO/H. Drass et al. Music: Johan B. Monell (www.johanmonell.com)

A new image from the European Southern Observatory in Chile is making researchers reconsider what they thought they knew about the Orion Nebula. The image comes courtesy of the Very Large Telescope’s HAWK-I infrared imager, and provides the deepest view of the nebula ever recorded. According to ESO, the imagery ‘reveals many more very faint planetary-mass objects than expected.’

Multiple infrared exposures were layered to create this new look into the nebula, and you can see how the infrared images compare to views in visible light. ESO has made the videos available for download in resolutions up to 4K.

Articles: Digital Photography Review (dpreview.com)

 

Columbia University researchers create self-powered video camera

17 Apr

Columbia University researchers have created a self-powered video camera featuring a sensor that both captures images and powers the device. The camera can only record low-resolution 30×40 pixel images at 1fps, but the photodiodes on its sensor can switch between photoconductive and photovoltaic modes. In the latter mode – given enough light – the photodiodes supply enough power to a built-in supercapacitor to keep the camera operating indefinitely.

Articles: Digital Photography Review (dpreview.com)

 