
Posts Tagged ‘Nvidia’

NVIDIA Research develops a neural network to replace traditional video compression

06 Oct

NVIDIA researchers have demonstrated a new type of video compression technology that replaces the traditional video codec with a neural network to drastically reduce video bandwidth. The technology is presented as a potential solution for streaming video in situations where Internet availability is limited, such as using a webcam to chat with clients while on a slow Internet connection.

The new technology is made possible using NVIDIA Maxine, a cloud-AI video streaming platform for developers. According to the researchers, AI-based video compression can reduce bandwidth usage to one-tenth of what the common H.264 video codec would otherwise require. For users, this could result in what NVIDIA calls a ‘smoother’ experience that uses less mobile data.

In a video explaining the technology, researchers demonstrate their AI-based video compression alongside H.264 compression with both videos limited to the same low bandwidth. With the traditional video compression, the resulting low-bandwidth video is very pixelated and blocky, but the AI-compressed video is smooth and relatively clear.

This is made possible by extracting key points on the subject’s face, such as the positions of the eyes and mouth, and sending only that data to the recipient. The AI then reconstructs the subject’s face and animates it in real time using the keypoint data; the end result is very low bandwidth usage relative to the image quality delivered on the receiver’s end.
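As a rough back-of-the-envelope sketch of why sending keypoints can use so much less bandwidth than sending compressed pixels, consider the per-frame payload sizes below. Every number here is an illustrative assumption (keypoint count, coordinate precision, bitrate), not NVIDIA’s published figures:

```python
# Back-of-the-envelope comparison of per-frame payload sizes for a
# keypoint-based codec vs. a conventional pixel codec. All numbers are
# illustrative assumptions, not NVIDIA's actual figures.

def codec_bits_per_frame(bitrate_bps, fps):
    """Bits per frame for a conventional codec at a given bitrate."""
    return bitrate_bps / fps

def keypoint_payload_bits(num_keypoints, bits_per_coord=16):
    """Bits per frame if we send only (x, y) keypoint coordinates."""
    return num_keypoints * 2 * bits_per_coord

# Assumed scenario: a 30 fps video call, H.264 at ~1.5 Mbps,
# vs. ~130 facial keypoints per frame at 16 bits per coordinate.
h264_bits = codec_bits_per_frame(bitrate_bps=1_500_000, fps=30)
kp_bits = keypoint_payload_bits(num_keypoints=130)

print(f"H.264-style payload: {h264_bits:,.0f} bits/frame")
print(f"Keypoint payload:    {kp_bits:,.0f} bits/frame")
print(f"Reduction factor:    ~{h264_bits / kp_bits:.0f}x")
```

Even with generous assumptions for the keypoint side (the receiver also needs an initial reference image and the trained model), the per-frame payload shrinks by an order of magnitude, which is consistent with the bandwidth savings NVIDIA describes.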

AI-based compression also offers capabilities that exceed those of traditional video technologies. One example is Free View, a feature in which the AI platform can rotate the subject’s face so that they appear to be facing the recipient even when, in reality, their camera is positioned off to the side and they appear to be staring into the distance.

Likewise, the keypoints extracted from the subject’s face could also be used to apply their movements to other characters, including fully animated ones, expanding beyond the AI-powered filters that have become popular in video apps like Snapchat. Similar technology is already on the market in the form of Apple’s AI-based Animoji.

The use of artificial intelligence to modify videos isn’t new; most major video conferencing apps now include the option of replacing one’s real-life background with a different one, including intelligent AI-based background blurring. However, NVIDIA’s real-time AI-based video compression takes things to a new level by using AI to not only generate the subject in real time, but also modify them in convenient ways, such as aligning their face with a virtual front-facing camera.

The technology could usher in an era of clearer, more consistent video conferencing experiences, particularly for those on slow Internet connections, while using less data than current options. However, the demonstration has also raised concerns that largely mirror ones related to deepfake technologies — namely, the potential for exploiting such technologies to produce inauthentic content.

Artificial intelligence technology is advancing at a rapid pace and, in many cases, can be used to imperceptibly alter videos and images. Work is already underway to exceed those capabilities, however, by fully generating photo-realistic content using AI rather than modifying existing real-world content.

The Allen Institute for AI recently demonstrated the latest evolution in this effort by using both images and text to create a machine learning algorithm that possesses a very basic sense of abstract reasoning, for example. NVIDIA Research has also contributed extensively to this rapidly evolving technology, with past demonstrations including generating landscapes from sketches, generating photo-realistic portraits and even swapping facial expressions between animals.

A number of companies are working to develop counter technologies capable of detecting manipulated content by looking for markers otherwise invisible to the human eye. In 2019, Adobe Research teamed up with UC Berkeley to develop and demonstrate an AI capable of not only identifying portrait manipulations, but also automatically reversing the changes to display the original, unmodified content.

The general public doesn’t yet have access to these types of technologies, however, generally leaving them vulnerable to the manipulated media that permeates social media.

Via: NVIDIA

Articles: Digital Photography Review (dpreview.com)

 

Posted in Uncategorized

 

HP Envy 32 all-in-one PC with built-in wireless charging is an NVIDIA RTX Studio system

09 Jan

During its time at CES 2020, HP unveiled its new Envy 32 all-in-one PC with built-in wireless charging and a 31.5-inch 4K HDR600 display. This AiO model is offered with up to a 9th-gen Intel Core i7 processor and an NVIDIA GeForce RTX 2080 graphics card, as well as up to 1TB of storage and 32GB of RAM. The Envy 32 has the widest display available in an all-in-one system.

According to HP, its Envy 32 is the first all-in-one PC to join the NVIDIA RTX Studio program, which means that it is capable of running more than 40 design and creativity apps that feature RTX-accelerated ray tracing and AI-based features. This is particularly useful for filmmakers who engage in real-time high-resolution video editing and photographers who deal with large quantities of high-resolution images.

In addition to its considerable graphics capabilities, the HP Envy 32 is the first all-in-one PC to feature Advanced Audio Stream, and HP says it delivers the loudest volume of any AiO. A pair of integrated front-firing tweeters and subwoofers with Bang & Olufsen tuning can even play audio when the PC is turned off, eliminating the need for an external speaker.

Other features include an aluminum stand with built-in wireless charging, edge-to-edge display glass, an included multi-device keyboard, heathered acoustic cloth, Nightfall Black matte finish and Dark Ash woodgrain accents. The HP Envy 32 AiO is available from HP, Amazon, Best Buy and other retailers with a starting price of $1,599.99.


Posted in Uncategorized

 

Nvidia Studio will boost the performance of your creative apps

28 May

Nvidia has launched a new software and hardware initiative at the Computex Taipei trade show. Nvidia Studio is targeted at video editors, photographers and other content creators and consists of a collection of APIs, SDKs and drivers for Nvidia RTX GPUs that have all been designed to increase performance in use with creative software from providers like Adobe, Epic, Autodesk, Unity and Blackmagic Design.

App developers can also make use of AI-powered software that automates certain tasks, such as image upscaling and video color matching.

Acer, Asus, Dell, Gigabyte, HP, MSI and Razer will announce a combined 17 RTX Studio-branded laptops at the trade show this week. Graphics options in the new models include Nvidia’s RTX 2080, 2070 and 2060 GPUs, as well as the Quadro 5000, 4000 and 3000 workstation models.

Nvidia says that in testing with apps like Maya and RedCine-X Pro, an RTX Studio laptop with Intel Core i7 CPU and RTX 2080 Max-Q GPU was seven times faster than a top-end MacBook Pro with a Core i9 and AMD Radeon Pro Vega 20 GPU.

The first RTX Studio laptops will be available in June, with pricing starting at $1,599.


Posted in Uncategorized

 

NVIDIA Research project uses AI to instantly turn drawings into photorealistic images

21 Mar

NVIDIA Research has demonstrated GauGAN, a deep learning model that converts simple doodles into photorealistic images. The tool crafts images nearly instantaneously, and can intelligently adjust elements within images, such as adding reflections to a body of water when trees or mountains are placed near it.

The new tool is made possible by generative adversarial networks, or GANs. With GauGAN, users select image elements like ‘snow’ and ‘sky,’ then draw lines to segment the canvas into regions. The AI automatically generates the appropriate imagery for each region, such as a cloudy sky, grass or trees.
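The input GauGAN works from is essentially a semantic label map: a grid where every pixel carries a class ID. The toy sketch below builds such a map from a ‘doodle’ and converts it to the one-hot encoding a GAN would consume; the class names and layout are made up for illustration, and actually producing an image requires the trained model:

```python
import numpy as np

# Toy illustration of GauGAN's input format: a semantic label map where
# each pixel holds a class ID. The real model feeds a one-hot encoding of
# this map into a trained GAN; here we only build the map and its one-hot
# form. Class names/IDs are illustrative.

CLASSES = {0: "sky", 1: "grass", 2: "water"}

def make_label_map(height, width):
    """Doodle-like map: sky on top, grass below, a pond in one corner."""
    label = np.zeros((height, width), dtype=np.int64)
    label[height // 2:, :] = 1                  # lower half: grass
    label[3 * height // 4:, :width // 4] = 2    # bottom-left corner: water
    return label

def one_hot(label_map, num_classes):
    """(H, W) integer map -> (num_classes, H, W) one-hot tensor."""
    h, w = label_map.shape
    out = np.zeros((num_classes, h, w), dtype=np.float32)
    out[label_map, np.arange(h)[:, None], np.arange(w)[None, :]] = 1.0
    return out

label = make_label_map(8, 8)
encoded = one_hot(label, num_classes=len(CLASSES))
print(encoded.shape)               # (3, 8, 8)
print(encoded.sum(axis=0).min())   # 1.0 -- every pixel has exactly one class
```

Redrawing a region (say, relabeling grass as snow) just changes the IDs in the map; the generator then re-synthesizes the affected pixels, which is why GauGAN can update the scene nearly instantaneously.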

As NVIDIA reveals in its demonstration video, GauGAN maintains a realistic image by dynamically adjusting parts of the render to match new elements. For example, transforming a grassy field to a snow-covered landscape will result in an automatic sky change, ensuring the two elements are compatible and realistic.

GauGAN was trained using millions of images of real environments. In addition to generating photorealistic landscapes, the tool allows users to apply style filters, including ones that give the appearance of sunset or a particular painting style. According to NVIDIA, the technology could be used to generate images of other environments, including buildings and people.

Bryan Catanzaro, NVIDIA’s VP of applied deep learning research, explained:

This technology is not just stitching together pieces of other images, or cutting and pasting textures. It’s actually synthesizing new images, very similar to how an artist would draw something.

NVIDIA envisions that a tool based on GauGAN could one day be used by architects and other professionals who need to quickly fill a scene or visualize an environment. Similar technology may eventually be offered as a tool in image editing applications, enabling users to add or adjust elements in photos.

The company offers online demos of other AI-based tools on its AI Playground.


Posted in Uncategorized

 

NVIDIA researchers create AI that generates photo-realistic portraits

19 Dec

NVIDIA researchers have published a new paper detailing their latest artificial intelligence work, which involves generating photo-realistic portraits of humans that are indistinguishable from images of real people. The technology revolves around an alternative generator architecture for generative adversarial networks (GANs) that utilizes style transfer for producing the final result.

Though GANs have improved substantially in only a few years, the researchers say in their paper that the generators ‘continue to operate as black boxes, and despite recent efforts, the understanding of various aspects of the image synthesis process, e.g., the origin of stochastic features, is still lacking.’ That’s where the newly developed alternative architecture comes in.

The team’s style-based architecture enables GANs to generate new images based on photos of real subjects, but with a twist: their generator learns to distinguish between separate elements in the images on its own. In the video above, NVIDIA’s researchers demonstrate this technology by generating portraits based on separate elements from images of real people.

“Our generator thinks of an image as a collection of ‘styles,’ where each style controls the effects at a particular scale,” the team explains.

Image elements are split into three style categories: “Coarse,” “Middle,” and “Fine.” In terms of portraits, these categories include elements like facial features, hair, colors, eyes, the subject’s face shape, and more. The system is also able to target inconsequential variations, including elements like texture and hair curls/direction.
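The per-scale split lends itself to ‘style mixing’: the style controlling each scale can come from a different source image. The sketch below shows that idea with a simple per-layer lookup; the layer names and scale groupings are illustrative rather than the paper’s exact architecture:

```python
# Sketch of the style-mixing idea from a style-based generator: each
# generator scale is controlled by a style vector, and the styles can be
# split by scale between two source latents. Layer names and the
# coarse/middle/fine groupings below are illustrative assumptions.

COARSE = ["4x4", "8x8"]                   # pose, overall face shape
MIDDLE = ["16x16", "32x32"]               # facial features, hair style
FINE = ["64x64", "128x128", "256x256"]    # color scheme, fine texture

def mix_styles(style_a, style_b, take_from_b):
    """Build a per-layer style map, taking layers in `take_from_b` from B."""
    layers = COARSE + MIDDLE + FINE
    return {layer: (style_b if layer in take_from_b else style_a)
            for layer in layers}

# Keep subject A's coarse structure but borrow subject B's fine styles:
mixed = mix_styles("latent_A", "latent_B", take_from_b=set(FINE))
print(mixed["4x4"], mixed["256x256"])
```

This is why the researchers can show portraits that combine one subject’s pose and face shape with another’s coloring: swapping only the fine-scale styles changes texture and color without disturbing the coarse structure.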

The video above demonstrates changes involving inconsequential variation on non-portrait images, which includes generating different patterns on a blanket, altering the hair on a cat, and subtly changing the background behind a car. The style-transfer GANs offer superior results to traditional GAN generator architecture, the researchers conclude, with the photo-realistic results underscoring their assessment.

The latest work further refines a technology that has been growing rapidly over only a few years. Though GANs have been used in the past to generate portraits, the results were far from photo-realistic. It’s possible that technology like this could one day be offered as a consumer or enterprise product for generating on-demand life-like images.


Posted in Uncategorized

 

NVIDIA researchers develop AI that removes noise from images with incredible accuracy

10 Jul

A team of NVIDIA researchers, in partnership with researchers from Aalto University and Massachusetts Institute of Technology (MIT), has shared details of a new artificial intelligence (AI) program that can remove grain from images with such accuracy that it’s almost scary.

‘Using NVIDIA Tesla P100 GPUs with the cuDNN-accelerated TensorFlow deep learning framework, the team trained [its] system on 50,000 images in the ImageNet validation set,’ says NVIDIA in its announcement blog post.

What’s incredible about this particular AI is its ability to know what a clean image looks like without ever actually seeing a noise-free image. Rather than training the deep-learning network on pairs of noisy and clean images so it can learn to bridge the difference, NVIDIA’s AI is trained using pairs of images that contain different noise patterns.

‘It is possible to learn to restore signals without ever observing clean ones, at performance sometimes exceeding training using clean exemplars,’ say the researchers in a paper published on the findings. The paper goes so far as to say ‘[The neural network] is on par with state-of-the-art methods that make use of clean examples — using precisely the same training methodology, and often without appreciable drawbacks in training time or performance.’
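The statistical intuition behind training on noisy targets can be illustrated without a network at all: an L2 loss is minimized by the mean of its targets, so zero-mean noise in the targets averages out. The toy sketch below averages many noisy copies of a 1D ‘image’ to stand in for what an L2-trained network converges to; it is a numerical illustration of the principle, not the researchers’ actual training setup:

```python
import numpy as np

# Numerical illustration of the noisy-target training idea: with an L2
# loss, the optimal prediction for a set of targets is their mean, so if
# targets are clean signal + zero-mean noise, the noise averages out.
# This toy averages noisy copies of a 1D signal instead of training a net.

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 2 * np.pi, 100))  # stand-in for a clean image

# Many noisy observations of the same underlying signal:
noisy_targets = clean + rng.normal(0.0, 0.5, size=(10_000, 100))
l2_optimal = noisy_targets.mean(axis=0)  # the L2-loss minimizer

err_vs_clean = np.abs(l2_optimal - clean).max()
err_one_sample = np.abs(noisy_targets[0] - clean).max()
print(f"max error of averaged estimate:      {err_vs_clean:.3f}")
print(f"max error of a single noisy target:  {err_one_sample:.3f}")
```

The averaged estimate lands very close to the clean signal even though no clean example was ever used as a target, which is the property the paper exploits.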

In addition to photography, the researchers note the AI will also be beneficial in scientific and medical fields. In particular, they detail how magnetic resonance imaging (MRI) scans, which are very susceptible to noise, could be dramatically improved using the program, leading to better diagnoses.

The team behind the AI will present their work at the International Conference on Machine Learning on July 12, 2018.


Posted in Uncategorized

 

NVIDIA researchers can now turn 30fps video into 240fps slo-mo footage using AI

20 Jun

NVIDIA researchers have developed a new method that uses artificial intelligence to interpolate 240fps slow-motion video from 30fps content.

Detailed in a paper published on arXiv, the system was trained by processing more than 11,000 videos on NVIDIA Tesla V100 GPUs with the cuDNN-accelerated PyTorch deep learning framework. This archive of videos, shot at 240fps, taught the system how to predict the intermediate frames missing from footage shot at only 30fps.

This isn’t the first time something like this has been done. A post-production plug-in called Twixtor has been doing this for almost a decade now, but it doesn’t come anywhere close to NVIDIA’s results in terms of quality and accuracy. Even in scenes with a great amount of detail, there appear to be minimal artifacts in the interpolated frames.

The researchers also note that while some smartphones can shoot 240fps video, it isn’t necessarily worth the processing power and storage when a system like theirs gets you 99% of the way there. ‘While it is possible to take 240-frame-per-second videos with a cell phone, recording everything at high frame rates is impractical, as it requires large memories and is power-intensive for mobile devices,’ the researchers wrote in the paper.
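For context, the simplest possible way to synthesize in-between frames is a linear cross-fade, which is the naive baseline that learned methods dramatically improve upon. NVIDIA’s system instead predicts optical flow with a neural network and warps pixels along it; the sketch below shows only the cross-fade baseline, with toy constant frames standing in for real video:

```python
import numpy as np

# Naive baseline for frame interpolation: linear cross-fading between two
# frames. NVIDIA's method predicts optical flow with a neural network and
# warps pixels along it; this sketch shows only the simplest possible
# in-between-frame synthesis, for comparison.

def crossfade_intermediates(frame_a, frame_b, n_between):
    """Synthesize n_between frames evenly spaced between frame_a and frame_b."""
    frames = []
    for i in range(1, n_between + 1):
        t = i / (n_between + 1)  # fractional time of the new frame
        frames.append((1.0 - t) * frame_a + t * frame_b)
    return frames

# Going from 30 fps to 240 fps requires 7 new frames per original pair.
a = np.zeros((4, 4))  # toy 'frame' of pixel values
b = np.ones((4, 4))
between = crossfade_intermediates(a, b, n_between=7)
print(len(between))                          # 7
print(between[0][0, 0], between[-1][0, 0])   # 0.125 0.875
```

Cross-fading produces ghosting whenever anything actually moves, which is why flow-based interpolation like NVIDIA’s, which moves pixels rather than blending them, looks so much cleaner.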

The research and findings detailed in the paper will be presented at the annual Computer Vision and Pattern Recognition (CVPR) conference in Salt Lake City, Utah this week.


Posted in Uncategorized

 

This NVIDIA algorithm copies the artistic style of one photo onto another

26 Mar

Struggling with stylistic consistency, or wanting to transpose the style of your best picture onto the rest of your Instagram feed? Thanks to a team of researchers, you can now do just that with surprisingly accurate and realistic results.

The team created an algorithm for graphics card company NVIDIA that lifts the stylistic characteristics of one picture and drops them onto a completely different image with startling precision. The algorithm is called FastPhotoStyle, and it’s capable of transferring the coloration, drama and atmosphere of one picture and making an entirely different frame look as though it was taken at the same time even if the subject matter is totally unrelated.

According to the developers, the goal of photorealistic image style transfer is:

…to change the style of a photo to resemble that of another one. For a faithful stylization, the content in the output photo should remain the same, while the style of the output photo should resemble the one of the reference photo. Furthermore, the output photo should look like a real photo captured by a camera.

Programs that do this already exist, but the creators of this algorithm claim the existing options are slow and don’t produce realistic results anyhow.

FastPhotoStyle is different, they say, because it applies a smoothing step after the initial whitening and coloring transform (the ‘PhotoWCT’ step). The smoothing step ensures that neighboring pixels receive similar styling and, by using what the authors call a matting affinity, allows individual areas of the image to be given slightly different treatment. This is what helps the algorithm produce such realistic-looking results.
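The statistics-matching at the heart of a whitening-and-coloring transform can be sketched on toy feature matrices: whiten the content features so they are decorrelated, then ‘color’ them with the style features’ covariance. The real PhotoWCT applies this to deep network features and adds the smoothing step described above; the code below is only the core math, on random stand-in features:

```python
import numpy as np

# Core math of a whitening-and-coloring transform (WCT): decorrelate the
# content features, then impose the style features' covariance so the
# output's second-order statistics match the style. Toy (channels x pixels)
# matrices stand in for the deep features the real PhotoWCT uses.

def wct(content, style, eps=1e-5):
    """Match content features' mean and covariance to the style's."""
    c = content - content.mean(axis=1, keepdims=True)
    s = style - style.mean(axis=1, keepdims=True)

    # Whitening: remove the content covariance.
    cov_c = c @ c.T / (c.shape[1] - 1) + eps * np.eye(c.shape[0])
    vals, vecs = np.linalg.eigh(cov_c)
    whiten = vecs @ np.diag(vals ** -0.5) @ vecs.T

    # Coloring: impose the style covariance.
    cov_s = s @ s.T / (s.shape[1] - 1) + eps * np.eye(s.shape[0])
    vals_s, vecs_s = np.linalg.eigh(cov_s)
    color = vecs_s @ np.diag(vals_s ** 0.5) @ vecs_s.T

    return color @ (whiten @ c) + style.mean(axis=1, keepdims=True)

def cov(x):
    xc = x - x.mean(axis=1, keepdims=True)
    return xc @ xc.T / (x.shape[1] - 1)

rng = np.random.default_rng(1)
content = rng.normal(size=(8, 500))                          # channels x pixels
style = rng.normal(size=(8, 8)) @ rng.normal(size=(8, 500)) + 2.0

out = wct(content, style)
# The output's covariance now matches the style's covariance.
print(np.abs(cov(out) - cov(style)).max() < 1e-2)            # True
```

Because both the whitening and coloring matrices come from eigendecompositions, the transform has a closed form, which is part of why FastPhotoStyle avoids the iterative optimization that makes older methods slow.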

Another major difference is that this program reportedly operates as much as 60x faster than existing algorithms.

The code can be downloaded from NVIDIA’s GitHub for anyone to use under Creative Commons license (BY-NC-SA 4.0), and a user manual download is included on the page. If you’re brave, you can read the full technical paper as well.

Technical Paper Abstract:

A Closed-Form Solution to Photorealistic Image Stylization

Photorealistic image style transfer algorithms aim at stylizing a content photo using the style of a reference photo with the constraint that the stylized photo should remain photorealistic.

While several methods exist for this task, they tend to generate spatially inconsistent stylizations with noticeable artifacts. In addition, these methods are computationally expensive, requiring several minutes to stylize a VGA photo. In this paper, we present a novel algorithm to address the limitations.

The proposed algorithm consists of a stylization step and a smoothing step. While the stylization step transfers the style of the reference photo to the content photo, the smoothing step encourages spatially consistent stylizations. Unlike existing algorithms that require iterative optimization, both steps in our algorithm have closed-form solutions.

Experimental results show that the stylized photos generated by our algorithm are, on average, preferred by human subjects twice as often. Moreover, our method runs 60 times faster than the state-of-the-art approach.


Posted in Uncategorized

 

NVIDIA Computational Zoom lets you change perspective and focal length in post

03 Aug

Researchers with the University of California, Santa Barbara (UCSB) and NVIDIA have detailed a new type of technology called ‘computational zoom’ that can be used to adjust the focal length and perspective of an image after it has been taken. The technology was detailed in a recently published technical paper, as well as a video (above) that shows the tech in action. With it, photographers are able to tweak an image’s composition during post-processing.

According to UCSB, computational zoom technology can, at times, allow for the creation of ‘novel image compositions’ that can’t be captured using a physical camera. One example is the generation of multi-perspective images featuring elements from photos taken using a telephoto lens and a wide-angle lens.
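The pinhole-camera relationship these multi-perspective compositions play with is simple: an object’s projected size scales with focal length divided by distance, so moving closer while shortening the focal length can keep the subject the same size while shrinking the background. The sketch below is only this illustrative geometry with made-up numbers, not the researchers’ reconstruction pipeline:

```python
# The dolly-zoom relationship behind multi-perspective composition: a
# pinhole camera projects an object at roughly focal_length * size /
# distance. Moving closer while widening the lens keeps the subject the
# same size but shrinks the background. All numbers are illustrative.

def image_size(focal_mm, object_m, distance_m):
    """Approximate projected size on the sensor (arbitrary units)."""
    return focal_mm * object_m / distance_m

subject_h, background_h = 0.5, 10.0   # object heights in meters
far = (85.0, 5.0)    # telephoto from far away: (focal mm, subject distance m)
near = (17.0, 1.0)   # wide angle up close, chosen to keep the subject size

for focal, dist in (far, near):
    s = image_size(focal, subject_h, dist)
    b = image_size(focal, background_h, dist + 20.0)  # background 20 m behind
    print(f"f={focal}mm, d={dist}m -> subject {s:.2f}, background {b:.2f}")
```

In both configurations the subject projects to the same size, but the background shrinks dramatically in the wide-angle shot; computational zoom lets the photographer pick any blend of these perspectives after the fact.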

To utilize the technology, photographers must take what the researchers call a ‘stack’ of images, where each image is taken slightly closer to the subject while the focal length remains unchanged. The combination of an algorithm and the computational zoom system then determines the camera’s orientation and position based on the image stack, followed by the creation of a 3D rendition of the scene with multiple views.

“Finally,” UCSB researchers explain, “all of this information is used to synthesize multi-perspective images which have novel compositions through a user interface.”

The end result is the ability to change an image’s composition in real time using the software, bringing a photo’s background seemingly closer to the subject or moving it further away, as well as tweaking the perspective at which it is viewed. Computational zoom technology may make its way into commercial image editing software, according to UCSB, which says the team hopes to make it available to photographers in the form of software plug-ins.


Posted in Uncategorized

 

NVIDIA 3D Vision Discover Test

20 Nov

NVIDIA 3D Vision Discover Test. You can watch the movie in 3D using red/blue (anaglyph) glasses.

 

Posted in 3D Videos