
Posts Tagged ‘deepfaked’

Microsoft unveils a new AI-powered tool for spotting deepfaked images and videos

03 Sep

On Tuesday, Microsoft introduced Video Authenticator, a new AI-powered tool that analyzes still images and videos to determine the likelihood that they feature digital manipulation. The tool is designed to address the growing problem of ‘deepfakes,’ a type of highly realistic manipulated content generated or modified by artificial intelligence systems.

Deepfake refers to images, videos and audio modified using AI tools. Though the technology can be used creatively, it is most closely associated with manipulating media to depict something that never happened, for example, a video of a politician saying something they never said or doing something they never did.

Because these deepfakes are created using machine learning algorithms, the resulting content is typically very high quality and difficult, sometimes impossible, to distinguish from authentic media just by looking at it or listening to it. One countermeasure to AI manipulation is another AI trained to spot the changes.

Microsoft has introduced Video Authenticator under its Defending Democracy Program, pointing out that dozens of ‘foreign influence campaigns’ targeting countries around the world have been identified in the past several years. Some of these campaigns aim to push the public toward certain beliefs or ideologies; others attempt to stir up debate and further polarize groups against each other.

Of 96 different campaigns identified (PDF) in part with support from Microsoft, 93 percent involved original content, which can be particularly difficult to detect. Microsoft explains that while ‘no single technology will solve the challenge of helping people decipher what is true and accurate,’ its Video Authenticator is an important tool that will help counteract disinformation by detecting subtle evidence of AI involvement in a piece of media’s creation.

Though Video Authenticator isn’t a long-term solution to what is inevitably an evolving technology, Microsoft explains that ‘in the short run, such as the upcoming U.S. election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes.’

Video Authenticator works by analyzing every frame in a video, as well as still images, and assigning each a ‘percentage chance,’ also called a confidence score, that indicates its likelihood of having been manipulated. When analyzing videos, Video Authenticator presents users with a real-time percentage for each frame.

In a sample provided by Microsoft, the tool isn’t able to detect evidence of manipulation in every frame; some frames pass without triggering the system, while others contain enough subtle greyscale anomalies, blending boundaries, fading or other artifacts to trigger detection.
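To make the per-frame scoring idea concrete, here is a minimal sketch of what a frame-by-frame ‘percentage chance’ workflow looks like. This is not Microsoft’s method: Video Authenticator uses a trained model, while the `manipulation_score` heuristic below (a crude edge-energy measure loosely inspired by the blending-boundary cues mentioned above) and the synthetic ‘video’ are purely illustrative.

```python
import numpy as np


def manipulation_score(frame: np.ndarray) -> float:
    """Toy per-frame confidence score in [0, 1].

    A real detector would run a trained classifier per frame;
    this stand-in just measures local contrast inconsistency.
    """
    gray = frame.mean(axis=2)  # collapse RGB to grayscale
    # Mean absolute gradients approximate edge energy, a crude
    # proxy for blending boundaries and sharp greyscale shifts.
    gx = np.abs(np.diff(gray, axis=1)).mean()
    gy = np.abs(np.diff(gray, axis=0)).mean()
    # Squash into [0, 1] so it reads like a percentage chance.
    return float(1.0 - np.exp(-(gx + gy) / 50.0))


# A fake three-frame "video" of random 64x64 RGB frames.
rng = np.random.default_rng(0)
video = [rng.integers(0, 256, (64, 64, 3)).astype(float) for _ in range(3)]

for i, frame in enumerate(video):
    print(f"frame {i}: {manipulation_score(frame):.0%} chance of manipulation")
```

The key design point carried over from the article is that the score is reported per frame rather than once per video, which is why some frames of a deepfake can pass while others trigger the detector.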

Ultimately, Video Authenticator is just the start. Microsoft explains:

We expect that methods for generating synthetic media will continue to grow in sophistication. As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods. Thus, in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media. There are few tools today to help assure readers that the media they’re seeing online came from a trusted source and that it wasn’t altered.

Microsoft isn’t making Video Authenticator available to the public, a precaution intended to keep the tool from being probed and gamed in ways that would undermine the effort.

Video Authenticator is the latest example of a deep learning algorithm designed to counter the negative use of other AI algorithms. Last year, for example, Adobe Research and UC Berkeley introduced a method for detecting subtle face manipulations made using the Face Aware Liquify tool in Photoshop.

Conversely, we’ve also seen AI-based technologies that empower users to better protect themselves in this new digital landscape. Most recently, researchers with the University of Chicago SAND Lab released a free tool that uses AI to subtly ‘cloak’ images of one’s own face in order to poison facial recognition algorithms trained to recognize them.

Articles: Digital Photography Review (dpreview.com)

 

 

Twitter will start labeling manipulated and deepfaked content

06 Feb

Twitter plans to crack down on ‘deceptively edited’ and deepfaked media content by labeling tweets that contain these images and videos. According to Reuters, the company also plans to remove edited or computer-generated content in cases where it may put someone’s privacy or safety at risk, suppress voters or cause ‘widespread civil unrest.’

Twitter’s decision comes ahead of the contentious 2020 presidential election season, which is expected to attract huge quantities of manipulated and deepfaked content intended to shape the outcome of the election. Other major online platforms have enacted similar rules about deepfaked content, though some critics say they aren’t taking a hard enough stance against it.

Facebook, for example, likewise labels fake or otherwise altered images as ‘false,’ but won’t remove this type of content unless it is a video generated using AI to make it appear that someone is saying something they never said. According to Reuters, Twitter will take into account the text in a tweet and ‘other contextual signals’ to decide whether a post will be labeled as false or removed entirely.
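The label-versus-remove policy described above can be summarized as a simple decision rule. The sketch below is an illustrative restatement of the reported policy, not Twitter’s actual implementation; the function and parameter names are hypothetical.

```python
def moderation_action(is_manipulated: bool,
                      risks_safety: bool = False,
                      suppresses_voters: bool = False,
                      risks_civil_unrest: bool = False) -> str:
    """Sketch of the reported policy: manipulated media is labeled
    by default, and removed only when it poses specific harms."""
    if not is_manipulated:
        return "no action"
    # Per the Reuters report: remove when the content may endanger
    # privacy/safety, suppress voters, or cause civil unrest.
    if risks_safety or suppresses_voters or risks_civil_unrest:
        return "remove"
    # Otherwise the tweet stays up with a warning label attached.
    return "label"


print(moderation_action(True))                     # label
print(moderation_action(True, risks_safety=True))  # remove
```

The unresolved part, as the article notes, is detection itself: this rule only applies once content has already been identified as manipulated.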

It remains unclear how Twitter’s platform will detect manipulated and deepfaked content.


 