
Despite risks, deepfake AI technology has enterprise potential

The use of GANs to duplicate data has been portrayed as one of AI's biggest potential risks -- but enterprises can also use deepfake technology for positive content production.

Viral videos show Barack Obama criticizing Donald Trump, Mark Zuckerberg bragging about how Facebook owns its users, and Nicolas Cage as Indiana Jones -- and James Bond. Sound plausible? Maybe, but in this case, the offending videos are deepfakes.

Deepfakes are counterfeit, simulated video or audio created using generative adversarial networks (GANs) -- a technique in which two machine learning models examine a data set and compete against each other to infer what additional data belongs in it.
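
To make that adversarial setup concrete, here is a minimal, hypothetical sketch in PyTorch. The toy data, model sizes and training settings are illustrative assumptions, not drawn from any production deepfake system: a generator learns to produce synthetic samples while a discriminator learns to tell them apart from the real data.

```python
# Toy GAN sketch: a generator and a discriminator compete over a small 2-D data set.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

# Generator: maps random noise to synthetic samples that mimic the real data.
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: scores how likely a sample is to come from the real data set.
discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

real_data = torch.randn(256, data_dim) * 0.5 + 3.0  # stand-in for a real data set

for step in range(1000):
    # Train the discriminator to separate real samples from generated fakes.
    noise = torch.randn(64, latent_dim)
    fake = generator(noise).detach()
    real = real_data[torch.randint(0, len(real_data), (64,))]
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, latent_dim))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, generator(torch.randn(n, latent_dim)) yields synthetic samples
# resembling the original data -- the same contest that powers deepfake video.
```

Real deepfake tools apply this same contest to images and audio at a far larger scale.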

Easily available apps, such as FakeApp, DeepFaceLab and MachineTube, have made this once specialized technology accessible to anyone, whether their intent is lighthearted or sinister. While companies such as Google and Facebook are working to curb the popularity and spread of these videos, deepfake AI technology is still something every enterprise needs to be aware of.

GANs for good

While deepfakes have earned a bad reputation from their association with X-rated videos and political scams, the core deepfake technology -- generative adversarial networks -- is not inherently malicious or misleading.

"The purpose behind inventing [generative adversarial networks] was to create the ability to augment data sets if you didn't have enough data, or if you have incomplete data," said Michael Clauser, head of data and trust at Access Partnership, a global tech policy consultancy. "This is a really powerful artificial intelligence that can create near data and similar data."

Synthetic data from GANs has been used to train algorithms for everything from detecting breast cancer to studying dark matter, and several research teams are using it to train autonomous vehicles. Disney used a deepfake to add a young Harrison Ford as Han Solo into its newest Star Wars films. In these benevolent use cases, the technology is generally referred to as "synthetic video" rather than a "deepfake."


This technology has the potential to be an asset to enterprises in content production, particularly when it comes to personalized content. Businesses that rely on mass personalization need to increase the volume and variety of content they can produce -- and GANs' simulated data can help, said Andrew Frank, research VP and analyst at Gartner.

"Content production is still rather expensive. I think there is a transformation that uses more computer techniques to generate a lot more video communications than was previously feasible," Frank said.

GANs are also important for U.S. technological leadership, particularly in the U.S.-China artificial intelligence race. Because of its surveillance laws, policies and history, China has access to a trove of citizen data that the U.S. doesn't. While the U.S. has increased its surveillance activities, it has also enacted many restrictions on surveillance, which limits the data pool available for AI research.

"AI supremacy is determined by access to training data in order to get better trained, more intelligent algorithms and AI systems," Clauser said. "For the U.S. to compete it needs to create data that it doesn't have and that China does have, especially when it comes to facial recognition, motion picture video, surveillance and audio logs."

How to mitigate risks of deepfakes

While there are benevolent and positive uses of this technology, the risks associated with deepfake AI more often dominate the conversation. In 2019, thieves stole $243,000 from a company by calling the office after business hours and using an audio deepfake of the CEO's voice to instruct the managing director to transfer money in order to avoid late payment fines.

Social media companies are grappling with how to handle this increasingly prevalent form of disinformation. Google, Twitter, Facebook and Reddit have all made various policy changes in recent months to balance the risks posed by deepfake videos with freedom of speech on their platforms. However, Frank said there aren't any technical solutions that specifically protect against deepfakes yet, so he recommends focusing on process-oriented measures.


"This is just an extension of any public relations response mechanism that deals with escalating situations," he said. "If one is keeping track of all of the kind of things that would require some kind of crisis management, this goes on the list."

As technological solutions develop in the coming years, Clauser believes that businesses should take a risk-based approach because security and prevention are expensive.

"If you're a bulge bracket bank or a nuclear power company, your posture toward a deepfake threat factor should be quite different than if you're a candy company or a small business," Clauser said.

The emergence of deepfakes has created a demand for video authentication tools to help viewers and publishing platforms distinguish between real videos and synthetic or deepfake videos.

"There has been talk about using blockchain technology to authenticate the provenance of video by capturing something in the camera when a video is recorded that would authenticate its origin," Frank said. "These are similar to some of the techniques that are being used to authenticate the origin of physical products to fight counterfeiting and make sure that there are no leaks in the supply chain."

Regaining trust

The way social media has evolved has led to a huge loss of trust in digital content. According to a 2017 Pew Research Center study, only 5% of web-using U.S. adults have a lot of trust in the information they get from social media. Deepfakes thrive in -- and are a product of -- this atmosphere of distrust. The current concern about deepfakes centers on potential political fallout, but the technology has ramifications for any person or business operating in the digital realm.

The existential question for brands is how they regain customers' trust and establish a dependable reputation in a world where people no longer inherently believe what they see on their screens. The answer, according to Frank, is authenticity.

"Brands really need to think about how they can establish more direct relationships," Frank said.
"Brands are becoming too dependent on artifice when they do things like deploy synthetic customer service representatives, which initially appear to be real people and then you later discover that they are not. All of that contributes to a general loss of trust, so maybe rediscovering the human element is the key to fighting all of this."
