Most of us are raised with the guiding principle that “seeing is believing.” But that motto may soon expire with the rapid rise of “deepfakes,” a technique that uses artificial intelligence and machine learning algorithms to synthesize ultra-realistic human video and audio.
The technology, complex as it may sound, is far from exclusive to professionals. In the consumer space, deepfake mobile apps, notably Russia’s FaceApp and China’s ZAO, have enjoyed overnight popularity among entertainment-seeking users who don’t mind giving away a bit of facial data so they can make their faces look younger, older, or like a different gender, or graft them onto the body of a celebrity.
On the other side of the coin, though, the boom in this so-called “democratized deepfake technology” has spurred a wave of detection tools designed to expose synthetic media and keep the technology from being exploited to create harmful content, such as fake news and propaganda material.
“This technology is amoral,” Shamir Allibhai, founder and CEO of Amber Video, a platform for authenticating video and audio, said at Bloomberg’s Sooner Than You Think conference in New York this week. “Deepfake can be used for entertainment purposes, like creating satire or bringing back Marilyn Monroe in the next feature film. And we are all going to be laughing. But there’s a seriousness—we still want to preserve truth.”
Similar to how a deepfake is created, Amber Video uses artificial intelligence and signal processing to identify whether an audio or video file has been maliciously altered. The two-year-old company has attracted a loyal client base, mostly journalists, Allibhai said. However, he’s not optimistic that the fighting-fire-with-fire approach will win out in the long run.
“Ultimately I think it’s a losing battle,” Allibhai said. “The whole nature of this technology is built as an adversarial network where one tries to create a fake and the other tries to detect a fake. The core component is trying to get machine learning to improve all the time…Ultimately it will circumvent detection tools.”
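The adversarial dynamic Allibhai describes, one model learning to generate fakes while another learns to detect them, is the core idea behind generative adversarial networks. The toy sketch below illustrates that cat-and-mouse loop on one-dimensional data; it is a simplified illustration of the general technique, not Amber Video’s actual system, and all names and numbers in it are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# "Real" data: scalar samples clustered around 4.0 (arbitrary choice).
REAL_MEAN = 4.0

# Generator g(z) = a*z + b starts far from the real distribution.
a, b = 1.0, 0.0
# Detector D(x) = sigmoid(w*x + c) scores how "real" a sample looks.
w, c = 0.0, 0.0

lr, batch = 0.05, 32
for step in range(2000):
    real = rng.normal(REAL_MEAN, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Detector step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: adjust (a, b) so fakes score higher under D.
    d_fake = sigmoid(w * fake + c)
    grad_out = (1 - d_fake) * w  # gradient of log D(fake) w.r.t. the fake sample
    a += lr * np.mean(grad_out * z)
    b += lr * np.mean(grad_out)

# After training, the generator's output has drifted toward the real data.
final_fakes = a * rng.normal(0.0, 1.0, 1000) + b
print(round(float(np.mean(final_fakes)), 2))
```

As the detector gets better at separating real from fake, the generator is pushed to produce samples the detector can no longer flag, which is exactly why Allibhai expects detection alone to be a losing battle: each side’s improvement trains the other.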
Addressing the challenge, he added, would require action from multiple stakeholders in the content ecosystem, including distribution channels and lawmakers.
“One of the areas we’ve talked a lot about is the weaponization of this deepfake technology against women, putting their heads into pornography. That’s where I feel there can be a strong legislative approach,” Allibhai explained. “Ultimately you can’t legislate against the technology itself, because it’s out there and it’s getting better with each new version. You can’t control what’s on the internet. But I do think being able to force the Facebooks, the Twitters, the YouTubes not to share that knowledge is the first step.”