I know what you’re thinking: I sure hope someday there will be a really scary, really easy way to create fake videos and misinformation, ones in which people appear to say words they never actually said.
Rest assured, this Philip K. Dick dystopian nightmare… is now a reality.
Scientists from Stanford University, the Max Planck Institute for Informatics, Princeton University and Adobe Research have developed software that lets you edit and change what people are saying in videos, and pull it off as a realistic-looking fake.
Yikes.
Glad this will never fall into the wrong hands and cause a major international incident.
Just when being catfished via Tinder or Twitter wasn’t bad enough, now there’s software that uses machine learning and a 3D model of a target’s face to generate new footage, letting you change, edit or remove the words coming out of a person’s mouth on video simply by typing new text. And these changes play back with a seamless audio-visual flow, no jump cuts.
I could break down exactly how the software’s technology works, but there’s actually a video that does all of that for me.
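Still, the rough recipe is worth spelling out. In toy form (and to be clear, this is my own hypothetical Python sketch, with made-up names like Viseme, phonemes and find_segments, not the researchers’ actual code), the pipeline transcribes the original footage, aligns it to phonemes, hunts for snippets where the speaker already mouths the sounds the new text needs, and stitches those snippets back together:

    from dataclasses import dataclass

    @dataclass
    class Viseme:
        phoneme: str   # sound label (toy version: one per word)
        start: float   # start time in the source video, in seconds
        end: float     # end time, in seconds

    def phonemes(text):
        # Toy stand-in for a real grapheme-to-phoneme model:
        # treat each lowercase word as a single "phoneme".
        return text.lower().split()

    def find_segments(new_phonemes, source_visemes):
        # The real system searches the whole recording for the
        # best-matching viseme subsequences; this toy version just
        # takes the first exact label match for each sound.
        segments = []
        for p in new_phonemes:
            match = next((v for v in source_visemes if v.phoneme == p), None)
            if match is None:
                raise ValueError(f"no source footage covers {p!r}")
            segments.append(match)
        return segments

    def render(segments):
        # The real pipeline blends 3D head-model parameters across
        # segment boundaries and feeds them to a neural renderer;
        # this just prints which source frames would be stitched.
        for s in segments:
            print(f"reuse {s.start:.2f}s-{s.end:.2f}s for '{s.phoneme}'")

    source = [Viseme("i", 0.0, 0.2), Viseme("never", 0.2, 0.6),
              Viseme("said", 0.6, 0.9), Viseme("that", 0.9, 1.2)]
    render(find_segments(phonemes("I said that"), source))

Run that and footage of someone saying “I never said that” gets recut into “I said that.” In the actual system, the blending of 3D face parameters plus neural rendering is what smooths over the seams, which is why the edits show no visible jump cuts.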
Like I mentioned, there’s already an unlimited pool of ways this technology could be harnessed for evil purposes. Let’s keep this far, far away from any super-villains. Right?
Remember how mad people got the other week when someone re-edited a video of Nancy Pelosi to make her appear drunk? Now, just imagine if someone used this software to make it appear as if Nancy Pelosi is swearing like a drunken sailor—or spewing racist rhetoric—and then that video is leaked out into the world.
Or imagine after the Charlottesville Unite the Right white supremacist rally, if someone took footage of Trump and manipulated it to make it appear like he was saying there were “very fine people on both sides…” Oh wait, he actually did say that… never mind.
It seems like deepfake software is the equivalent of Christmas coming early for a Russian troll farm—now that the 2020 election season is underway. We already have revenge porn in the world; just imagine what words a jilted lover could put into his or her ex’s mouth before sending a deepfake video off to the ex’s family members. Ugh.
Since Adobe Research is involved in the development process, I’m sure it will only be a few years before this deepfake tool pops up in an update to Adobe’s video editing software, Premiere Pro. How could this possibly go wrong?
The project’s research site is packed with huge disclaimers and paragraphs on ethical considerations:
We also believe that it is essential to obtain permission from the performers for any alteration before sharing a resulting video with a broad audience.
And…
We acknowledge that bad actors might use such technologies to falsify personal statements and slander prominent individuals. We are concerned about such deception and misuse.
More and more, deepfakes seem to be popping up everywhere. Two artists, Bill Posters and Daniel Howe, collaborated with the advertising company Canny to create a video of Mark Zuckerberg sitting at a desk giving a sinister speech about Facebook’s power.
Deepfake Zuck was created by applying CannyAI’s video dialogue replacement (VDR) technology to a September 2017 video of Zuckerberg giving an address about Russian election interference on Facebook. The video was posted to put Facebook’s content moderation policies to the test.
Joe Rogan was also given the deepfake treatment by the AI company Dessa, which recently released audio that sounds uncannily like the podcaster talking about chimp hockey.
Good thing this is only going to be used for the purposes of good, right? No ‘bad actors’ have ever used software for not-so-good purposes.
Best Case Scenario: Great tool for fixing glitches in post-production when producing a documentary and for making funny Snapchats.
Worst Case Scenario: World War III