So, we’ve already warned you of the dangers of deepfakes. Security experts have cautioned that deepfakes could play a sinister role in the 2020 election. And we’ve already seen the mayhem that erupted when a Nancy Pelosi video was slowed down to make it appear as though she were drunk. Though not a deepfake, the footage showcased how fast an altered video can go viral and make people question the validity of what they are seeing.
Now, to bring us up to speed: a deepfake is like a digital puppet. It uses machine learning and 3D models of a target’s face to manipulate a person in a video, as well as edit and change what he or she says. As the technology advances, these changes appear more and more as a seamless audio-visual flow without jump cuts.
What problems could occur with that?
Well, let’s say during election season, someone creates a deepfake of a candidate spewing racial epithets. The video could easily go viral before it could be proven to be a fake, and the damage would already be done.
In the world of deepfakes, so much attention is focused on the havoc that could potentially be wreaked by altering and/or manipulating the faces of political leaders, celebrities and ordinary people.
But that’s just the beginning. Ready for the next level of deepfakery?
There’s actually much more damage these deep learning algorithms can do to destroy people’s lives. Imagine this if you will: synthetic media technology that’s capable of creating… full-body deepfakes.
Holy great balls of deepfake fire! Yes, it was only a matter of time.
Before I go into the nuts and bolts of how this works, let’s take a breath and think of all the possible hell-raising this could cause. So, instead of breaking news reports speculating about President Donald Trump’s fabled “Pee Tape,” deepfakers could potentially create a full-body mimicry of said act and leak it, so to speak, out into the world. (Yuck.)
The seeds of full-body deepfakes were already in place back in those archaic days of August 2018, when University of California, Berkeley researchers presented a paper called Everybody Dance Now. The premise of their research is that deep learning algorithms can transfer a professional dancer’s moves onto the bodies of amateurs.
In the words of the paper:
Given a source video of a person dancing, we can transfer that performance to a novel (amateur) target after only a few minutes of the target subject performing standard moves. We approach this problem as video-to-video translation using pose as an intermediate representation. To transfer the motion, we extract poses from the source subject and apply the learned pose-to-appearance mapping to generate the target subject.
Though the technology is still a little on the primitive side, it’s pretty cool stuff, as it can transfer an entire body’s movements from a source subject to a target subject.
So, this is a technology that is great for making amateur flatfooted people who can’t dance look like professional hoofers. Using the right video, you could make any average person appear to dance like Baryshnikov.
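To make the “pose as an intermediate representation” idea from the Berkeley paper concrete, here is a deliberately toy sketch. Everything in it is a stand-in: `extract_pose` would really be a pose-detection network and `pose_to_appearance` would really be a trained image-to-image generator; the function names, the keypoint format, and the scaling “style” are all hypothetical, chosen only to show the shape of the pipeline (frames in, poses extracted, poses re-rendered as the target subject).

```python
import numpy as np

def extract_pose(frame):
    """Stand-in pose extractor: returns (x, y) keypoints.
    A real system would run a pose-detection network on each frame."""
    # For this toy, pretend each frame directly encodes its keypoints.
    return frame.reshape(-1, 2)

def pose_to_appearance(pose, target_style):
    """Stand-in for the learned pose-to-appearance mapping:
    'renders' the target subject in the given pose.
    A real system would be a trained generative network."""
    return pose * target_style  # toy transform, not a real renderer

def transfer_motion(source_frames, target_style):
    """Transfer the source dancer's motion onto the target subject,
    frame by frame, using pose as the intermediate representation."""
    return [pose_to_appearance(extract_pose(f), target_style)
            for f in source_frames]

# Toy "video": 3 frames, each holding 4 (x, y) keypoints.
source_frames = [np.arange(8, dtype=float).reshape(4, 2) + t
                 for t in range(3)]
target = transfer_motion(source_frames, target_style=0.5)
print(len(target), target[0].shape)  # 3 frames of rendered keypoints
```

The point of the structure, not the math: because pose sits in the middle, the source dancer and the target amateur never need to look alike. Only the motion crosses over; the appearance is supplied entirely by the learned mapping for the target subject.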
But UC Berkeley is not alone in its body manipulation research. In April, Data Grid, a Japanese artificial intelligence company, created artificial intelligence (AI) that can automatically generate virtual models for advertising and fashion. The company’s AI technology can generate whole-body models of people who don’t exist in reality using Generative Adversarial Networks (GANs).
Data Grid’s technology is great for cutting costs in the fashion and apparel industries: instead of hiring those bothersome, expensive models, companies can simply create a realistic-looking AI model.
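For readers curious how a GAN can produce a “person who doesn’t exist,” here is a minimal, untrained sketch of the two pieces involved. This is not Data Grid’s system; the weights are random stand-ins, the 64-pixel “image” is a placeholder, and all names are illustrative. The idea is just that a generator maps random noise to an image, while a discriminator scores realism, and training (omitted here) pits the two against each other until the fakes look real.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 16   # size of the random noise vector
IMG_PIXELS = 64   # pretend this is a tiny "full-body model" image

# Random, untrained weights; real GANs learn these adversarially.
G_W = rng.normal(size=(IMG_PIXELS, LATENT_DIM)) * 0.1  # generator
D_W = rng.normal(size=(1, IMG_PIXELS)) * 0.1           # discriminator

def generator(z):
    """Map latent noise to a fake image with pixel values in [-1, 1]."""
    return np.tanh(G_W @ z)

def discriminator(img):
    """Score how 'real' an image looks, as a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(D_W @ img)))

z = rng.normal(size=LATENT_DIM)
fake_model = generator(z)                       # a "person" from pure noise
score = float(discriminator(fake_model)[0])     # untrained, so near 0.5
```

Every distinct noise vector `z` yields a distinct fake, which is why a trained version of this setup can churn out an endless supply of virtual models without photographing anyone.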
In a way, deepfakes are democratizing what Hollywood has done for years with CGI technology by putting the tech in the hands of anyone who downloads the app.
But there’s more. Dr. Björn Ommer, a professor of computer vision at the Heidelberg University Collaboratory for Image Processing (HCI) & Interdisciplinary Center for Scientific Computing (IWR), is one of the authors of the research paper Towards Learning a Realistic Rendering of Human Behavior.
Ommer’s team is also developing full-body synthetic media. Their goal is to have AI learn human movements from video. The algorithm can render a person in a particular pose, transferring movements from a source video onto a target subject. One of the scenarios presented could make you look like you’re doing 100 chin-ups.
Pretty amazing technology for those who want to look like a chin-up champ; but in the world of politics, the technology could be a different, more problematic story, especially once full-body deepfakes become impossible to distinguish from the real thing.
So it’s a good thing no one will ever, ever use this technology for malicious means. Right?