We all need to get on the same page when it comes to fake news — now, not the day after Election Day 2020.
Whether we’re talking about a disinformation campaign on Twitter and Facebook, a doctored video (such as the one that hurt Hillary Clinton in 2016), or the recent video of Speaker Nancy Pelosi that made her look drunk (she wasn’t), we need to talk about what we mean when we talk about “fake news” (real fake news), and how we can deal with it before it creates another questionably legitimate president.
While the Pelosi video wasn’t a deepfake, I find myself thinking about the specter of deepfakes with increasing frequency as the next presidential election approaches.
Manipulation is bad enough
The Pelosi “drunk” tape was manipulated. What the perpetrators did there was pretty old school, and about as simple as can be. The video was slowed down by 25 percent. The result created the auditory illusion of the Speaker of the U.S. House of Representatives slurring her words. In layman’s terms: She appeared to be drunk as a skunk, three sheets to the wind, wasted.
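To appreciate just how low the technical bar is, here is a minimal sketch of that kind of slowdown in Python, driving the ffmpeg command-line tool. (This illustrates the general technique, not the actual tooling used; ffmpeg, the file names and the helper function are my own assumptions.)

```python
import subprocess

def slow_down(src: str, dst: str, speed: float = 0.75) -> None:
    """Re-encode a clip at reduced playback speed using ffmpeg.

    setpts stretches the video timestamps by 1/speed; atempo slows
    the audio tempo (0.75 is within atempo's supported range).
    """
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", src,
            "-filter_complex",
            f"[0:v]setpts=PTS/{speed}[v];[0:a]atempo={speed}[a]",
            "-map", "[v]", "-map", "[a]",
            dst,
        ],
        check=True,
    )

# A "25 percent slowdown" means 0.75x speed: each second of the
# original stretches to roughly 1.33 seconds, and speech starts to slur.
slow_down("speech.mp4", "speech_slowed.mp4", speed=0.75)
```

That is the entire attack: one filter pass over the clip, no machine learning required.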
In a world where fakes can be very high tech, we might do well to call the Pelosi manipulation a “shallow fake.” And while it was “fake lite,” it’s a serious problem, not to be underestimated. For starters, while many social sharing platforms took down the video, Facebook did not, because, essentially, that’s not how it rolls. (Memo to Zuckerberg: Please change your policy before the 2020 election.)
While such low-tech smear tactics could sway popular opinion (and, with that, the way folks vote), the Pelosi “drunk tape” is by no means representative of the technology signified by the term “deepfake.”
A deepfake is a video that has been significantly altered using digital means, specifically AI-generated imagery. It’s based on a broader field of AI research and applied science called “deep learning.”
Deepfakes use that deep learning technology to predict how an existing image would look in an image field to which it is non-native, performing actions with which it has never been associated. By superimposing one image over another, this sort of doctored video effectively creates an amalgam. The resulting visual is a digital puppet: the person behind the “edit” can make the “person” in the faked situation do or say anything they’d like.
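To make the mechanics concrete: the classic recipe behind early face-swap deepfakes is an autoencoder with one shared encoder and a separate decoder per identity. Train each decoder to reconstruct its own person’s faces, then swap decoders at inference time, and person A’s pose and expression come out wearing person B’s face. The sketch below is a bare-bones version of that idea in PyTorch (my choice of framework; no specific tool is named here), with random tensors standing in for real face crops and none of the face detection, alignment or blending a real pipeline needs:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: compresses any 64x64 face crop to a latent code."""
    def __init__(self, latent: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: renders one person's face from the code."""
    def __init__(self, latent: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training: each decoder learns to reconstruct its own person's faces
# from the shared latent space. Random tensors stand in for real data.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)
opt = torch.optim.Adam(
    [*encoder.parameters(), *decoder_a.parameters(), *decoder_b.parameters()],
    lr=1e-4,
)
loss = (nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)
        + nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b))
opt.zero_grad()
loss.backward()
opt.step()

# The "swap": encode person A's frame, decode with person B's decoder.
with torch.no_grad():
    puppet = decoder_b(encoder(faces_a))  # B's face, A's pose and expression
```

Because the encoder is shared, it learns pose and expression in terms common to both identities, which is precisely what makes the swapped output a convincing digital puppet.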
The biggest problem, until lately, has been making the deepfake believable. It no longer is.
The best-known examples of widely distributed deepfake content have been pornographic, and featured celebrities, among them Gal Gadot, Scarlett Johansson and Daisy Ridley. While the current porn version of this attack is both lurid and hurtful to its victims, the deepfake has not yet reared its ugly head in the contact sport that is our electoral process.
“Yet” is the operative word here. Unfortunately, it will probably take a debut during an election cycle for the deepfake to be recognized as the threat to democracy that it is.
The most famous example in politics probably belongs to Jordan Peele, who made a PSA “featuring” President Obama a year ago (or perhaps featuring “Obama” is more accurate). There are other examples. In January, a Seattle Fox affiliate ran a badly doctored clip of Donald Trump, but that instance illustrated, more than anything, the inherent difficulty of creating a believable deepfake.
Deepfakes could kill democracy
Peele’s video demonstrated the AI technology available at the time, and it has only improved since. The Department of Defense is working on ways to identify deepfakes, which should signal that at least one government agency believes they represent a significant threat to our national security.
Even if a deepfake is terrible — even if it is as obvious as the Pelosi “drunk tape” — it may still gain traction because people are distracted. Then there is the rather large segment of the population that would believe a talking squirrel if it told them what they wanted to hear.
This matters because an increasing number of elections in this country are being decided by razor-thin margins. Winning or losing can hinge on a small number of not even remotely clear-thinking voters believing they saw Nancy Pelosi drunk, or Vladimir Putin saying that Donald Trump scares him. And, of course, the word of mouth resulting from a video barely watched and widely misreported might also be enough to tip an election in one direction or another.
Fast forward to the Friday or so before Election Day. A deepfake video drops. It will take a while to be sorted out and debunked, and by then? Say hello to Mr. or Mrs. May-The-Worst-Candidate-Win.
There was no time to let the news cycle right-size then-FBI Director James Comey’s decision to make an election-swaying announcement about Hillary Clinton’s emails, and there will be no time to undo the Pandora’s box of ideation that can spring from one well-produced deepfake video hitting exactly the right note to move the needle of the American electorate.
Adam K. Levin is chairman and founder of CyberScout (formerly IDT911) and co-founder of Credit.com. He is a former director of the New Jersey Division of Consumer Affairs and is the author of Swiped: How to Protect Yourself In a World Full of Scammers, Phishers, and Identity Thieves, which debuted at #1 on the Amazon Hot New Releases List.