The views expressed by contributors are their own and not the view of The Hill

Forget deepfake videos. Text and voice are this election’s true AI threat. 

An attendee sends a text message during an event with US President Joe Biden, not pictured, at the Wilmington Convention Center in Wilmington, North Carolina, US, on Thursday, May 2, 2024. (Photographer: Cornell Watson/Bloomberg via Getty Images)

Election 2024 represents a flashback in many ways. Biden and Trump again occupy the top of the ticket, and with the country sharply divided, the election will once again come down to a razor-thin margin.

What’s new, however, is artificial intelligence and its rapidly advancing power, for both good and bad. 

It is premature to say exactly how this rapidly evolving technology will transform politics. Still, we do know that the conversation on AI is far too focused on video deepfakes used to change people’s minds on platforms such as TikTok and Facebook. 

A subtler but more pervasive threat looms: AI-generated text and voice misinformation aimed at voter suppression, spread on hard-to-monitor platforms such as private messaging apps. 

It may seem contrarian that we don’t put video deepfakes at the top of our AI election concerns list. Yet for all the headlines, there are only a handful of examples of video deepfakes in politics. And journalists and the general public are more aware and apt to ask, “Is this real?” The proliferation of false videos of President Biden saying obviously ridiculous things has helped promote skepticism. And the much-covered Republican National Committee video from April, a supposed foreshadowing of a second Biden term, was openly acknowledged from the outset as an AI creation and never passed off as real.

In contrast, short-form text messages are easy to produce convincingly at scale, and the hardest to detect as AI-generated. This makes them the most concerning, even if AI voice generation is catching up quickly.

And while setting the record straight, let’s talk about what wins close elections. Trump’s narrow victory in Michigan in 2016 and Biden’s close win in Arizona in 2020 were each by about 10,000 votes, and down-ballot races often hinge on even thinner, sometimes vanishingly small margins. In tight races, voter turnout is decisive, not persuasion.

Persuasion — convincing people to believe in a candidate or an ideology — is quite challenging; the number of persuadable voters has declined, while political advertising mostly has little or no persuasive effect. Voter turnout is the easier route to changing electoral outcomes. And there is a straightforward strategy for political disenfranchisement: lie to people about how to vote. 

Importantly, you can more easily, cost-effectively and convincingly do this in many languages with AI.

“Hi {first name}, it’s Sally from [Vote Secure]. It looks like your drop box location has moved to [insert wrong address]. Any questions? Or just let me know you got this by replying ‘OK’ so I know you’re set.” 

“Hi {first name}, it’s Mario from [Ballot Guard Consortium]. It looks like you haven’t returned your ballot and I wanted to save you time and let you know the deadline has passed. Here’s a link to all the deadlines for next time [insert link to webpage with misinformation on deadlines]. Any questions, just let me know.” 

All you need to do is alert a voter to an erroneous registration form, wrong deadline or incorrect polling location, and a malicious actor will have achieved his or her goals. There is no need for prolonged conversations about beliefs or training an AI model to understand what different types of people might be interested in. Trying to change people’s minds is far more complicated than lying to people about where or how to vote. 

The scariest part of this is where these messages will likely be exchanged. Platforms like Facebook, Instagram, YouTube, X (formerly Twitter) and TikTok are well-known channels for spreading misleading content, but they’re relatively public compared to private messaging services. 

WhatsApp, Telegram, Signal and regular texting are all extraordinarily popular, used for one-on-one and small group conversations, and all but texts are encrypted. That means few friends are there to weigh in on whether a message is legitimate, and reporters cannot see what’s happening. Monitoring for quickly spreading rumors is nearly impossible.

There’s history here. In 2020, WhatsApp became a favorite channel to spread false information, particularly among Latino voters in Florida. (Hispanic adults are twice as likely to use WhatsApp as the general public.)

None of this is to say that video deepfakes or persuasion via more advanced AI techniques are impossible or unworthy of concern. It’s just to say that a potentially more ominous and simpler threat is likely to appear in less visible places in 2024. Swing voters are scarce, margins are thin and automated text or voice voter suppression campaigns are easy to do and hard to detect.

Like any powerful technology, AI can be used in both constructive and underhanded ways. But if we want to protect voters and democracy between now and November, we need to go beyond panicking about video deepfakes and educate voters about the other likely threats. 

Jessica Alter is a tech entrepreneur and co-founder of Tech for Campaigns. Jonathan Stray is a senior scientist at the Berkeley Center for Human-Compatible AI.


