
We can’t let AI take over the 2024 election

The ChatGPT app is displayed on an iPhone in New York, May 18, 2023. (AP Photo/Richard Drew, File)

Last month, seven leading artificial intelligence companies agreed to voluntary commitments with the White House on AI safety, security and trust. But there’s one glaring and urgent omission: a deadline. No timeline or implementation details have been spelled out, yet there is one deadline we know we need to plan for: November 5, 2024.

Two transformative events in American democracy are headed for a head-on collision: the explosion of generative AI and the upcoming presidential election. Yet the United States is in a Wild West era when it comes to AI-generated content and the use of AI-enabled tools to reach voters and inform campaigns. AI companies, election officials, media organizations, civil society, political campaigns and members of Congress need to come together to secure the 2024 election against AI-enabled manipulation.

Candidates are already trying their hands at AI. An “AI Mayor Suarez” chatbot joined the race, courting constituents with canned campaign answers. The DeSantis campaign released a video featuring deceptive, AI-generated photos appearing to depict Trump embracing Fauci. In a pro-Trump deepfake, Hillary Clinton appeared to endorse DeSantis because “he does what he’s told.” The stunts have varied in their deceptiveness and impact, but one thing is clear: The age of political AI experimentation has officially begun.

Policymakers, news organizations and voters alike must prepare for a wave of automated disinformation as new generative AI tools democratize the powers of manipulation. For years, disinformation has centered on misleading text. In the 2024 election season, a spate of AI-generated audio and visual content will likely pollute the wires as well, enabling new forms of deception. Defense contractor and AI communications firm DeepMedia estimates that 500,000 video and voice deepfakes will be shared on social media this year. Fake images of ballot stuffing or a polling place disruption, especially when paired with AI audio purporting to come from trusted sources or authorities, can today be generated by anyone.

We may see experimentation in AI’s use in targeted messaging and voter sentiment analysis: the 2016 Cambridge Analytica–Facebook voter profiling scandal on steroids. In a worst-case scenario, AI might one day run politics. As tools get refined and prove useful, sophisticated AI systems could pair voter profiles with A/B testing to find winning messages, using AI-generated text to tailor each message to each voter. AI-enabled messaging could tempt candidates to say one thing to key voters in one state and the opposite to those in another, and could even automate the process without a human in the loop. If systems become adept enough at identifying voter sentiment, suggested message framing could readily turn into AI-generated policy positions, and AI campaign promises into actual policy once elected.
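To make the mechanics concrete, here is a minimal, hypothetical sketch in Python of the kind of message-testing loop described above: an epsilon-greedy A/B test that learns which message variant draws the most engagement from each voter segment. Every segment, message and engagement signal below is invented for illustration; this is not a depiction of any real campaign tool.

```python
import random
from collections import defaultdict

# Hypothetical illustration only: an epsilon-greedy A/B test that learns
# which message variant "performs" best for each voter segment. The
# segments, messages and engagement model are all invented for this sketch.

MESSAGES = {
    "suburban": ["Variant A: lower taxes", "Variant B: better schools"],
    "rural": ["Variant A: farm subsidies", "Variant B: broadband access"],
}

def simulated_engagement(segment: str, variant_index: int) -> float:
    """Stand-in for a real click/donation signal; returns a noisy reward."""
    base = 0.3 if variant_index == 0 else 0.5  # pretend variant B resonates more
    return base + random.uniform(-0.1, 0.1)

def run_bandit(rounds: int = 1000, epsilon: float = 0.1) -> None:
    totals = defaultdict(float)  # cumulative reward per (segment, variant)
    counts = defaultdict(int)    # times each (segment, variant) was shown
    for _ in range(rounds):
        for segment, variants in MESSAGES.items():
            if random.random() < epsilon:  # explore a random variant
                choice = random.randrange(len(variants))
            else:                          # exploit the best variant so far
                choice = max(
                    range(len(variants)),
                    key=lambda i: totals[(segment, i)] / max(counts[(segment, i)], 1),
                )
            totals[(segment, choice)] += simulated_engagement(segment, choice)
            counts[(segment, choice)] += 1
    for segment, variants in MESSAGES.items():
        best = max(range(len(variants)),
                   key=lambda i: totals[(segment, i)] / max(counts[(segment, i)], 1))
        print(f"{segment}: converged on {variants[best]!r}")

if __name__ == "__main__":
    run_bandit()
```

The worrying step is the feedback loop itself: once it runs on live engagement data, nothing in it requires a human to review which message each segment ends up receiving.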

AI voice and text generation opens the door to garden-variety scammers looking to make a quick buck by mimicking political outreach to solicit fraudulent donations. Confusing voters about which calls and emails are real could have an overall suppressive effect on political engagement and voter turnout.

This moment of AI frenzy comes against the backdrop of a struggling democratic information space, marked by platform and audience fragmentation and little progress to show against disinformation. Verification on Twitter, which once separated authentic election accounts from imposters, now holds little meaning. Election security cells set up in the past, such as those focused on foreign interference, have not been made permanent. And long-standing calls for stronger data privacy regimes and transparency in political advertising remain unanswered.

There is still time to enact basic rules of the road for political AI, but the window is closing fast. We aren’t starting from regulatory zero. The EU’s AI Act identifies AI tools that could sway voters or influence elections as high-risk systems requiring regulatory oversight. The recent White House AI commitments lean into digital watermarking but provide few details. Policymakers, technology developers, social media platforms and news outlets need to fast-track the adoption of content authenticity architectures that can provide content manipulation histories for all the visual media we see online, authenticating not only what’s fake but also what’s real.
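As an illustration of the core idea behind such architectures, here is a toy Python sketch that binds a signed manifest, including an edit history, to the exact bytes of a media file. Real standards such as C2PA use certificate-based signatures and far richer metadata; the shared-secret HMAC, the demo key and all function names below are simplifications assumed purely for brevity.

```python
import hashlib
import hmac
import json
import time

# Toy illustration of the provenance idea behind content authenticity
# schemes: a signed manifest that records a file's edit history and is
# cryptographically bound to the file's exact bytes. Production systems
# use certificate-based signatures, not a shared-secret HMAC like this.

SIGNING_KEY = b"demo-key-not-for-production"  # hypothetical key

def make_manifest(media_bytes: bytes, edits: list[str]) -> dict:
    """Bind an edit history to the exact bytes of a media file."""
    manifest = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "edit_history": edits,  # e.g., ["captured", "cropped"]
        "issued_at": int(time.time()),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return manifest

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check both the signature and that the media was not altered."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    untampered = hmac.compare_digest(signature, expected)
    unaltered = hashlib.sha256(media_bytes).hexdigest() == claimed["content_sha256"]
    return untampered and unaltered

if __name__ == "__main__":
    photo = b"...raw image bytes..."
    manifest = make_manifest(photo, ["captured"])
    print(verify_manifest(photo, manifest))         # True: authentic
    print(verify_manifest(photo + b"x", manifest))  # False: altered
```

Because the manifest signs a hash of the content itself, any alteration after signing causes verification to fail, which is what lets such schemes authenticate what’s real as well as flag what’s fake.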

Congress needs to put AI’s threat to democracy high on its agenda as it considers AI risks. State and federal legislation such as the REAL Political Advertisements Act, which would require disclosure of AI-generated text, images, video and audio in political content, would provide a much-needed guardrail. In the meantime, campaigns could commit to a voluntary pledge to label all AI-generated content. Finally, the executive branch needs a rapid coordination cell with AI expertise to assess, anticipate and communicate about threats to the 2024 election in real time.

We need to get out ahead of this before the Wild West turns into normalized bad practices. We have failed in the case of textual disinformation. Let’s not make the same mistake with AI.

Lindsay Gorman is a Senior Fellow and Head of the Technology and Geopolitics Team at the Alliance for Securing Democracy at the German Marshall Fund.

