
AI deepfakes are causing chaos in elections across the world. The U.S. isn’t ready.

Olivier Douliery, AFP via Getty Images
This illustration photo, taken Jan. 30, 2023, in Washington, shows a phone screen displaying a statement from Meta’s head of security policy, with a fake video of Ukrainian President Volodymyr Zelensky calling on his soldiers to lay down their weapons shown in the background.

AI-generated TV anchors falsely claimed that a Taiwanese presidential candidate had children out of wedlock. Suspicious audio surfaced of a Slovak party leader making plans to buy votes and raise the price of beer. And a deepfake video emerged of a Bangladeshi opposition politician in a bikini, an image likely to offend voters in the Muslim-majority nation.

With more than 80 countries holding elections this year, the U.S. is getting a disturbing preview of how AI can be used to spread lies and manipulate voters — and how social media companies are failing to contain the damage. 

The boom in generative AI, which has spawned an array of cheap tools capable of churning out hyper-realistic images, video and audio, is coinciding with a wave of global elections in countries encompassing half the world’s population. That is creating an ideal testing ground for bad actors willing to use the technology to interfere in elections and destabilize democratic societies. 

And it is happening at a time when major tech platforms are cutting their trust and safety teams and rolling back their efforts to fight election disinformation, making their platforms even more attractive for those spreading AI-fueled deceptions. 

Taiwan provides an eye-opening example of how this is playing out. As the island’s January election approached, videos popped up on social media of AI-generated TV anchors reading false reports that presidential candidate Lai Ching-te had children out of wedlock. Researchers suspect that the People’s Republic of China, which opposes Lai and other pro-sovereignty politicians, was behind the videos, part of a sprawling disinformation campaign targeting Taiwan’s vote.

In Bangladesh, one fake video — likely generated by AI — showed an exiled opposition figure urging her party to “keep quiet” about Gaza, while another depicted a female opposition politician in a bikini. Both videos, apparently calculated to offend voters in the Muslim-majority nation, circulated ahead of Bangladesh’s January election. 

Slovakia is another alarming case study. Days before the country’s September election, a pair of audio recordings — thought to be manipulated using AI — spread on social media, appearing to show a liberal party leader plotting to buy votes from the country’s Roma minority and raise the price of beer. The vote-buying audio emerged during a 48-hour quiet period ahead of voting, making it even more difficult to debunk.  

Meanwhile, in the U.K., which is likely to hold a general election this year, a slew of AI-generated Facebook ads implicated Prime Minister Rishi Sunak in a fake financial scandal, while deepfake audio of opposition leader Keir Starmer verbally abusing his staff also made the rounds on social media.

Big Tech companies say they take the AI threat seriously. Most have committed to voluntary AI safeguards set by the White House, and some have formed a consortium to fight AI abuse in elections. Meta, the parent company of Facebook and Instagram, plans to expand its labeling of AI-generated content in May 2024 and has called for a common industry standard for identifying AI material. Google-owned YouTube, along with TikTok, requires creators to self-label videos made with AI.

But behind this cloud of public relations, the companies have been undermining their own ability to deal with election-related disinformation. Many of the platforms have slashed their trust and safety teams over the past two years, gutting the internal units charged with monitoring and flagging content that violates their policies. Meta and YouTube have begun allowing false claims that the 2020 election was rigged or stolen, reversing policies that previously banned such content. And Elon Musk has boasted about fully dismantling X’s election integrity team.

These Big Tech policy rollbacks, and the AI-fueled chaos playing out in elections around the world, paint a dark picture of what awaits the U.S. ahead of its own election in November. We have already seen glimpses of what to expect here, including a New Hampshire robocall that used AI to mimic President Biden’s voice and spread disinformation about voting, and fake AI-generated images that showed Donald Trump posing with Black voters.

Such dirty tricks may be just the beginning of how AI can be used to distort voter perceptions. The wide availability of tools that make it easy to create AI content, and social media companies’ unwillingness to police election lies, are creating the perfect conditions for unprecedented voter manipulation. As the parade of 2024 global elections continues, including the massive vote underway in India, the U.S. will likely get further previews of how much damage deepfakes can cause.

It’s not too late for the major tech platforms to recalibrate their approach. By rebuilding their trust and safety teams, tightening their election protection policies, and devoting sufficient resources to enforcing those policies, the companies could make it much harder for those trying to use AI to interfere in the democratic process. Unfortunately, that doesn’t appear to be part of their corporate plans.   

Katie Paul is director of the Tech Transparency Project, which seeks to hold large technology companies accountable.

