Deepfakes didn’t disrupt the election, but they’re changing our relationship with reality 

At the start of this year, experts warned of chaos in the 2024 U.S. election, fueled by artificial intelligence and deepfakes that could sway public opinion and muddy the truth. According to a recent Pew poll, over half of Americans were deeply concerned about AI’s potential to disrupt voting and spread disinformation. Yet as Election Day came and went, fears of AI upending the democratic process turned out to be overblown. 

Deepfakes did appear here and there, of course — scattered political ads, memes, viral videos, even fake endorsements. Foreign players like Russia tapped AI to enhance their propaganda efforts, though the U.S. Intelligence Community noted in September that these tools did not “revolutionize” foreign interference attempts. Even Betsy Hoover, a political tech insider, observed that 2024 was not the “breakthrough year” for AI in politics that many had feared.

The deeper reality, however, is not that deepfakes failed to fool voters; it’s that these technologies have redefined how we perceive truth. 

Studies show that even when people are warned they’re about to see a deepfake, they still fail to detect it roughly four out of five times. But what’s even more troubling is that, in our deeply divided society, many viewers don’t care if the content is fake. The true influence of deepfakes isn’t in their ability to deceive but in their power to let people see the truth they want to believe. Synthetic media is uniquely capable of catering to our innate desire for simplified, emotionally satisfying narratives.

Take the recent case of Hurricane Helene. An AI-generated image went viral, showing a child and a dog stranded in floodwaters. While not real, it resonated deeply with viewers and became the basis of online debates over governmental responsibility and disaster response. People who were thousands of miles away argued passionately in comment sections, often disregarding real accounts from actual rescue workers. The deepfake image wasn’t just misleading people; it was giving them an emotionally charged version of events that felt “truer than reality” because it aligned with their preconceptions. 

In this way, deepfakes tap into a fundamental human tendency: our need for clear narratives. We are not merely consumers of information; we are narrative-driven creatures, constantly crafting stories that help us make sense of the world. AI-generated content — whether deepfake images, videos or audio — feeds this appetite by providing stories that confirm our biases, stir our emotions and demand minimal critical thought. It’s this very appeal that makes synthetic media such an effective tool for disinformation. Manipulators don’t need to convince us something is true — they just need to make it feel true enough. 

These dynamics make society ripe for exploitation. Political actors and propagandists no longer need to persuade us that their content is real; they only need to ensure it aligns with our biases. AI tools function as a “customized reality machine,” churning out tailored narratives for specific audiences and reinforcing beliefs without ever needing to rely on facts. Unlike traditional propaganda, which targets broad audiences, synthetic media allows for personalized disinformation that feels intimately relevant to each individual.

This year, the influence of deepfakes on the election may have been limited. But as AI technology advances, the threat it poses to democratic resilience will only grow.  

In places like South Asia, deepfakes have already played a significant role, with AI-generated speeches, images and news coverage influencing elections. In the U.S., deepfakes were used to mislead voters, most notably through robocalls carrying fabricated audio of President Biden. While regulatory agencies like the Federal Communications Commission have stepped in, the issue goes beyond enforcement; it’s about societal resilience to these distorted realities.

Some countries have started to prioritize “cognitive defense” as a form of national security. Finland, for example, includes critical thinking and narrative analysis in its school curriculum, teaching students to question the narratives they encounter. Estonia and Taiwan have also embraced cognitive defense, treating resilience to disinformation as an essential aspect of national security. These countries understand that detecting deepfakes alone is insufficient; people need the tools to resist emotionally appealing but false narratives. 

So where does this leave us?  

Recognizing our own vulnerability to compelling but artificial narratives is a crucial first step. We need to cultivate a willingness to pause when a story aligns too perfectly with our views or stirs our strongest emotions. Embracing complex and sometimes uncomfortable truths — stories without neat heroes and villains — is essential to building resilience in a world increasingly filled with synthetic content. 

This is not just about individual responsibility but about fostering a collective cultural shift toward questioning easy narratives and embracing complexity. Truth, after all, is rarely black and white; it often comes in shades of gray. As we navigate the age of AI, we must resist the allure of simplified, emotionally gratifying “realities” over the complexities of actual truth.

The real crisis isn’t just that we can’t detect deepfakes — it’s that we’re growing indifferent to whether something is real or artificial. The viral Hurricane Helene image symbolizes the danger of our increasing preference for comforting lies over complex truths. In a world where synthetic media is only becoming harder to distinguish from reality, our ability — and willingness — to question these narratives may be the only safeguard against a future where truth becomes little more than a matter of personal preference. 

Perry Carpenter is author of “FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions” (Wiley, 2024). 
