
Entering the age of artificial truth

People walk through an artificial intelligence (AI) exhibition called “Sex, Desire and Data,” depicting transformed pornographic images, at the Centre Phi in Montreal, Quebec, Canada, on August 8, 2023. (Photo by ANDREJ IVANOV/AFP via Getty Images)

Gary Marcus, cofounder of the Center for the Advancement of Trustworthy AI, has for years been highly critical of generative artificial intelligence and large language model applications like OpenAI’s ChatGPT. These programs consume vast quantities of data to perform various functions, from creating new cocktail recipes to sharing insights about the folding sequences of proteins.

Marcus recently wrote that there are “not one, but many, serious, unsolved problems at the core of generative AI.” He isn’t alone. During an interview earlier this month, theoretical physicist Michio Kaku dismissed AI chatbots as “glorified tape recorders” that are only a “warped mirror of what’s on the internet the last 20 years.”

Yet that hasn’t stopped popular culture, business blogs, and tech enthusiasts from contemplating their supposedly revolutionary implications. There are many unknowns about generative artificial intelligence and its role in American society, but one point is becoming clear: Open-source AI tools are turning the internet into an even murkier den of confusion.

One of Marcus’s chief concerns is that these models can create self-amplifying echo chambers of flawed or even fabricated information, both intentionally and unintentionally. Journalist Maggie Harrison and technology researcher Jathan Sadowski have each drawn attention to what the latter cleverly termed “Habsburg AI,” which appears when AI-generated information is fed back into another AI program on a loop. What results is a sort of information “inbreeding” that drives the AI mad, causing it to spew abominations of data. Yet even absent these conditions, human influence on the information filtering process creates opportunities for additional forms of distortion.
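For readers who want to see the mechanism rather than take it on faith, the sketch below simulates this feedback loop in miniature. It is a toy invented for illustration, not any real AI system: a crude statistical “model” is refit each generation only to samples produced by its predecessor, and the variety of the original data slowly disappears.

```python
import random
import statistics

# Toy illustration of "Habsburg AI" / model collapse: each generation,
# a crude statistical "model" (a fitted Gaussian) is trained only on
# samples generated by the previous generation's model.

random.seed(42)

# Generation 0: stand-in for varied, human-produced data.
data = [random.gauss(0.0, 1.0) for _ in range(1000)]

for generation in range(1, 11):
    # "Train" the model: estimate the mean and spread of the current data.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    print(f"gen {generation}: mean={mu:+.3f} stdev={sigma:.3f}")

    # The next generation sees only a finite sample of synthetic output,
    # so rare, tail-end variety is gradually lost.
    data = [random.gauss(mu, sigma) for _ in range(50)]

# Averaged over many runs, the spread tends to shrink while the mean
# drifts: the synthetic "corpus" grows narrower and stranger each cycle.
```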

Practices known as search-engine poisoning, keyword stuffing, or spamdexing involve artificially boosting the visibility of certain sites or products by manipulating a search engine’s indexing system. Unfortunately, AI can supercharge these manipulative schemes. But malicious intent aside, the sheer breadth of online data can lead programs to mistake volume for veracity. Take, for instance, something as simple as a famous quote.

“The society that separates its scholars from its warriors will have its thinking done by cowards and its fighting done by fools” is one of the most misattributed phrases on the internet. Google and Bing searches yield an avalanche of results giving credit for this wisdom to the fifth-century B.C. Athenian general Thucydides. Indeed, the quote is one of ChatGPT’s top three responses to the prompt “Thucydides quotes.”

Though he was a warrior and a scholar, Thucydides never wrote those words. The quote, transformed over the years through endless paraphrasing, is from a biography of British General Charles George Gordon written by Sir William Francis Butler and published in 1891. Ironically, another quote frequently misattributed to Thucydides is that “knowledge without understanding is useless.”

Yet according to the dominant search engines — increasingly popular sources of human knowledge — Thucydides did say those things. This is one example of an artificial historical fact. The problem might seem trivial when quoting an ancient Athenian, but what about when vetting U.S. foreign policy for historical context, responding to a rapidly evolving pandemic or trying to make sense of potentially cherry-picked declassified intelligence that could lead a nation to war?
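The mechanics are mundane enough to sketch in a few lines of code. What follows is a deliberately naive ranking function, invented here for illustration (real search engines weigh far richer signals), showing how any system that leans on raw repetition can be gamed until frequency starts to look like fact.

```python
from collections import Counter

def naive_score(query: str, page_text: str) -> int:
    """Toy relevance score: count occurrences of the query terms.
    Shown only to illustrate why raw frequency is easy to game."""
    words = Counter(page_text.lower().split())
    return sum(words[term] for term in query.lower().split())

# A substantive page mentions the topic a handful of times, in context.
honest_page = (
    "Thucydides wrote a history of the Peloponnesian War, "
    "analyzing the conflict between Athens and Sparta."
)

# A keyword-stuffed page simply repeats the query to inflate its score.
stuffed_page = "Thucydides quotes " * 50

query = "Thucydides quotes"
print(naive_score(query, honest_page))   # 1   -- low, despite relevance
print(naive_score(query, stuffed_page))  # 100 -- high, despite saying nothing
```

The stuffed page wins without saying anything true, which is all an artificial historical fact needs.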

Earlier this month, I published a study describing how disinformation made its way into trusted sources and shaped the consensus to invade Iraq in 2003. If available at the time, AI-powered news filters could have further reinforced that narrative and stifled or altogether silenced opposition. Such a predicament emerged during the COVID-19 pandemic and the 2020 presidential election, as social media platforms banned what they considered suspect reports that wound up being true. Society’s insatiable demand for rapid and continuous information access has also become a lucrative market that large language models are perfectly suited to exploit.

Questionable AI-authored literature now floods online bookstores, luring buyers with trending topics and promises of expertise on a budget. One error-riddled book about the recent fires in Maui appeared on Amazon within two days of the disaster. It had the same title as Michael Wolff’s wildly popular 2018 book on the Trump administration, “Fire and Fury.” The book was #1 in the natural disasters category before Amazon took it down, and this incident is far from isolated. 

If these practices are not curbed, they could produce a Tower of Babel effect by creating an online ecosystem of self-replicating fictions. Americans read fewer books, have less faith in the news, view higher education as less important and rely more than ever on TikTok for their news, all of which makes the modern world fertile ground for algorithmic manipulation. Making matters worse, traditional checks on specious information — such as expert knowledge, reputable publishing agencies and hard news sources — have lost much of their influence.

AI’s threat to society therefore looks less like James Cameron’s vision of a cyborg Armageddon and more like a hopelessly polluted information environment in which everything is disputed and meaningful communication is impossible. Advanced search tools can reinforce rather than refine the prejudices or policies of their human masters, hyper-accelerating the spread of human bias online.

AI itself poses less of a risk to humanity than malign actors who seek to abuse it or those who put misplaced faith in its role as a gatekeeper of human knowledge. If Washington and Silicon Valley wade into the age of artificial truth without a clear strategy for managing its risks, America could end up drowning in a sea of incoherence.

Capt. Michael P. Ferguson, U.S. Army, is a Ph.D. student in the Department of History at the University of North Carolina at Chapel Hill. He is coauthor of “The Military Legacy of Alexander the Great: Lessons for the Information Age.” The views expressed are those of the author and do not reflect the official policies or positions of the U.S. Army, U.S. Department of Defense or U.S. Government.

