A global watermarking standard could help safeguard elections in the ChatGPT era
The new year will be critical for the principle of representative democracy. More than two billion people across more than 50 countries, including the U.S., will vote in a record-breaking number of democratic elections.
These elections come on the heels of OpenAI’s release of GPT-4, which represents a significant leap forward in large language model generative AI capability. Generative AI absorbs raw data and learns to generate realistic, high-quality, and probable outputs in response to prompts.
Similarly, large language models use deep learning techniques and massive amounts of text to predict, for example, the next word or series of words in a sentence, and to produce original content in response to prompts. To prevent disinformation from eroding democratic values worldwide, the U.S. must establish a global watermarking standard for text-based AI-generated content.
Large language models now feature more nuanced responses to prompts, greater creativity, and enhanced understanding. These advances make it possible to weaponize disinformation on a massive scale to influence these elections. Groups or states intent on sowing confusion or fooling voters can pump out vast amounts of convincing text or unleash millions of chatbots that engage social media users persuasively.
During the 2019 Indian general elections, both of the main parties — the governing party of Prime Minister Narendra Modi and the opposition — deployed AI-powered chatbots on the messaging platform WhatsApp to influence India’s 900 million eligible voters. These chatbots often tailored their text to appeal to specific ethnic and social groups. This was before the dawn of ChatGPT and advanced large language models. India is among the countries holding general elections in this new year.
India’s 2019 elections underscore the sophisticated nature of AI-driven disinformation campaigns. Such efforts are widespread and highly personalized, exploiting societal divisions and amplifying existing tensions. The capability to generate massive amounts of hyper-customized content which appears indistinguishable from human-generated text poses a significant threat to the integrity of the democratic process.
President Biden’s October executive order on AI regulation demands watermarking of AI-derived video and imagery but offers no standard. The Chinese government goes further by establishing a national standard watermark process required for all AI-derived visual content. Neither country, however, has addressed text-based content. The European Union AI Act, announced earlier this month, offers no watermarking requirement.
Text-based AI poses the greatest disinformation danger to elections, because it can respond in real time, creating the illusion of a genuine social media exchange. Chatbots armed with large language models trained on reams of data represent a catastrophic risk to the integrity of elections and democratic norms.
Watermarking text-based AI content involves embedding unique, identifiable information — a digital signature documenting the AI model used and the generation date — into the metadata of generated text to indicate its artificial origin. Detecting this digital signature requires specialized software, which, when integrated into platforms where AI-generated text is common, enables the automatic identification and flagging of such content.
This process gets complicated when AI-generated text is lightly edited by the user. For example, a high school student may make minor modifications to a homework essay created through ChatGPT, and those modifications may strip the digital signature from the document. That scenario, however, is not of great concern in the most troubling cases, where chatbots are unleashed in massive numbers to accomplish their programmed tasks: disinformation campaigns require such a large volume of output that editing each piece after generation is not feasible.
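The embed-and-verify process described above can be sketched in a few lines. This is an illustrative toy only, not any existing standard: the key, function names, and HMAC scheme are assumptions for the sake of the example (a deployed standard would more likely use public-key signatures, so anyone could verify a watermark but only the model provider could create one). Note how any edit to the text, as in the student example, invalidates the signature.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical signing key for illustration; a real standard would use
# asymmetric cryptography rather than a shared secret.
SECRET_KEY = b"example-standards-body-key"

def watermark(text: str, model: str) -> dict:
    """Wrap generated text in metadata carrying a digital signature."""
    meta = {
        "model": model,
        "generated": datetime.now(timezone.utc).date().isoformat(),
    }
    # Sign both the metadata and the text so neither can be altered.
    payload = json.dumps(meta, sort_keys=True).encode() + text.encode()
    meta["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"text": text, "metadata": meta}

def is_ai_generated(doc: dict) -> bool:
    """Verify the signature; any edit to the text or metadata breaks it."""
    meta = dict(doc["metadata"])
    claimed = meta.pop("signature", "")
    payload = json.dumps(meta, sort_keys=True).encode() + doc["text"].encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)
```

A platform's detection software would run a check like `is_ai_generated(doc)` on incoming content and flag anything that carries a valid signature; text whose signature has been stripped or whose content has been edited would fail verification, which is exactly the limitation the paragraph above describes.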
The U.S. should create a standard digital signature for text, then partner with the EU and China to lead the world in adopting this standard.
Once such a global standard is established, the next step will follow — social media platforms adopting the metadata recognition software and publicly flagging AI-generated text. Social media giants are sure to respond to international pressure on this issue.
The call for a global watermarking standard must navigate diverse international perspectives and regulatory frameworks. For instance, Brazil and Indonesia, two countries with vast AI capabilities and a recent history of contentious elections, may see this initiative as critical to safeguarding democratic processes. However, nations with stringent controls over information dissemination, such as Russia or Iran, are certain to view such a standard as an infringement on their sovereign control of digital spaces.
Meanwhile, smaller democracies such as New Zealand and Norway, known for their proactive approaches to digital ethics, could emerge as early adopters, advocating for the standard’s adoption in international forums.
In Africa, where mobile technology is rapidly transforming social and political landscapes, responses could vary widely. Tech-forward nations such as Kenya might embrace these standards to bolster their growing digital economies and democratic institutions, while others might be cautious, weighing the benefits against the potential for external influence over their internal affairs. So this effort requires a nuanced approach, respectful of national sovereignty, that promotes a unified front against the perils of AI-generated disinformation in electoral processes.
A global standard for watermarking AI-generated text ahead of 2024’s elections is ambitious — an undertaking that encompasses diplomatic and legislative complexities as well as technical challenges. A foundational step would involve the U.S. publicly accepting and advocating for a standard of marking and detection. This must be followed by a global campaign to raise awareness about the implications of AI-generated disinformation, involving educational initiatives and collaborations with the giant tech companies and social media platforms.
In 2024, generative AI and democratic elections are set to collide. Establishing a global watermarking standard for text-based generative AI content represents a commitment to upholding the integrity of democratic institutions. The U.S. has the opportunity to lead this initiative, setting a precedent for responsible AI use worldwide.
The successful implementation of such a standard, coupled with the adoption of detection technologies by social media platforms, would represent a significant stride towards preserving the authenticity and trustworthiness of democratic norms.
Joe Buccino is a retired U.S. Army colonel who serves as an AI research analyst with the U.S. Department of Defense's Defense Innovation Board. He served as U.S. Central Command communications director from 2021 until September 2023. His views do not necessarily reflect those of the U.S. Department of Defense or any other organization.
Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.