Lawmakers struggle to recognize AI-generated emails, study finds


Since OpenAI launched its artificial intelligence (AI) platform, ChatGPT, in late 2022, the technology has raised concerns about how it could affect everyday life. A recent study from Cornell University addressed one of the latest of those concerns, suggesting AI could open new avenues for malicious actors to manipulate representative democracy.

Researchers at Cornell wanted to see if it would be possible to influence lawmakers by using AI to generate fake constituent emails. Their study, published March 20 in the journal New Media & Society, found that lawmakers struggle to tell the difference between emails written by humans and those written by artificial intelligence.

Such a tactic could potentially be used to sway public policy, said Sarah Kreps, one of the study’s authors.

“We know that there are a bunch of studies that show that letters and emails are influential in setting the agenda,” Kreps said.

The researchers had undergraduate students write emails on various policy issues. They then used the AI language model GPT-3, which uses machine learning to generate text in response to prompts, to generate emails on the same topics.

The researchers sent both the student-written and the AI-generated emails to around 7,000 state legislators. They measured the response rate to judge how well the lawmakers could recognize that an email was written by AI.

“We know from a lot of different studies that response rates are an important indication of legislative priority because they have limited time, and so they’re not going to respond to things that they can tell are spam, or they can tell are disingenuous,” Kreps said.

Lawmakers responded to the AI-generated emails at a rate only 2 percentage points lower than to the human-written ones, a difference the researchers described as statistically significant but substantively small.
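With a sample of roughly 7,000 legislators, even a 2-point gap can clear the bar for statistical significance. The sketch below illustrates why, using a standard two-proportion z-test; the specific response rates (17 percent vs. 15 percent) and the even split between conditions are illustrative assumptions, not figures from the study.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-proportion z-test: how many standard errors apart
    are two observed response rates?"""
    # Pooled response rate under the null hypothesis of no difference
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical illustration: half of ~7,000 emails human-written,
# half AI-generated, with a 2-point gap in response rates.
z = two_proportion_z(0.17, 3500, 0.15, 3500)
print(round(z, 2))  # 2.28 — above the ~1.96 threshold for the 5% level
```

The point is that large samples make small effects detectable: the gap is real in a statistical sense while remaining modest in practical terms, which is the researchers' framing.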

Kreps discussed the study’s findings at a May 19 meeting of the President’s Council of Advisors on Science and Technology. She said AI allows people to easily generate unique pieces of text. To illustrate why this would be a powerful tool for bad actors, she pointed to a 2017 Pew Research Center study that found only 6 percent of regulatory comments to the Federal Communications Commission on net neutrality were unique.

The vast majority of comments were submitted multiple times, in some cases hundreds of thousands of times. Pew said this was “clear evidence of organized campaigns to flood the comments with repeated messages,” finding evidence that at least some share of the comments were from automated bot campaigns.
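The uniqueness measure Pew relied on is simple to compute: count the share of submissions whose text appears exactly once in the corpus. A minimal sketch, using a toy corpus invented for illustration (not Pew's data):

```python
from collections import Counter

def unique_share(comments):
    """Fraction of submissions whose exact text appears only once —
    duplicated form letters drag this number down."""
    counts = Counter(comments)
    singles = sum(1 for c in comments if counts[c] == 1)
    return singles / len(comments)

# Toy corpus: one form letter submitted 94 times, plus 6 originals.
corpus = (["Please protect net neutrality."] * 94
          + ["My own view, in my own words."]
          + [f"Original comment {i}" for i in range(5)])
print(unique_share(corpus))  # 0.06 — a 6% unique share, echoing Pew's finding
```

As Kreps notes below, language models undermine exactly this kind of duplicate detection, since every generated message can be textually distinct.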

Kreps said AI could eliminate some of the signs betraying an automated message, making it harder to tell the difference and appropriately allocate time and energy toward messages from real people.

With AI language models, Kreps said, “you wouldn’t have those telltale signs because every message could be different.”

In an interview, Kreps said AI could also be used by foreign actors to craft messages in fluent English. Currently, she said, messages can contain subtle nuances that can tip the reader off that the message wasn’t written by a native speaker, but artificial intelligence would eliminate these and therefore make it more difficult to recognize foreign influence campaigns. 

Although Kreps said she isn’t aware of any instances of fake constituent emails being sent by malicious actors, she said it’s important to keep an eye on threats that could potentially pose risks to democracy in the future.

“We have a tendency of fighting the last war,” she said. “You want to understand the potential array of risks, not just the ones that you see, but the ones that might be on the horizon.”

Kreps suggested some ways to guard against this problem. She said lawmakers could potentially use computer programs to recognize AI-generated emails, and constituents could also rely on more direct forms of communication with lawmakers like phone calls and town halls.

Marci Harris, executive director of the POPVOX Foundation, which helps members of Congress and their staff understand new technologies, said the problem of AI-generated emails presents an opportunity to fix what she calls an already broken system of communicating with lawmakers.

Currently, she said, emails to lawmakers largely consist of mass campaigns by advocacy organizations in which people click a button to send a pre-written message to their representatives.

Harris said this isn’t high-quality communication between constituents and lawmakers.

“It’s just kind of a low quality interaction for both participants in that old system and a whole lot of staff time dedicated to keeping those gears turning,” she said.

Harris said AI presents an opportunity to rethink how lawmakers and constituents communicate with each other, and could even be used to improve that communication.

Harris explained that AI could potentially be used to help constituents draft more informed messages and to more efficiently summarize the multitude of messages lawmakers receive daily.

“I agree that there are reasons that these new technologies will create difficulties for the old system,” she said. “But I am very optimistic about how they can be used for better systems going forward.”

