
ChatGPT 2024: AI is better than no intelligence

The Microsoft Bing logo and the website’s page are shown in this photo taken in New York on Tuesday, Feb. 7, 2023. Microsoft is fusing ChatGPT-like technology into its search engine Bing, transforming an internet service that now trails far behind Google into a new way of communicating with artificial intelligence. (AP Photo/Richard Drew)

“The use of AI language models like ChatGPT in political campaigns can raise a number of ethical concerns.” 

At least that’s what ChatGPT told Jessica Nix, a former student of mine at George Washington University, when she asked it to answer a question about the ethics of using ChatGPT in political campaigns. ChatGPT told Jessica, now a graduate journalism student at Columbia University, that “the technology could be used to spread false information or propaganda to influence voters.” It also pointed out that it could “raise questions about the use of personal data and privacy” and that a campaign with access to more advanced AI could have an unfair competitive advantage. The bot is on to something.

AI bots are already working in public relations. One service called GoCharlie.AI “helps entrepreneurs and enterprises create content that performs.” It writes blog posts, press releases, social media content, and more. Other companies in the space include ContentBot.ai, Jasper, and Anyword, to name only a few.

Bots are coming for political campaigns. In writing about ChatGPT in Campaigns and Elections (an online trade publication for the political consulting industry), consultant Matthew “Mudcat” Arnold said it “wasn’t unlike working with an entry-level writer, but one who was blazingly fast.” Similarly, digital campaign veteran Colin Delany tried ChatGPT and wrote that the content it produced is “about the level of a brand-new staffer’s first draft…” Neither Arnold nor Delany said they were using AI for their clients — but if they aren’t, I am certain others are. 

Efficacy (and job security for entry-level campaign staff) aside, AI raises important ethical questions. 

The first challenge raised by ChatGPT, about ChatGPT, is that content-generating tools make it free and easy to flood social media and reporters’ inboxes with false and misleading messages. Rhetorical nonsense has always been part of American politics; in 1800, the president of Yale declared that if Thomas Jefferson were elected president “our wives and daughters would be subjected to legal prostitution.” But just as social media made it easier to generate and spread political nonsense, AI can serve as a disinformation accelerant. It could make the already bad worse.

Because garbage is easier to generate and spread, it is also easier to dismiss disparaging, and true, claims as garbage. Rather than living with the political consequences of wrongdoing, the accused could say a bot made up the news. A public already drowning in AI-generated messages might be inclined to believe the denial.

Not flagged by ChatGPT is the ethics of a closed loop in American politics. Bots could write press releases and send them to bots at news outlets, which would turn them into content read by actual people — as well as by other bots that turn that content into yet more bot-generated and bot-consumed content. The result risks a politics devoid of the populace. A candidate could feed ideas into an app, and sometime later voters would show up at the polls having consumed campaign and news content largely generated by GoCharlie.AI’s cartoon dog mascot. Entire campaigns could be run and reported on by a smartphone.

It may not all be the stuff of William Gibson nightmares. Free, or close to free, compelling content can drive down the cost of campaigns, and less expensive campaigns mean more people can afford to run for office. Cheaper campaigns also mean less time fundraising and more time talking to voters, and campaigning itself becomes far less time consuming, opening the door to people who can’t afford to quit their jobs to campaign full time. A result could be policymakers who better reflect the breadth of America.

Of course, most new technology is promoted as finally delivering endless rainbows and puppies, or derided as finally bringing about the end times. The reality is usually somewhat more complicated.

It’s easy to joke about the absence of any intelligence — artificial or actual — in politics, or to quip that ethics in politics is an oxymoron. But the ethics of AI in our campaigns is something we need to take seriously. I agree with ChatGPT’s conclusion: “the ethical implications of using AI language models in political campaigns will depend on how the technology is used and the specific context in which it is applied. It is important for political campaigns to be transparent about their use of AI language models and to use the technology in ways that are ethical and responsible.”

You can read ChatGPT’s complete answer at https://ethicsinpoliticalcommunication.org/recent-blog  

Peter Loge is director of the Project on Ethics in Political Communication and associate professor at the School of Media and Public Affairs at The George Washington University. 

