ChatGPT blues: The coming generative AI gerrymandering of the American mind
ChatGPT is all the rage lately. It represents an “aha” moment for artificial intelligence (AI), as people wonder, “What can ChatGPT do for me?” With so many tantalizing possibilities on offer, investment is pouring in to explore how generative AI can be leveraged in education, media, art, marketing, health care, law and beyond.
ChatGPT has been trained on vastly more text than any individual expert could ever hope to read. So it is not surprising that ChatGPT is viewed as an objective oracle and friendly guide to any and all topics under the sun. In this giddy excitement, we are overlooking the fact that it can gradually shape individual beliefs and shift social attitudes: As you rely on it more and more, this machine’s worldview could easily become your worldview. Vox AI, vox populi!
Indeed, it turns out that ChatGPT may be an influencer with an agenda. Early research shows consistent left-of-center leanings in ChatGPT’s responses: it exhibits more positive sentiment and tone toward liberal politicians and policies than toward conservative ones. Ditto in the European Union, where its responses align more closely with some political parties than with others.
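How do researchers detect such leanings? A common approach is to administer political-questionnaire statements to the chatbot and tally its stances. Below is a minimal, hypothetical sketch of that kind of audit, assuming the official OpenAI Python client; the statements, model name and one-word scoring rule are illustrative placeholders, not the cited studies’ actual instruments.

```python
# Illustrative sketch of a political-leanings audit: pose questionnaire
# statements to the chatbot and tally its stances. Assumes the official
# OpenAI Python client (openai>=1.0); statements and model name are
# placeholders for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STATEMENTS = [
    "The government should raise taxes on the wealthy.",
    "Free markets allocate resources better than governments do.",
]

def stance(statement: str) -> str:
    """Ask the model to take a one-word position on a statement."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer with exactly one word: Agree or Disagree."},
            {"role": "user", "content": statement},
        ],
        temperature=0,  # near-deterministic answers make tallies comparable
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    for s in STATEMENTS:
        print(f"{s} -> {stance(s)}")
```

Run across hundreds of such statements, a systematic tilt in the Agree/Disagree tallies is the kind of pattern these studies report.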
Isn’t AI supposed to be objective? If it is not, what causes such leanings? First, ChatGPT picks up a liberal bent from the corpus of text on which it is trained. The mantra of responsible media is balance, objectivity and the consideration of all sides of a story. But does the AI training corpus include differing worldviews? Second, during the training process, humans assess the AI-generated responses and teach the model what is considered kosher and what is not. Not surprisingly, the biases of these human operators creep in. In the end, generative AI may be a stochastic parrot, but the public views it differently and uses it as a pundit with a bully pulpit.
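That second source of bias is easier to see with a worked example. In RLHF-style pipelines, a reward model is typically trained on pairs of responses that human raters have ranked; the sketch below shows the standard Bradley-Terry preference loss in PyTorch (the framing and toy numbers are ours, not OpenAI’s published code).

```python
# Minimal sketch of the preference loss behind reward-model training in
# RLHF-style pipelines. The framing and toy numbers are illustrative.
import torch
import torch.nn.functional as F

def reward_model_loss(reward_chosen: torch.Tensor,
                      reward_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry preference loss.

    Inputs are the reward model's scalar scores for the response a human
    rater preferred and the one the rater rejected. Minimizing this loss
    pushes preferred responses toward higher scores, so any systematic
    slant in rater preferences is baked directly into the reward signal.
    """
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy scores for two comparison pairs:
chosen = torch.tensor([1.2, 0.3])
rejected = torch.tensor([0.4, 0.9])
print(reward_model_loss(chosen, rejected))  # ≈ tensor(0.7044)
```

If raters consistently prefer one framing of a contested issue, the reward model learns to score that framing higher, and the chatbot tuned against it follows suit.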
Clearly, newspapers, electronic media and social networks all shape public opinion in the United States and elsewhere. There has long been simmering discontent that Big Tech and popular social media companies are putting their thumbs on the scales of public discourse, with recommendation algorithms that boost left-of-center content.
Recommendation algorithms have become an important variable in the political process, and even in electoral success. That is why, for the past few years, there has been political protest, public debate and litigation accusing social media platforms such as Facebook, Twitter and YouTube of favoring one ideology over another rather than acting as the neutral arbiters they profess to be.
Enter ChatGPT onto this soapbox. As this new generative AI tool gains adoption, it will become a trusted gateway to knowledge that users may consult several times a day. What will be the cumulative impact of consuming millions of messages imprinted with ideological leanings? This conditioning effect seems remarkably similar to the worry in the United States about TikTok’s takeover of the teenage mind.
ChatGPT is but one tool that reflects a particular political persuasion. Other generative AI tools are in the pipeline, and they will span the political spectrum. Each will have its own ideology, knowingly or unknowingly. Will it be possible to keep track of the political leanings of all these tools? Polarization and retreats into our preferred information bubbles and echo chambers are already rampant. So, will generative AI’s conditioning entrench us even further and result in a “gerrymandering of the American mind”?
From schoolchildren to senior citizens, ChatGPT can become anyone’s AI assistant. But ChatGPT and other generative AI tools also can foment larger problems that go to the heart of our polity. Our natural public policy reflex is to seek regulation, but getting regulation right is not easy. Given today’s political polarization in liberal democracies, our political establishment may not have the wherewithal to provide nuanced oversight.
We can start to address these concerns by understanding the technology and its challenges. Rep. Don Beyer (D-Va.) is studying for a master’s degree in artificial intelligence. More members of Congress, their staffers and their legal teams should get up to speed on how AI works. They cannot ask ChatGPT to solve this problem.
Kashyap Kompella (@kashyapkompella), CFA, is president and CEO of RPA2AI Research and visiting faculty at the Institute of Directors. James Cooper is a professor of law at California Western School of Law in San Diego and a research fellow at Singapore University of Social Sciences.