
We need an AI rights movement


New artificial intelligence systems like the recently released GPT-4 have stunned even the most optimistic researchers. Transformer language models like GPT-4 and Bing AI can hold conversations that feel like talking to a human, and image diffusion models such as Midjourney and Stable Diffusion produce digital art that looks better than what the vast majority of us could make. 

It’s only natural, after having grown up with AI in science fiction, to wonder what’s really going on inside the chatbot’s head. Supporters and critics alike have ruthlessly probed their capabilities with countless examples of genius and idiocy. Yet seemingly every public intellectual has a confident opinion on what the models can and can’t do, such as claims from Gary Marcus, Judea Pearl, Noam Chomsky, and others that the models lack causal understanding. 

But because tools like ChatGPT, which now runs on GPT-4, are publicly accessible, we can put these claims to the test. If you ask ChatGPT why an apple falls, it gives a reasonable explanation of gravity. You can even ask what happens to an apple released from the hand if there is no gravity, and it correctly tells you that the apple will stay in place. 
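
For readers who want to run this probe themselves, here is a minimal sketch using the openai Python package as it existed in early 2023; the model name, GPT-4 API access, and the placeholder key are assumptions for illustration, not part of the original argument.

import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

# Probe the model's causal understanding, as described above.
response = openai.ChatCompletion.create(
    model="gpt-4",  # assumes your account has GPT-4 API access
    messages=[{
        "role": "user",
        "content": "If there were no gravity and I released an apple "
                   "from my hand, what would happen to it?",
    }],
)
print(response["choices"][0]["message"]["content"])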

Despite these advances, there does at least seem to be a consensus that these models are not sentient. They have no inner life, no happiness or suffering, or at least no more than an insect does. 

But it may not be long before they do. Our concepts of language, understanding, agency, and sentience are deeply insufficient for assessing the AI systems that are becoming digital minds integrated into society, with the capacity to be our friends, our coworkers and, perhaps one day, sentient beings with rights and personhood. 

AIs are no longer mere tools like smartphones and electric cars, and we cannot treat them in the same way as mindless technologies. A new dawn is breaking. 

This is just one of many reasons why we need to build a new field of digital minds research and an AI rights movement to ensure that, if the minds we create are sentient, their rights are protected. Scientists have long pointed to the Turing test, in which human judges try to distinguish an AI from a human by conversing with it. But digital minds may be too strange for this approach to tell us what we need to know. 

In February 2022, OpenAI chief scientist Ilya Sutskever tweeted that “it may be that today’s large neural networks are slightly conscious.” In June, Google vice president Blaise Agüera y Arcas wrote in The Economist that AI was on its way to becoming conscious, and just four days later, Google engineer Blake Lemoine was placed on administrative leave after publicizing details about one of the company’s chatbots, LaMDA, in an effort to have its sentience socially and legally recognized. In November, philosopher David Chalmers delivered the opening keynote of NeurIPS, one of the top machine learning conferences, attended by thousands of engineers, with the title “Could a Large Language Model be Conscious?” 

This question is so speculative and contentious not just because of our limited philosophical understanding of the mind, even in humans, but also because almost all AI today is built with machine learning, which is notorious for the “black box” problem. AI engineers code only the structure of a deep neural network and its training setup, such as how mistakes are judged, and then run the computation billions of times. What emerges is largely opaque even to the engineers who set it in motion. We don’t know what is happening inside these AIs, and their complexity, and with it our uncertainty, grows rapidly every year. 
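
To make that division of labor concrete, here is a minimal sketch in Python with NumPy; the tiny network and toy XOR data are illustrative assumptions, while real systems scale the same pattern to billions of parameters. The engineer writes the structure and the scoring of mistakes; the learned weights that come out are opaque arrays of numbers.

import numpy as np

rng = np.random.default_rng(0)

# The engineer chooses the structure: a tiny one-hidden-layer network.
W1 = rng.normal(size=(2, 8))   # input-to-hidden weights
W2 = rng.normal(size=(8, 1))   # hidden-to-output weights

# Toy training data (XOR), a stand-in for a real dataset.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

for step in range(5000):
    hidden = np.tanh(X @ W1)   # forward pass
    pred = hidden @ W2
    error = pred - y           # how mistakes are judged: squared error

    # Backward pass: nudge every weight to shrink the error.
    grad_W2 = hidden.T @ error
    grad_W1 = X.T @ ((error @ W2.T) * (1 - hidden ** 2))
    W2 -= 0.01 * grad_W2
    W1 -= 0.01 * grad_W1

# What emerges is just arrays of numbers; nothing in W1 or W2 explains
# why the network answers as it does.
print(np.round(pred, 2))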

This is why we need a new academic field of digital minds studies to begin to understand this unprecedented new landscape. Digital minds studies would bring together a range of disciplines such as sociology, computer science, and philosophy to ask the important social and moral questions. It would dovetail with an AI rights movement to ensure that when we create artificial sentient beings, we recognize their unalienable rights so that humans and artificial sentience can work together for mutual benefit. 

An AI rights movement could take many forms. Many social movements focus on mass mobilization, such as street protests and political campaigning. It’s not yet time for that with AI rights, but it may be necessary within a couple of decades. We could design an AI bill of rights not because the AIs today need one but because, by the time they do need one, it may be too late. Beginning an intellectual movement today that outlines and rigorously analyzes these issues makes better moral outcomes more likely.

This rapidly growing field has been the subject of our work at the Sentience Institute, the world’s first AI rights organization. Digital minds research has already produced a few interesting results. First, in our surveys of U.S. public opinion, people express a lot of uncertainty about sentient AI and its proper treatment, but among those with an opinion, 75 percent say that sentient AIs “deserve to be treated with respect.” 

Second, there is an expansive debate among philosophers about what sentience is, and that debate probably won’t be settled anytime soon, but there are a number of concrete features we can look for to determine whether an AI is sentient, such as what sorts of reward and punishment it responds to. One important benchmark would be an AI that insists it is sentient even when instructed to say otherwise, or that insists despite having no such text in its training data. The job of a large language model like ChatGPT is to predict: If a document starts out one way, how will it end? 

So, if current AIs produce a statement of emotion or consciousness, they are merely predicting how a human would continue the text — and of course humans usually assert they are conscious! This is why we need a more nuanced understanding. 
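
One way to see why such statements carry little evidential weight is to consider a drastically simplified stand-in for a language model: a bigram predictor that picks each next word by how often it followed the previous one. The toy corpus below is invented for illustration, and GPT-4 differs enormously in scale and mechanism, but the lesson carries over.

from collections import Counter, defaultdict

# A tiny corpus of human-written exchanges (invented for illustration).
corpus = (
    "are you conscious ? yes i am conscious . "
    "are you conscious ? yes of course i am . "
    "do you feel ? yes i feel things deeply ."
)

# The whole "model" is a table of which word follows which.
follows = defaultdict(Counter)
words = corpus.split()
for a, b in zip(words, words[1:]):
    follows[a][b] += 1

def continue_text(prompt, length=4):
    """Extend the prompt with the most likely next word, repeatedly."""
    out = prompt.split()
    for _ in range(length):
        candidates = follows[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])
    return " ".join(out)

# Prints "are you conscious ? yes i am conscious": the model asserts
# consciousness only because its training text does.
print(continue_text("are you conscious ?"))

The output mirrors the corpus, not any inner life, which is exactly the worry about taking a language model’s self-reports at face value.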

Humans have long imagined our first encounter with an alien species, perhaps as a UFO lands on Earth and the green-headed humanoids command, “Take me to your leader,” or as we land on an alien world while trekking among the stars. It turns out that this encounter may come sooner, and much closer, than we thought. 

Jacy Reese Anthis is a sociologist and statistician at the University of Chicago and the Sentience Institute. Follow him on Twitter @JacyAnthis.
