
The inherent paradox of AI regulation 


Nary a day goes by without news of a new regulation of artificial intelligence (AI). This is not surprising: AI is widely touted as the most powerful new technology of the 21st century. But because there is no agreed-upon definition of AI, as I’ve previously written in these pages, and because the landscape is constantly growing and changing, many new regulations are steeped in contradiction.

Among such regulatory paradoxes is a tendency to regulate an activity only when it is conducted using AI, even when (from the end user’s perspective) the exact same human activity is unregulated. For example, impersonations and parodies of celebrities and politicians have been around since ancient times and are often considered acceptable commentary. And yet we may be moving toward an environment in which a human impersonator and an AI-generated impersonator who look exactly the same and do exactly the same thing are classified entirely differently for regulatory purposes.

The current chair of the Federal Trade Commission (FTC), Lina Khan, is a brilliant attorney who is attempting to address such paradoxes in the FTC’s emerging AI regulations. During a recent Carnegie Endowment program, I asked Khan how the FTC deals with the paradox of regulating some AI activities when the exact same human activities might not be regulated. She replied that the commission’s focus is the opposite: ensuring “that using AI doesn’t give you some type of free pass.”

Even at this early stage, we can see that large language generative AI, or LLGAI (“large language” because computers scoop up millions of data points from the internet almost instantly, and “generative” because computers then generate complex instructions, text, images, video, audio and more), can revolutionize such things as medicine and science for good — or fuel fraud and conflict for harm. It is the risk of harm to people that understandably draws the attention of regulators. It is not difficult to foresee malicious yet perfect LLGAI impersonations (“deepfakes”) of, say, a government official deceiving entire societies. Frightening but realistic scenarios like these have led governments to regulate consumer LLGAI before a disaster occurs.

But there is an often-overlooked problem here: because we are at such an early stage of consumer LLGAI, any regulations made now will be based on what little we know today. And with such a rapidly evolving technology, what makes perfect sense in 2024 may be irrelevant — or counterproductive — by 2029.  

This is not the first time disruptive consumer technologies have offered mass promise and peril. Between 1915 and 1930, gasoline-powered automobiles in the U.S. grew from an exotic curiosity, with around 2 million on the road, to a common means of transportation, with around 23 million, reorganizing both work and leisure. Many governments understandably set auto regulations in the 1910s. Those regulations made sense on the day they were written, but the results show how difficult it is to regulate a fast-changing, revolutionary technology in its infancy. In 1915, a major regulatory issue was how automobiles should interact with horses and buggies; what could those 1915 regulators have said about parking lots, gas stations and passing lanes?

We need not go back a century to understand how difficult it is to regulate revolutionary consumer technologies in their infancy. During the early to mid-1990s, the internet was renamed and opened to the public. It quickly became clear that the technology was revolutionary, permitting global communication among individuals and groups using text, images, video, voice and more. Fearing widespread pornography or worse, 1990s regulators began to establish rules for the internet as they then understood it.

But almost everyone’s vision of the future internet was wrong.  

To oversimplify, most of us thought that the internet of 2024 would look much like the internet of 1995, only on a larger scale. As a result, few of the internet regulations devised during the 1990s would be written the same way today, and many suffer from unintended consequences or significant omissions.

But the choice should not, and need not, be between doing nothing in the 2020s to regulate consumer LLGAI and imposing permanent AI regulations based on what little we know today. The most important lesson of the 1910s efforts to regulate the auto industry and the 1990s efforts to regulate the internet is that lawmakers and regulators should have enough humility and wisdom to recognize that today’s regulations could well be meaningless or even counterproductive in a few years. The solution is for lawmakers and regulators to sunset their AI rules and regulations every few years, and to revise them continually as conditions change.

AI regulations must be reviewed, updated and rewritten on an ongoing basis; sunsetting them ensures that any rules developed during AI’s infancy may or may not be — but are not expected to be — permanent. Even a cursory look at consumer LLGAI shows that nothing whatsoever about it is stable.

Roger Cochetti has served as a senior executive with COMSAT, IBM, VeriSign and CompTIA. A former U.S. government official, he has helped found a number of nonprofits in the tech sector and is the author of textbooks on the history of satellite communications.  
