
A simple solution to regulate AI


Executives from the leading artificial intelligence (AI) companies, like Sam Altman of OpenAI and Brad Smith of Microsoft, are begging Congress to enact laws to regulate their industries. And Congress is responding, with Senate Majority Leader Chuck Schumer (D-N.Y.) releasing his “framework” for regulating AI just last week. We might even get a new agency, not unlike the Environmental Protection Agency (EPA) or the Food and Drug Administration (FDA).

These executives may be genuinely concerned about the profound dangers of AI. But as a law professor, I have seen this dynamic before, and I worry that these executives may write laws that actually leave us less safe. They could do so, accidentally or intentionally, by eviscerating the flexible and robust legal structure that we already have on the books, replacing it with an unimaginative and corruptible bureaucracy.

It is hard for Congress or a new agency to pre-specify all the ways that AI could go awry. But that is OK. I am reminded of the industrial-age transition from horses to trains and then automobiles, a revolution that also changed the way the law manages risk at an industrial scale. The older common law was formulaic: Liability arose only when a wrongdoer intended harm, or in a narrow class of very direct, peculiar accidents.

But what if a train emitted a spark that traveled downwind, burning one house, which then ignited many more? That harm is neither intended nor direct. Courts developed a flexible set of laws that require everyone who creates risks to take reasonable precautions to protect against foreseeable harms. This is the law that got asbestos out of our products, got cars to stop exploding on impact and eventually brought the tobacco industry to heel. It is one reason why Americans now smoke cigarettes at less than half the rate of Europeans.

One might be surprised to realize that this same law applies squarely to AI. Suppose an adolescent is considering suicide and asks Microsoft’s AI chatbot for advice about how to do it. Surely Microsoft does not intend its product to be used this way. But is the risk foreseeable? Of course.

The beauty of negligence law is its flexibility; it puts the onus on companies to recognize those risks and to fix them, long before anyone is injured. In contrast, given the almost limitless number of ways that AI could cause harm, I worry that legislators or regulators will be slow and unimaginative.

Tort law is also powerful for a second reason: Its institutions are robust against corruption. The Seventh Amendment to the U.S. Constitution guarantees that these cases can be tried by a jury, randomly selected from the population. Tort law also relies on plaintiffs' lawyers, who are paid contingent fees to find cases of AI abuse and win them.

In contrast, the corporate titans of the AI industry are already cozy with Congress and the White House. (For example, Microsoft, the maker of the Bing AI product, contributed to 66 members of the U.S. Senate in 2022.) Accordingly, I worry that the new rules will align with the interests of the big corporations while also raising barriers to entry for the most innovative newcomers.

Likewise, just as with the EPA and FDA, any new AI regulatory agency will be staffed by well-intentioned experts, who are likely to be underpaid and know that on the other side of the revolving door is a better-paid job with the regulated industry, if they play nice. (In fact, of Microsoft’s lobbyists in 2022, 79 percent previously held government jobs.)

Corporations have long understood these dynamics, so they often ask Congress to eliminate state tort law, using a tool called preemption. Congress has preempted safety laws in a range of areas, including medical devices, social media, computer glitches, drinking water, automobiles and guns.

Still, Congress can act more carefully, both creating a federal regulatory system and allowing tort law to continue its important work. For example, the FDA requires companies to list all known risks of their products, but if decades later a drugmaker learns of another risk and fails to update its label, it can be liable to injured patients in court. This dual system is necessary because the FDA is unable to monitor emerging risks for the thousands of products it approved in years past. State tort law puts that burden on the companies, where it belongs.

This is what Congress must do for AI. Americans deserve to have well-functioning legal systems at both the state and federal levels, protected by both liability and regulation.

Christopher Robertson, JD, Ph.D., teaches torts and health law at Boston University. He is the co-editor of “The Future of Medical Device Regulation: Innovation and Protection” (2022).


