There’s been a great deal of talk in Congress about regulating AI, but most of the legislative action is happening at the state level. If America is to lead the AI charge and create a stable environment for this tech to reach its enormous potential, Congress must think about something else: preemption.
Earlier this year, Tennessee passed the ELVIS Act to protect recording artists’ names, images and voices from AI-generated deepfakes. Colorado passed a law that will require businesses of all sizes in the state, as well as many AI model developers outside the state, to conduct “algorithmic impact assessments” on racial, political, religious and other biases within their models.
But the most ambitious measure is California’s still-pending bill, which would create a new administrative agency to oversee the development of the AI models themselves. Because the bill applies to any model “distributed” in California, and because any AI model generally available on the internet is distributed everywhere, this new agency would, in a practical sense, have jurisdiction over all AI models produced in the U.S.
Some who rightly worry about the impact of AI on society might cheer on such forceful legislation, believing that the U.S. is “behind” the European Union in regulating it. But they should think twice: AI regulation is not a race, and taking a few steps down the wrong regulatory path may leave us stuck there for a long time. It’s more important to regulate correctly than quickly.
Congress should therefore use preemption to put the brakes on misguided state attempts to regulate AI models.
Through preemption, the federal government reserves specific areas of lawmaking to itself. It already does this with many technologies, such as smartphones, because a patchwork of state laws for such products would create undue challenges in producing things we all rely upon.
While states should have the right to pass many AI laws — for example, policing AI-enabled deepfakes or fraud — the models themselves should be handled at the federal level, if they are regulated at all. Otherwise, the complexity of complying with as many as 50 different state model standards may deter investment in an important technology that China and other nations will fully exploit if we do not.
Congress has a few preemption options. It could simply bar states from limiting the distribution of models on the internet. Many AI models are released as “open-weight,” meaning that the underlying numbers that define the model’s capabilities are freely available for download. Although it may sound crazy, this decades-old open-source approach has produced trillions of dollars in value for the global economy.
Open source undergirds the “permissionless” nature of software innovation. Because anyone can share ideas without permission from a regulator or other centralized body, innovation proceeds much faster and more flexibly. If Congress were to restrict states from impinging on this principle, many state-based model regulations — such as licensing regimes, agency preapprovals and similar approaches — would be preempted.
Another option would be to preempt state efforts to create novel liability regimes for AI.
California’s proposal, for example, assumes that any sufficiently damaging act enabled by an AI model — even if AI was only one tool among many — is the model developer’s responsibility, rather than the sole responsibility of the person who committed the act. This is the equivalent of holding Apple responsible for things people do with their MacBooks or iPhones.
Congress could create a much more reasonable nationwide liability standard: a rebuttable presumption that users, rather than developers, are responsible for the misuse of AI models.
Finally, Congress could reserve the responsibility of setting technical standards to federal agencies such as the National Institute of Standards and Technology. NIST already houses the U.S. AI Safety Institute, which is dedicated to setting safety and evaluation standards for advanced AI models. No state could be expected to match this technical expertise, so it would be logical for Congress to keep this duty with the federal government.
In all likelihood, the ideal preemption law would contain some combination of these three approaches. Some argue that bills like California’s are essential because Congress refuses to act. But the Senate’s bipartisan AI Roadmap rebuts this criticism. It explicitly downplays new regulatory regimes in favor of applying existing laws to specific applications of AI.
Furthermore, neither presidential candidate has made regulating AI models a campaign priority. Thus, it is reasonable to conclude that federal policymakers have deliberately chosen not to pursue model-based regulations for fear that they would harm American innovation.
AI presents many unanswered and profound questions. Answering them will take time. As that process plays out, Congress should not let one state decide our technological future.
Dean W. Ball is a research fellow with the Mercatus Center at George Mason University.