Senate Majority Leader Chuck Schumer (D-N.Y.) has proposed artificial intelligence (AI) regulations that would control how AI is created and the decisions it makes. The proposed “guardrails” are meant to answer key questions, such as who, where and how, while aiming to “protect” Americans. The framework would also require companies to allow a review of their AI software before its release.
All of this has the explicit goal of aligning “systems with American values.” Unfortunately, his proposal undermines some of the values he claims to protect.
Schumer’s framework threatens to limit programmers’ freedom of expression during and after the development process, going well beyond the regulations that federal authorities typically impose on commercial decisions. Beyond these constitutional concerns, excessive regulation of AI is bad for American competitiveness and inventors.
Regulating how technology is developed, as opposed to how it is used, risks slowing development and stifling creativity. A complex regulatory regime may also make it all but impossible for new firms to enter the market, handing a significant advantage to established firms that are better equipped to navigate regulatory systems.
Schumer rightly expresses concern about America’s foreign competitors getting a leg up in AI technology and cites this as a reason for his proposed regulations. His concern echoes Russian President Vladimir Putin’s comment that “Whoever becomes the leader in this sphere will become the ruler of the world.” The race for AI leadership may well define the next cold war, and America’s foreign competitors would like nothing better than for the U.S. to damage its own ability to innovate.
Artificial intelligence systems are pieces of software. They are created by programmers and, in some cases, are given the capability to adapt over time. We call this process of adaptation “learning”; however, it is nothing like humans’ general ability to learn across many domains. In most cases, the software can learn only within its area of focus. For example, a credit decision-making AI may get better at predicting risk over time by evaluating the outcomes of its past decisions, but it’s not going to somehow learn how to play chess.
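To make the point concrete, here is a minimal, hypothetical sketch of that kind of narrow learning: a credit-risk classifier that updates its weights as loan outcomes arrive, yet can never answer anything other than a credit question. The features and data are invented for illustration.

```python
# Hypothetical sketch: a narrow credit-risk model that "learns" only
# about credit decisions and nothing else. Features and data are invented.
import numpy as np
from sklearn.linear_model import SGDClassifier

# Each applicant: [income, debt-to-income ratio, years of credit history]
model = SGDClassifier(loss="log_loss", random_state=0)

# Initial training on past applicants (1 = repaid, 0 = defaulted).
X_initial = np.array([[55_000, 0.30, 7],
                      [28_000, 0.65, 2],
                      [90_000, 0.20, 15],
                      [40_000, 0.55, 4]])
y_initial = np.array([1, 0, 1, 0])
model.partial_fit(X_initial, y_initial, classes=[0, 1])

# Later, outcomes of the system's own lending decisions arrive and the
# weights are updated; that update is the "learning," and it is
# confined entirely to the credit-risk task.
X_outcomes = np.array([[62_000, 0.40, 9],
                       [31_000, 0.70, 1]])
y_outcomes = np.array([1, 0])
model.partial_fit(X_outcomes, y_outcomes)

# The improved model still answers exactly one kind of question.
print(model.predict(np.array([[48_000, 0.35, 6]])))
```

However much outcome data this hypothetical model absorbs, its entire “knowledge” is a handful of weights about credit risk; there is nothing in it that could generalize to chess or any other task.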
Even seemingly general-purpose AIs, like ChatGPT, are still quite limited. ChatGPT learns patterns of language; however, it isn’t suitable for a task like controlling a robot. AI learning is a highly controlled process that allows a system to incorporate new knowledge from its own operations, from the world around it and from training data selected by its developers.
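A toy illustration of “learning patterns of speech,” under the same hypothetical framing: a bigram model records only which word tends to follow which, so it can produce text-like output and nothing more. This is vastly simpler than a real language model, but the developer-controlled pipeline of selected data, training and generation is the same in kind.

```python
# Toy bigram model: it "learns" word-to-next-word patterns from a corpus
# chosen by the developer, and can only replay those patterns.
import random
from collections import defaultdict

corpus = "the model learns which word follows which word in the text".split()

# "Training": record transitions between adjacent words.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

# "Generation": sample from the learned patterns, nothing else.
random.seed(0)
word, output = "the", ["the"]
for _ in range(8):
    if word not in transitions:
        break
    word = random.choice(transitions[word])
    output.append(word)
print(" ".join(output))
```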
The notion that entities chosen by the federal government would tell the developers of an AI system what it can learn from and how it must learn, or would require a review before the system can begin learning, raises serious constitutional questions. This goes beyond the regulation of speech, which is itself constitutionally problematic, to regulating the way that programmers approach the intellectual challenge of developing systems.
Software code is written by humans and can be read by both humans and computers. One can therefore argue that many types of code are protected as a form of expression under the First Amendment. Its use or application, however, is inarguably open to regulation. Just as the existing software used by banks, landlords and other organizations is regulated to ensure fairness, the actions taken by an organization’s AI systems can be monitored to make sure the public’s interests are respected.
To create an effective AI regulatory framework, it’s best to draw upon existing laws and regulations. For example, laws that forbid discrimination against protected classes can be applied to AI bias. Similarly, relevant laws already exist for adjudicating cases involving damages caused by AI. By drawing on existing precedents, we can ensure that AI activities aren’t treated differently than similar activities carried out by humans or non-AI software.
Some laws may need to be adapted. We may need to consider how best to assign liability for AI-caused losses or damage among software developers, users, resellers and others. However, this does not require a special regulatory framework for AI in every area. Instead, we can amend existing regulations to address any new AI-specific considerations.
Schumer is correct that laws have often failed to keep up with technological advancements, leaving courts perplexed when trying to apply outdated statutes to new technologies. Going forward, lawmakers at all levels should craft regulations that respect fundamental rights and do not stray into regulating thought or expression. Rather than focusing on specific technologies, they should consider how technology is best applied within society as a whole.
Regulation of AI should center on how it is used, not on how it functions, to avoid regulating intellectual processes and expression. This approach leaves innovation unimpeded and sidesteps potential conflicts with constitutional free speech protections.
Jeremy Straub is the director of North Dakota State University’s Institute for Cyber Security Education and Research, an NDSU Challey Institute Faculty Fellow and an assistant professor in the NDSU Computer Science Department. The author’s opinions are his own.