
Blumenthal, Hawley AI framework could damage AI industry, violate First Amendment 

This illustration photograph taken in Helsinki on June 12, 2023, shows an AI (Artificial Intelligence) logo blended with four fake Twitter accounts bearing profile pictures apparently generated by Artificial Intelligence software. (Photo by OLIVIER MORIN/AFP via Getty Images)

As our nation grapples with the fast-paced evolution of artificial intelligence (AI), it is crucial to strike the right balance between mitigating AI's potential risks and fostering innovation. The bipartisan AI framework from Sens. Richard Blumenthal (D-Conn.) and Josh Hawley (R-Mo.), despite its laudable intentions, such as protecting children and promoting transparency, threatens to stifle AI innovation by regulating AI development instead of just its use. In doing so, it may even infringe upon our First Amendment rights. 

Regulating development is a bad idea. AI development must be treated as protected speech, subject to the same prohibition on prior restraint as any other form of expression. Writing software as code is not meaningfully different, as a form of expression, from describing it in narrative prose; the various AI-powered systems that convert text descriptions into functional software demonstrate how interchangeable the two forms are. Regulating the development of AI therefore amounts to regulating the use of software-descriptive language. It will also be difficult for regulators to differentiate between AI and other types of software.  
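To make the point concrete, here is a minimal, hypothetical illustration (the scenario and function name are invented for this example, not drawn from any particular system): the same instruction expressed first as an English sentence and then as the short Python function a text-to-code tool might produce from it.

    # Narrative form: "Given a list of ages, return only those
    # 65 or older so an outreach team can prioritize them."

    # Code form: a direct translation of that sentence.
    def seniors(ages):
        """Return the ages that are 65 or older."""
        return [age for age in ages if age >= 65]

    print(seniors([42, 70, 65, 18]))  # prints [70, 65]

Either form communicates the same idea to a reader; the code is simply the more precise dialect.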

Beyond First Amendment issues, this type of regulation is bad for America. We want scientists and engineers in the lab innovating, not filling out form after form to seek agency approval. AI has been shown to help the paralyzed, prevent crime, help detect cancer, prevent veteran suicide, detect and prevent disease, protect firefighters, aid soldiers on the battlefield, prevent the sextortion of youth, help the elderly, prevent heart disease, protect critical infrastructure, prevent cyberbullying, improve road and construction site safety, and even help explore space. This is just the beginning; the potential beneficial uses of AI go far beyond these.  

The costs and burdens of regulation will stifle innovation. They may dramatically damage AI startups that lack the knowledge, resources and time to navigate a complex regulatory framework. At a time when federal agencies, such as DARPA, are devoting resources to encourage AI development, Congress would be foolish to stifle it with regulatory paperwork.  

Moreover, in addition to damaging America's AI industry, regulating AI development won't accomplish anything beneficial. Any AI that might have a damaging effect will still be developed, just somewhere else. In the worst case, it may be used against the United States, which will be left without the defenses it could have built had innovation been encouraged. At best, we may fall behind other countries, lose our competitive edge, and end up without an understanding of crucial AI technologies developed elsewhere.  

We also don't need a new AI regulatory agency. Beyond the cost to taxpayers, the duplication of effort, and the confusion a new agency would create, AI shouldn't be treated differently from a human (or another piece of software) making the same decisions.

Protected-class discrimination should be proscribed. Period. We don't need to handle a discriminating AI differently from a discriminating human or an automated non-AI software process. The same argument holds in numerous other areas. Consumers should be protected from unfair trade practices regardless of whether an AI is involved. Drugs and medical devices should be safe, whether designed or implemented by an AI or not. Fortunately, we already have agencies that do all of these things, such as the Equal Employment Opportunity Commission (EEOC), Federal Trade Commission (FTC), and Food and Drug Administration (FDA). Having one regulator for non-AI issues and another, lacking the domain expertise, for AI-related issues makes little sense. It may result in a mess of confusing and potentially contradictory regulations. Moreover, how will we deal with the next new technology that affects these areas? Create yet another agency?  

Instead of creating a new agency, Congress should allocate funds to each relevant agency to bring together experts from within government, the private sector, and academia to identify and propose fixes to any existing laws that may need updating in light of AI. Congress should also give each agency funds to promote AI development in its respective area. This funding can encourage companies to develop AI technologies for the public good and further drive America's leadership in AI. Finally, Congress should invest significantly in America's educational system to prepare everyone who is able and willing to use and develop AI systems. This is a far better use of resources than a new federal bureaucracy, and it will pay dividends, rather than impose costs, over the long term.  

Jeremy Straub is the director of the North Dakota State University Cybersecurity Institute, a Challey Institute senior faculty fellow, and an associate professor in the NDSU Department of Computer Science. The opinions presented are his own.  
