
The Senate doesn’t need to start from scratch on AI legislation

Senate Majority Leader Chuck Schumer of N.Y. speaks to the media, June 13, 2023, on Capitol Hill in Washington. The development of artificial intelligence is a “moment of revolution” that requires swift action from the government. That’s according to Senate Majority Leader Chuck Schumer, who said Wednesday that he is working on ambitious bipartisan legislation dealing with AI. (AP Photo/Jacquelyn Martin, File)

In June, Senate Majority Leader Chuck Schumer (D-N.Y.) launched SAFE Innovation, his framework for upcoming legislation to create the rules of the road for governing artificial intelligence (AI). While there are many points to make about the substance of the framework — heavy on fears about competition with China, light on rights-based protections for Americans — one comment stood out: Schumer made the case that the Senate is “starting from scratch” on AI regulation. 

While the senator may have political reasons for preferring a clean-slate approach, this could not be further from the truth. Fortunately for both Schumer’s team and the American public, who deserve their government’s protection now, an evidence-based democratic vision for AI policy already exists.

Indeed, shortly before the launch of SAFE Innovation, a community of computer and social scientists at the forefront of advancing research-based approaches to governing AI issued a statement of their own, urgently calling on policymakers to base forthcoming legislation on the tools we already have “to help build a safer technological future.” The statement points to a large body of research and policy recommendations that has long anticipated the harmful impacts of AI systems, and that includes a roadmap for how to “design, audit or resist AI systems to protect democracy.” Legislators should be drawing on this critical national resource of expertise and existing research — some of it publicly funded through the National Science Foundation — when designing the future of AI governance in American society.

And the frameworks for robust, rights-respecting legislation are also coming from inside the government itself. The White House Office of Science and Technology Policy’s (OSTP) Blueprint for an AI Bill of Rights puts forward five core protections that Americans should expect from AI systems, including safe and effective systems and protection from algorithmic discrimination. The AI Bill of Rights’ release was accompanied by a technical companion describing practical, technologically achievable steps that can and should be taken to protect the public. Similarly, the National Institute of Standards and Technology (NIST) launched the AI Risk Management Framework, a voluntary standard for assessing and managing risk in AI systems, with a strong call to draw on empirical research to understand not only the technical but also the societal impacts of AI systems. Like the AI Bill of Rights, the Risk Management Framework was developed with expert input from academia, civil society and industry. Both frameworks should be deeply informing congressional action on AI governance.

Scholars and policy researchers have already learned a great deal about the right approaches to AI governance, and those insights are incorporated into the AI Bill of Rights and the AI Risk Management Framework, ready for policymakers to put to use. Research on the safety and effectiveness of AI systems shows that these systems sometimes simply don’t work, and that preemptive consumer protection, liability and independent audits can help. A slew of independent investigations has shown that AI systems can discriminate based on race, gender, religion, sexual orientation, disability and other demographic categories, in sectors from education to finance to healthcare. A clear first step toward preventing such discrimination is requiring these assessments. Schumer has rightfully included explainability as a key pillar of his framework, and here, too, existing work can point toward best practices.

Senator Schumer can also learn from the steps that have already been taken in the United States and Europe to bring the force of law to AI governance. President Biden issued Executive Order 14091, which, among other actions, directed federal agencies to use their civil rights capacities to protect Americans against algorithmic discrimination. Federal agencies across the executive branch have engaged in rulemaking and other actions focused on algorithmic safety and rights for several years. Additionally, European policymakers have worked for more than two years to craft the EU AI Act; if passed, as is expected in late 2023, it will govern the use of AI systems within the European Union, directly affecting American companies.

And American lawmakers themselves are certainly not starting from scratch. From narrow but important red lines, such as prohibiting the autonomous launch of nuclear weapons, to more cross-cutting accountability measures, lawmakers have already introduced numerous AI-related bills in Congress and in state legislatures. Lawmakers concerned about moving too fast can look to already-proposed legislation such as the American Data Privacy and Protection Act and the Algorithmic Accountability Act — solid, well-understood bills, developed over several years, that address important elements of a broader AI governance framework. Sector-specific laws or regulations can also allow Congress to build on existing congressional and agency strengths: For example, the Stop Spying Bosses Act would give the Department of Labor regulatory authority to oversee workplace surveillance, and rulemaking by the Department of Health and Human Services prohibits discrimination in clinical algorithms used in covered programs.

To be sure, governing AI poses novel challenges. But the senator’s plan to hold “AI Insight Forums” this fall for Congress to “lay down a new foundation for AI policy” provides an opportunity to show that a foundation already exists — and that a robust field of experts acting in the public interest, outside the tech industry, has been working for years to build it. We need to draw on the broad expertise in AI policymaking both inside and outside of government.

America already has a blueprint for strong AI laws and a great deal of the knowledge it needs to quickly build the guardrails around AI that Senator Schumer rightly identified as necessary. 

Janet Haven is executive director of Data & Society, and a member of the National AI Advisory Committee, which advises the president and the National AI Initiative Office on AI policy matters. The above represents her individual perspective, not that of the NAIAC or any government official.

Sorelle Friedler is a senior policy fellow at Data & Society, and the Shibulal Family Associate Professor of Computer Science at Haverford College. Previously, she served as the assistant director for Data and Democracy in the White House Office of Science and Technology Policy under the Biden-Harris administration, where her work included the Blueprint for an AI Bill of Rights.


