Senators urge $32 billion in emergency spending on AI after finishing yearlong review
WASHINGTON (AP) — A bipartisan group of four senators led by Majority Leader Chuck Schumer is recommending that Congress spend at least $32 billion over the next three years to develop artificial intelligence and place safeguards around it, writing in a report released Wednesday that the U.S. needs to “harness the opportunities and address the risks” of the quickly developing technology.
The group of two Democrats and two Republicans said in an interview Tuesday that while they sometimes disagreed on the best paths forward, they felt it was imperative to find consensus with the technology taking off and other countries like China investing heavily in its development. They settled on a raft of broad policy recommendations that were included in their 33-page report.
While any legislation related to AI will be difficult to pass, especially in an election year and in a divided Congress, the senators said that regulation and incentives for innovation are urgently needed.
“It’s complicated, it’s difficult, but we can’t afford to put our head in the sand,” said Schumer, D-N.Y., who convened the group last year after AI chatbot ChatGPT entered the marketplace and showed that it could in many ways mimic human behavior.
The group recommends in the report that Congress draft emergency spending legislation to boost U.S. investments in artificial intelligence, including new research and development and new testing standards to try to understand the potential harms of the technology. The group also recommended new requirements for transparency as artificial intelligence products are rolled out and that studies be conducted into the potential impact of AI on jobs and the U.S. workforce.
Republican Sen. Mike Rounds, a member of the group, said the money would be well spent not only to compete with other countries that are racing into the AI space but also to improve Americans’ quality of life — supporting technology that could help cure some cancers or chronic illnesses, he said, or improve weapons systems in ways that could help the country avoid a war.
“This is a time in which the dollars we put into this particular investment will pay dividends for the taxpayers of this country long term,” he said.
The group came together a year ago after Schumer made the issue a priority — an unusual posture for a majority leader — and brought in Democratic Sen. Martin Heinrich of New Mexico, Republican Sen. Todd Young of Indiana and Rounds of South Dakota.
As the four senators began meeting with tech executives and experts, Schumer said in a speech over the summer that the rapid growth of artificial intelligence tools was a “moment of revolution” and that the government must act quickly to regulate companies that are developing it.
Young said the development of ChatGPT, along with other similar models, made them realize that “we’re going to have to figure out collectively as an institution” how to deal with the technology.
“In the same breath that people marveled at the possibilities of just that one generative AI platform, they began to hypothesize about future risks that might be associated with future developments of artificial intelligence,” Young said.
While passing legislation will be tough, the group’s recommendations lay out the first comprehensive road map on an issue that is complex and has little precedent for consideration in Congress. The group spent almost a year compiling the list of policy suggestions after talking privately and publicly to a range of technology companies and other stakeholders, including in eight forums to which the entire Senate was invited.
The first forum in September included X owner and Tesla CEO Elon Musk, Meta’s Mark Zuckerberg, former Microsoft CEO Bill Gates and Google CEO Sundar Pichai.
Schumer said after the private meeting that he had asked everyone in the room — including almost two dozen tech executives, advocates and skeptics — whether government should have a role in the oversight of artificial intelligence, and “every single person raised their hand.”
Still, there are diverse views in the tech industry about the future of AI. Musk has voiced dire concerns evoking popular science fiction about the possibility of humanity losing control to advanced AI systems if the right safeguards are not in place. Others are more concerned about the details of how proposed regulations could affect their business, from possible government oversight over the most capable AI systems to tracking of highly sought-after AI computer chips for national security.
The four senators are pitching their recommendations to Senate committees, which are then tasked with reviewing them and trying to figure out what is possible. The Senate Rules Committee is already moving forward with legislation, on Wednesday approving three bills that would ban deceptive AI content used to influence federal elections, require AI disclaimers on political ads and create voluntary guidelines for state election offices that oversee candidates.
Schumer, who controls the Senate’s schedule, said those election bills were among the chamber’s “highest priorities” this year. He also said he planned to sit down with House Speaker Mike Johnson, who has expressed interest in looking at AI policy but has not said how he would do that.
Still, winning enough votes on the legislation may not be easy. The bills that would ban deceptive AI election content and require AI disclaimers on political ads were approved by the Rules panel on party-line votes, with no GOP support. Republicans argued that the legislation would usurp states that are already acting on the issue and potentially violate political candidates’ rights to free speech.
Senate Rules Committee Chairwoman Amy Klobuchar, a Democrat from Minnesota, said that the rapid development of AI is a “hair on fire” moment for elections. And while states may be passing similar bills, she said the country is “unguarded on the federal level.”
Some experts warn that the U.S. is behind many other countries on the issue, including the European Union, which took the lead in March when it gave final approval to a sweeping new law governing artificial intelligence in the 27-nation bloc. Europe’s AI Act sets tighter rules for the AI products and services deemed to pose the highest risks, such as in medicine, critical infrastructure or policing. But it also includes provisions regulating the new class of generative AI systems like ChatGPT that have rapidly advanced in recent years.
“It’s time for Congress to act,” said Alexandra Reeve Givens, CEO of the Center for Democracy & Technology. “It’s not enough to focus on investment and innovation. We need guardrails to ensure the responsible development of AI.”
Others said the senators’ road map wasn’t tough enough on tech companies. Some groups calling for tighter AI safeguards and civil rights protections said it showed too much deference to industry priorities.
Alix Dunn is a senior adviser at AI Now, a policy research center that pushes for more accountability around AI technology. She criticized the closed-door sessions with tech CEOs. “I don’t see how it got us even an inch closer to meaningful government action on AI,” she said.
The senators emphasized the need to balance innovation with safeguards, as well as the urgency of acting.
“We have the lead at this moment in time on this issue, and it will define the relationship between the United States and our allies and other competing powers in the world for a long time to come,” Heinrich said.
___
O’Brien reported from Providence, R.I. Associated Press writer Dan Merica in Washington contributed to this report.