Senators grapple with response to AI after first classified briefing

Senators left their first classified briefing on artificial intelligence (AI) with increased concerns about the risks posed by the technology and no clear battle lines on a legislative plan to regulate the booming industry. 

The briefing Tuesday, requested after Senate Majority Leader Chuck Schumer (D-N.Y.) and others warned that lawmakers needed expertise on the rapidly developing industry, brought in top intelligence and defense officials, including Director of National Intelligence Avril Haines, Deputy Secretary of Defense Kathleen Hicks and White House Office of Science and Technology Policy Director Arati Prabhakar, to brief senators on the risks and opportunities presented by AI. 

“AI has this extraordinary potential to make our lives better,” said Sen. John Kennedy (R-La.) before pausing.

“If it doesn’t kill us first,” he added.

Congress and the administration have been scrambling to better understand the risks and benefits of generative AI in recent months, especially since the quick rise of OpenAI’s ChatGPT tool after it launched in late November. 

AI-powered chatbots like ChatGPT and Google’s Bard, as well as video-, audio- and image-based tools, are magnifying concerns about so-called hallucinations, the false information chatbots can present as fact, as well as broader risks of how the technology could be weaponized.

“One of the interesting things about this space right now is it doesn’t feel particularly partisan. So we have a moment we should take advantage of,” said Sen. Martin Heinrich (D-N.M.), one of the four working group members who called for the series of briefings on AI.

But he said lawmakers need to understand how the technology works — including the dangers that could stem from its limitations.

“Understanding how these models work is really important … It’s just predicting what sounds like a good response. That’s very different from actual intelligence and understanding that this is kind of a statistics game that’s getting better over time, but oftentimes doesn’t have any guardrails built into it,” he added.

“These models are not built to tell you the truth. They’re built to tell you something that sounds like an appropriate English language response.”

Sen. Elizabeth Warren (D-Mass.) said the large language models powering these AI systems are not designed for accuracy. 

“That creates real threats that AI can be used to sound sensible while it perpetuates one wrong answer after another,” she added. 
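The mechanics Heinrich and Warren are describing can be seen in miniature. The sketch below is a deliberately tiny Python illustration of next-word prediction in general, not the architecture of ChatGPT or Bard: it counts which words tend to follow which in a toy training text, then samples a plausible continuation, with no notion of whether the result is true.

```python
# Toy illustration of a "language model" reduced to its core idea:
# predicting the next word from frequency statistics, with no concept of truth.
import random
from collections import defaultdict

corpus = ("the capital of france is paris . "
          "the capital of france is lyon .").split()

# Build bigram counts: how often each word follows another in the training text.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    words, weights = zip(*counts[prev].items())
    return random.choices(words, weights=weights)[0]

# The model completes the sentence with whichever word is statistically plausible,
# not whichever is correct -- the "statistics game" the senators describe.
print("the capital of france is", next_word("is"))
```

Run repeatedly, the sketch finishes “the capital of france is” with either “paris” or “lyon,” because both appear equally often in its toy training text; real systems are vastly larger and more refined, but the objective remains plausibility rather than verified fact.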

But lawmakers were split over how, and even whether, to regulate AI.

Sen. Chris Coons (D-Del.) said he left the briefing “more concerned than ever that we have significant challenges in front of us and that the Senate needs to legislate to address these.” 

Coons called the briefing the latest in a “series of constructive conversations” on the risks and opportunities of AI, but he said he doesn’t yet see a bipartisan consensus on a path toward a legislative proposal to regulate the technology. 

Sen. Marco Rubio (R-Fla.), the top Republican on the Senate Intelligence Committee, said any path to regulation would be fraught.

“The one thing I’m certain of is: I know of no technological advance in human history you’ve been able to roll back. It’s going to happen. The question is how do we build guardrails and practices around it so that we can maximize its benefits and diminish its harm,” Rubio told reporters.

He also warned that lawmakers would face limitations on trying to control private entities pushing the technology across the globe.

“We can do that as far as how the government uses it or what some company in the United States does,” Rubio said. “But AI is not the kind of thing where unlike some technologies from the past, it’s not knowledge based and engineering based. So it’s not the kind of thing that you can confine to the national border. Some other country will still develop the capability that you’re not allowing in your own country, so I’m not sure it solves it from a global standpoint.”

Rubio added he isn’t opposed to regulation.

“I just don’t particularly know enough about AI yet to even understand what it is we’re trying to regulate. There’s probably some role to play in codifying how government uses it in defense realms and so forth, but beyond that I’m not prepared to give you an opinion because I think it’s something we’re still learning about.”

Sen. Mazie Hirono (D-Hawaii) described AI as “uncharted waters” but said Congress’s role is best directed at addressing the technology in the political sphere.

“There’s a sense that we should provide some parameters, especially I would say in the political arena, to enable people to know when something is AI-generated content so that they know maybe it’s not reliable. But other than that, there are many other applications and uses of AI that I don’t think we’re able to get quite the handle on,” she said.

Some lawmakers are cautioning that Congress should be wary of overregulating in a way that could harm competition — a similar point raised by tech companies leading in the space. 

“On the one hand, you don’t want to ignore risk, you don’t want to ignore obvious implications of technology. On the other hand, you don’t want to squelch it through a regulatory system,” said Sen. Mike Lee (R-Utah). 

Nick Clegg, president of global affairs at Meta, urged tech companies in a Financial Times op-ed published Tuesday to lead with transparency as they push forward with AI tools. At the same time, he said the “most dystopian warnings about AI are really about a technological leap — or several leaps” beyond where the tech is today. 

“There’s a world of difference between the chatbot-style applications of today’s large language models and the supersized frontier models theoretically capable of sci-fi-style superintelligence. But we’re in the foothills debating the perils we might find at the mountaintop,” Clegg wrote.

“But there’s time for both the technology and the guardrails to develop,” he added. 

Tuesday’s briefing is part of the plan Schumer laid out for how lawmakers will tackle regulating the booming industry, which also includes a series of expert forums that Schumer said will convene later this year.

Schumer last month revealed a framework for AI regulation. He also established a bipartisan group of senators, made up of Heinrich, Todd Young (R-Ind.) and Mike Rounds (R-S.D.), to lead on the issue with him. 

Schumer’s proposed framework follows other voluntary guidelines on AI released by the administration, through the White House’s Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework. But the lack of binding government regulation leaves the tech industry largely to set its own guidelines.

Sen. Markwayne Mullin (R-Okla.) said he expects more hearings and working groups to emerge, made up of members who are “really interested” in the topic. 

“We don’t [know a ton about AI]. It was at a 30,000-foot level and it was more of an introduction. It was very intriguing though. Basically, everybody was saying: listen, the technology has been around a long time, but this is something new that is rapidly advancing. But we’ve been using AI around us everywhere and kind of take a deep breath and let’s figure out what the good and what the bad is on it,” he said.

“Sometimes Congress has a tendency to overreact. Let’s not overreact yet because there’s a need here, but there’s also a need here to be cautious and make sure it’s not used by our adversaries.”