The views expressed by contributors are their own and not the view of The Hill

Robocalls are the least of our AI worries

Jensen Huang, co-founder and chief executive officer of Nvidia Corp., speaks during the Nvidia GPU Technology Conference (GTC) in San Jose, California, US, on Monday, March 18, 2024. Dubbed the Woodstock festival of AI by Bank of America analysts, GTC this year is set to draw 300,000 in-person and virtual attendees for the debut of Nvidia’s B100. Photographer: David Paul Morris/Bloomberg via Getty Images

President Biden’s use of the phrase “artificial intelligence” in his State of the Union address this month is a historic first. AI has entered Americans’ living rooms through astonishing tools like ChatGPT and DALL-E, and the president correctly determined that AI is too important to ignore.

What’s less clear is whether Biden really understands the scale of the risks posed by advanced AI. His only concrete policy proposal on this topic was to “ban A.I. voice impersonation.” Voice impersonation is worth addressing, of course; nobody wants fake robocalls hitting up their grandparents for cash or harassing voters with disinformation. But the problems posed by AI go far beyond increased influence for scam artists.

Last Monday, the State Department released an alarming report from Gladstone AI, a contractor that has been training federal officials to cope with the challenges of AI. The report urges the government to “move quickly and decisively to avert substantial national security risks stemming from artificial intelligence (AI) which could, in the worst case, cause an extinction-level threat to the human species.”

According to the report’s authors, there are two ways AI could go catastrophically wrong. 

First, AI could “design and even execute catastrophic biological, chemical, or cyberattacks, or enable unprecedented weaponized applications in swarm robotics.” AI is already earning top scores on professional and graduate-level exams like the bar exam and the MCAT. In a few years, AI could easily graduate from this academic setting and start carrying out real-world engineering tasks.

Second, AI labs could permanently lose control of the systems they’re developing, allowing autonomous AIs to escape from the lab, make copies of themselves and compete against humans for energy and resources. Even if such AIs are relatively friendly, we should think twice about unleashing a competitor that’s smarter than we are — I would prefer not to live in a zoo.

Leading computer scientists agree that these risks are real and could take place in the very near future. 

“I firmly believe that urgent efforts, preferably in the coming months, are required,” said Yoshua Bengio, a professor at the University of Montreal who is widely hailed as a “godfather of AI” for his pioneering work on deep learning.

Or, as former Google researcher and Turing Award winner Geoffrey Hinton put it, “I think in five years’ time [AI] may well be able to reason better than us.”

President Biden released an executive order last October that at least referenced some of these catastrophic risks, setting up panels and working groups to try to figure out how the U.S. government can “ensure that AI systems function as intended” and “are resilient against misuse or dangerous modifications.” At this point, though, we need firm requirements backed by real funding for meaningful enforcement, not just more committees.

The White House knows how to push for funding when it comes to business opportunities. As Biden reminded us in the State of the Union, after the Chips and Science Act “the United States is investing more in research and development than ever before,” and rather than imports, “private companies are now investing billions of dollars to build new chip factories here in America!” The Commerce Department plans to distribute $39 billion in government subsidies to those businesses. 

If that’s the price of American industrial dominance, then so be it — but where’s the corresponding funding for safety? 

Spending just one percent of the CHIPS subsidies on AI safety would allow the National Institute of Standards and Technology to hire its own staff to run the new AI Safety Institute instead of begging for volunteers to work in crumbling, snake-infested facilities. It would also fund the “interim safeguards” called for in Gladstone’s report, such as an AI Observatory to keep the government informed about fast-moving developments in advanced AI, and limits on the wholesale export of America’s cloud computing capacity to rival states like China.

Instead, the president’s FY2025 budget calls for giving Commerce only $65 million “to safeguard and promote AI, including protecting the American public against its risks.” That’s less than 0.2 percent of the subsidies.

During the State of the Union, President Biden demanded that Congress “harness the promise of A.I. and protect us from its peril.” That’s a good first step, but not if your idea of peril is limited to annoying robocalls. 

As Biden moves from the State of the Union to the campaign trail, we’ll be watching to see what else he has to say about AI safety, and what kind of concrete action would likely come out of a second Biden administration.

Jason Green-Lowe is the executive director of the Center for AI Policy, a nonpartisan advocacy group urging Congress to require safe AI.


