
Six months later, our call to slow AI development is more crucial than ever

FILE photo: The OpenAI logo on a mobile phone in front of a computer screen displaying ChatGPT output, March 21, 2023, in Boston. (AP Photo/Michael Dwyer, File)

Imagine there has been a breakthrough in nuclear fusion, and with it the promise of nearly unlimited clean energy. Several giant companies race to build power plants with this new technology, and their prototypes work. 

But there is a problem: Nobody can show that they are safe. After the first reactor comes online, thousands of scientists, including many nuclear physicists, sound an alarm that these plants might leak or explode.

Scaled-up versions might, in extreme cases, even ignite the atmosphere, destroying all life. Experts try to rule out this scenario but fail. Even the corporations admit they cannot guarantee safety. And now, some reactors are leaking. Despite open letters, government statements of concern and an alarmed public, companies are racing to build bigger and bigger reactors. One has published reactor plans and even intends to market a do-it-yourself kit.

Would this be acceptable to you? What would you want your elected leaders to do? 

Humanity faces a very similar situation right now, with artificial intelligence.

In March, we published a letter calling for at least a six-month pause in the training of the next generation of giant AI experiments. It was signed by over 30,000 individuals, including more than 2,000 industry leaders and more than 3,000 experts. Since then, there has been a groundswell of concern about unchecked, out-of-control AI development. From Senate hearings to European Union legislation, sirens are sounding. Eighty-six percent of voters believe AI could cause a catastrophic event. 

They are right to be worried. Pre-eminent AI pioneers, who invented the techniques being used, are speaking out about the risks and even regretting their life’s work. There is another letter signed by hundreds of world experts and leaders of these corporations, suggesting that advanced AI — which they are working furiously to build — could make us extinct. 

Unlike the reactors, AI will not itself explode (though it may leak). But alongside its opportunities come myriad risks that are already manifest today. Rampant misinformation, large-scale manipulation and pervasive bias are eroding trust and weakening our society. Generative AI is causing chaos in our workforces, creative industries and democratic processes. Autonomous weapons, large-scale cyberattacks and AI-enabled chemical weapons development are in the works. Dario Amodei, CEO of the AI company Anthropic, warns that chatbots could help criminals create bioweapons within two to three years.

The creators admit these dangers. Yet they remain locked in an arms race to create ever more powerful systems, with no clear plan to make them safe or keep them under control. Several have stated that a slowdown will be necessary to prevent harm, but are unable or unwilling to say under what circumstances. They cannot even say how such a slowdown could be implemented. And they will not guarantee our safety, no matter their efforts, because they cannot. There are fundamental questions they cannot or will not answer.

The public and expert consensus is loud and clear: Regulate AI now. Eighty-two percent of voters do not trust companies to self-regulate, and a bipartisan majority want the government to step in. We know from experience that regulation can safeguard and drive innovation, as it has done for pharmaceuticals and aviation. We must do the same for AI. We must make it work for everyone, or risk losing the incredible futures it could help bring to life.

The U.S. must immediately establish a detailed registry of giant AI experiments, maintained by a U.S. federal agency. This agency should also track the huge clusters of specialized hardware used in these experiments, and work with the manufacturers of that hardware to include safety and verification features at the chip level. The U.S. government should at minimum ensure that it has the capability to trigger a pause. It has become clear that corporations are not merely reluctant to hit the brakes; the brake pedal does not even exist.

If we are going to reap the revolutionary potential of AI, regulators must enforce standards to ensure safety and security during development. They must require that developers take on the burden of proof and demonstrate that their new systems are safe before deployment, just as they must for new drugs, cars or airplanes. Lawmakers must take proactive steps to ensure that developers are legally liable for the harm their products cause.

These efforts cannot stop at home. The large-scale risks of AI affect everyone everywhere, and the upcoming UK summit is an opportunity to start the crucial task of addressing them at a global level in a way that transcends national borders and geopolitical rivalries. This kind of international cooperation is possible. We coordinated on cloning. We banned bioweapons. We signed treaties about nuclear weapons even at the height of the Cold War. We can work together on AI. 

Companies should think critically about why they are building these massive AI systems. There are a lot of vague assertions about the all-powerful, life-fixing “genies” they will become. But the health, science, poverty and other issues mentioned can all be addressed using the powerful general-purpose AI we already have, augmented with more specialized narrow AI. We should do this before pouring billions, at enormous risk, into the pursuit of superhuman general-purpose AI systems. Genies are notoriously unreliable.

Right now, tens of thousands of brand-new, cutting-edge chips are humming away in massive river-cooled data centers, growing the next generation of artificial minds. We don’t know what their powers will be, how they will think or make decisions, or how to robustly align them with human interests. But they will be more capable than most people at many things, and more capable than the finest human minds at some things.

We have a rapidly closing window of opportunity to take control of this process, as a species, before the runaway development of AI takes control of us. We must use it.

Anthony Aguirre is the executive director and secretary of the board at the Future of Life Institute and the Faggin Presidential Professor for the Physics of Information at the University of California Santa Cruz.