
An indispensable alliance on artificial intelligence 

President Joe Biden and British Prime Minister Rishi Sunak shake hands at the conclusion of a meeting in the Oval Office of the White House in Washington, Thursday, June 8, 2023. (AP Photo/Susan Walsh)

When I visited the White House earlier this year, I called the relationship between our two countries the indispensable alliance. The world is a safer, better and more prosperous place when we stand together, as we have throughout our history. And we do so again today, from the military and humanitarian aid we’re sending to Ukraine to our urgent diplomacy in the Middle East. 

But right now, one of the greatest challenges facing American and British leadership is artificial intelligence (AI). As President Biden said back in June: “In the history of human endeavor, there’s never been as fundamental a technological change.” He was right. And as the world’s leading democratic AI powers, the United Kingdom and United States must urgently work together on proposals for how we govern this transformative technology. 

Like the coming of electricity or the birth of the internet, AI will bring new knowledge, new opportunities for economic growth, new advances in human capability and the chance to solve global problems we once thought beyond us. AI can help solve world hunger by preventing crop failures and making it cheaper and easier to grow food. It can help accelerate the transition to net zero. And it is already making extraordinary breakthroughs in health and medicine, aiding us in the search for new dementia treatments and vaccines for cancer. 

But like previous waves of technology, AI also brings new dangers and new fears. So, if we want our children and grandchildren to benefit from all the opportunities of AI, we must act — and act now — to give people peace of mind about the risks. 

What are those risks?  

For the first time, the British government has taken the highly unusual step of publishing our analysis, including an assessment by the U.K. intelligence community. Our aim was to help the world have a more informed and open conversation.  

Our reports provide a stark warning.  

AI could be used for harm by criminals or terrorist groups. The risks of cyberattacks, disinformation and fraud pose a real threat to society. And in the most unlikely but extreme cases, some experts think there is even the risk that humanity could lose control of AI completely, through the kind of AI sometimes referred to as “superintelligence.”  

We should not be alarmist about this. There is a very real debate happening, and some experts think such a loss of control will never happen. 

But even if the very worst risks are unlikely to materialize, they would be incredibly serious if they did. So, leaders around the world have a responsibility to recognize those risks, and act. Many of the loudest warnings about AI have come from the people building this technology themselves, and the pace of change in AI is simply breathtaking: every new wave of models will be more advanced and better trained, built on better chips and more computing power. 

So, what should we do?  

First, governments have a role. As the country where so many of the foremost AI companies were founded, the U.S. has rightly been leading the way. But as the European home of many of those companies — from OpenAI to Google DeepMind to Anthropic — the U.K. has a responsibility, too. 

That’s why we’ve just announced the first ever AI Safety Institute. Our institute will bring together some of the most respected and knowledgeable AI experts in the world. They will carefully examine, evaluate and test new types of AI so that we understand what those models can do, exploring all the risks, from social harms like bias and misinformation through to the most extreme of all. And we will share those conclusions with other countries and companies to help keep AI safe for everyone. 

But AI does not respect borders. No country can make AI safe on its own. So our second step must be to increase international cooperation. That starts this week, at the first ever Global AI Safety Summit, which I’m proud the U.K. is hosting. Many of the leading AI companies themselves will attend, alongside civil society experts and advanced AI countries. I’m delighted Vice President Kamala Harris will be there. 

What do we want to achieve at this week’s summit? I want us to agree to the first ever international statement about AI risks. Right now, we don’t have a shared understanding of what they are, and without that, we cannot work together to address them. I’m also proposing that we establish a truly global expert panel, nominated by those attending the summit, to publish a State of AI Science report. And over the longer term, my vision is for a truly international approach, where we collaborate with partners to ensure AI systems are safe before they are released. 

None of that will be easy to achieve. But leaders have a responsibility to do the right thing, to be honest about the risks and to make the right long-term decisions to earn people’s trust, giving them peace of mind that we will keep them safe. If we can do that — if the U.S. and U.K. can work with partners around the world to get this right — then the opportunities of AI are extraordinary. And we can look to the future with optimism and hope. 

Rishi Sunak is prime minister of the United Kingdom.