US, other nations unveil deal to keep AI safe from rogue actors

This illustration photograph taken in Helsinki on June 12, 2023, shows an AI (Artificial Intelligence) logo blended with four fake Twitter accounts bearing profile pictures apparently generated by Artificial Intelligence software. (Photo by OLIVIER MORIN/AFP via Getty Images)

The United States, along with 17 other countries, unveiled an international agreement that aims to keep artificial intelligence (AI) systems safe from rogue actors and urges providers to follow “secure by design principles.”

The 20-page document, jointly published Sunday by the U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom’s National Cyber Security Centre, provides a set of guidelines to ensure AI systems are built to “function as intended” without leaking sensitive data to unauthorized users.

Broken into four sections, the guidelines lay out steps that can be taken at each stage of the process, from AI system design and development to deployment and maintenance. They cover threats to AI systems, the protection of AI-related assets such as models and data, the responsible release of AI systems, and the importance of monitoring those systems after they are released.

Other countries featured in the agreement include Australia, Canada, Chile, the Czech Republic, Estonia, Germany, Israel, Italy, Japan, Nigeria, Poland and Singapore.

While the agreement is nonbinding and consists largely of general recommendations for AI providers, Jen Easterly, the director of CISA, told Reuters the guidelines underscore the importance of prioritizing the safety of AI systems.

“This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs,” Easterly told the news outlet.

The race among tech firms to deploy AI systems has accelerated over the past year, though lawmakers and some tech leaders have raised concerns about the risks of uncontrolled AI development.

Last month, the Biden administration issued a sweeping executive order focused on managing the risks of AI. The order sets new safety standards, lays out worker protection principles and directs federal agencies to accelerate the development of techniques for training AI systems while preserving the privacy of training data.

The order also calls for an evaluation of how agencies collect and use commercially available information containing personal data, and introduces a tool to expand grants for AI research in areas such as health care and climate change.


