The new AI competition is over norms
Much of the discussion of nation-state competition in artificial intelligence (AI) focuses on relatively easy-to-quantify phenomena: funding, technological advances, access to data and computational power, and the speed of AI industrialization. However, a central element of AI leadership is something far less tangible: control over the norms and values that shape the development and use of AI around the world.
The U.S. government overlooked this dimension of AI development for years, but the last couple of months indicate the beginnings of a change of course. If the U.S. hopes to maintain global AI leadership, the government must continue to stake out a comprehensive, positive vision, or we may find that the future of AI is a world few of us want to live in.
Until recent months, the U.S. government had remained relatively quiet on the topics of AI values and ethics. A May 2018 White House AI Fact Sheet avoided the topics altogether. Ten months later, however, the strategy shifted, and the U.S. government is increasingly interested in leading the design of a normative framework for the development and use of AI technologies. The White House Executive Order on AI explicitly mentioned the values of privacy and civil liberty, and stated that the U.S. has a role to play in “shaping the global evolution of AI in a manner consistent with our Nation’s values, policies, and priorities.”
Moreover, on March 19, the White House launched AI.gov, an online resource showcasing the administration’s efforts and commitments on AI. One of its five sections, titled “AI with American Values,” describes U.S. “core values” as including freedom, guarantees of human rights, the rule of law, stability in our institutions, rights to privacy, respect for intellectual property, and opportunities to all to pursue their dreams. The site says, “The AI technologies we develop must also reflect these fundamental American values and our devotion to helping people.”
Certainly, U.S. history includes many instances of failing to live up to these lofty goals. Any articulation of principles or values should be accompanied by a plan for how they can be achieved. Moreover, “national values” necessarily encompass enormous diversity and must allow room for discussion and dissent, especially by historically marginalized communities. Nonetheless, values are already being embedded into AI systems and shaping their development and use, so we can no longer hide behind fantasies of technological neutrality. Decisions about when and how to use AI systems throughout society are value-laden and consequential. And not everyone agrees on the best ways forward.
Numerous nations have articulated the importance of protecting their own national values in the age of AI, but these values differ. While French President Emmanuel Macron has championed the importance of “national cohesion” in the face of tools that can easily segment and discriminate, U.S. President Donald Trump has lauded AI developments in part for their potential to “create vast new wealth for American workers and families.” There is no single shared set of aspirations that will support a safe and beneficial future for all. A comparative study of AI strategies from ten countries, which I authored and the UC Berkeley Center for Long-Term Cybersecurity (CLTC) recently published, illustrates significant divergences and gaps in AI strategies around the world. The report also highlights several synergies, such as the shared desire to create reliable AI systems, which could support greater multilateral cooperation.
The CLTC report uses an “AI Map” to highlight twenty systemic threats and opportunities posed by AI and to indicate the issues prioritized by different actors. The framework shows, for example, how the U.S. has been an outlier in not addressing a wider range of societal impacts of AI. The analysis also exposes tensions between AI visions: only two (or fewer) national strategies explicitly address inequality, human rights, or disinformation, and only half prioritize the safety of AI systems.
As the implications of AI have come into sharper focus, governments have recognized the need to protect the interests of their citizens. However, national decisions are not made in a vacuum, and their effects can be far-reaching. When the EU implemented the General Data Protection Regulation (GDPR), organizations and companies well beyond the region felt its influence. When a major international actor establishes a standard on an ethical or normative issue, as the EU has done on privacy with the GDPR, that standard can largely become the default for the world. Given the transnational nature of AI development, use, and impacts, nations must contend with globally relevant values for AI.
The U.S. federal government has notably altered its AI strategy to emphasize protecting values such as civil liberties and privacy, and to call for technical standards that support safe AI development. This helps align the nation with its allies and is a step in the right direction. However, a more proactive stance is needed. The forefront of AI competition is not just technological development but also the development of norms that will influence how the technology shapes the world. The U.S. must provide a comprehensive, positive vision for AI to stake out its leadership proposition and to earn the trust that lasting AI leadership will require.
Jessica Cussins Newman is a Research Fellow at the UC Berkeley Center for Long-Term Cybersecurity, where she focuses on digital governance and the security implications of artificial intelligence. She is also an AI Policy Specialist with the Future of Life Institute and a Research Advisor with The Future Society. She has previously studied at Harvard University’s Belfer Center, and has held research positions with Harvard’s Program on Science, Technology & Society, the Institute for the Future, and the Center for Genetics and Society. She holds degrees from the Harvard Kennedy School and University of California, Berkeley. Follow her on Twitter @JessicaH_Newman.