
In the AI era, adaptable policy is a necessity

The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT, Tuesday, March 21, 2023, in Boston. (AP Photo/Michael Dwyer, File)

The rapid rollout of generative artificial intelligence tools such as OpenAI's ChatGPT and Google's Bard has caused an uproar among leaders in government, industry and academia, who are now clamoring for ways to control the development and use of this emerging technology.

The AI field is constantly evolving. That should not stop policymakers from intervening, but their approach to regulation and oversight must balance competing goals and remain responsive to rapidly changing geopolitical and technological conditions.

In the context of AI, three key goals are essential for government policymakers: incentivizing AI innovation, limiting the misuse of AI by authoritarian governments and other actors, and ensuring consumers of AI-enabled products and tools are not harmed. Pursuing these goals simultaneously usually creates conflict and tension.

For example, when we impose export controls on certain chips to slow our adversaries' advances in AI, could that inadvertently hamper U.S. companies' ability to develop and produce the most advanced semiconductor technology? Or more fundamentally, do we know whether the controls have actually slowed down the adversary in question? A similar situation arose for the satellite industry in the 1980s.

Similarly, does the blind pursuit of economic growth and innovation lead us to make unacceptable sacrifices in security? The U.S. has long placed an emphasis on innovation over careful deployment. We’ve seen many instances where “moving fast and breaking things” in software development, aerospace, transportation and other sectors has caused real-world harm and even loss of life.

These tensions make it virtually impossible for the government to accomplish all of its goals in every situation. The responsibility then rests with policymakers to balance these tensions and chart a successful path for the nation.

Furthermore, any effort to steer the trajectory of AI and other emerging technologies should include a system for monitoring whether the measure is working as intended or needs to be adjusted. Without feedback mechanisms in place, the U.S. risks pursuing inappropriate policy responses and potentially ceding technological leadership to peer innovators.

Groups like the Organization for Economic Cooperation and Development (OECD) have already encouraged leaders to pursue more adaptable, creative policy solutions when addressing the systemic problems facing our society. While this framework can be applied to virtually all realms of policy, it is particularly useful for addressing issues in the technology and national security space.

How to balance strategic technology and national security goals differs across countries and regions depending on their economic realities, geopolitical standing and other factors.

In AI policy, for instance, the U.S. has invested substantially in developing the technology while relying largely on the private sector and the courts to create guardrails for the technology. In contrast, the European Union is leading the charge to develop technical standards for AI safety. The bloc has played a similar watchdog role in other areas of tech policy, such as data privacy and antitrust. Each approach carries its own set of implications, underscoring the often precarious balance between innovation and safety, speed and security, efficiency and resilience, and collaboration and competition.

Effective strategy requires policymakers to acknowledge these tensions and determine the equilibrium that works best for their country.

Our leaders have a wide variety of tools they can use to strike that balance. Some relate to specific government functions and authorities, such as procurement and taxes. Others are more general purpose, such as international collaboration and information-sharing. Many, if not all, of these levers can be used to shape the AI landscape. Immigration and workforce development policies affect the size and composition of the AI talent pool; economic controls can advance or impede various countries’ ability to compete in the AI market; investments in research and infrastructure can accelerate and secure a country’s leadership in the technology. 

Today, discussions around mitigating AI risks are mostly focused on a single policy lever: regulation. But regulations are just one tool in policymakers' arsenal, and we cannot rely on a single policy lever when addressing multifaceted problems. Government leaders have a variety of ways to combat technological risk beyond regulation, such as funding AI safety research, empowering competition enforcers and expanding the market for safe AI tools through procurement.

However, these alternative policies also come with tradeoffs. Safety research may divert funds from other fields. Competition policies may reduce research and development spending at large incumbent companies. Market-making procurement programs may disempower small firms that lack the resources to navigate the federal purchasing process.

Policymakers must consider the tradeoffs before pursuing any particular path.

It is impossible to know for certain what the outcomes and interactions of specific policies will be until they are enacted. When pursuing these interventions, it is therefore essential that policymakers create mechanisms that allow them to monitor the impacts of their actions in real time, to understand what works well and what needs adjustment. Only through regularly updated monitoring systems will leaders be able to spot emergent trends and change course when policies are no longer serving their intended purpose.

This is especially important in a rapidly moving technical area such as AI. Policymakers can lay the groundwork for these feedback systems in the short term by investing in incident tracking, third-party auditing and monitoring of the data and models used in critical applications, and by building a monitoring system into each policy intervention in the federal budget and bipartisan legislation.

In a world of peer nation-state innovators and rapidly moving emerging technologies, adaptability isn't optional; it's a necessity. Our policy framework needs to be fluid, flexible and forward-thinking. This commitment to continuous monitoring and adaptability is the key to thriving in a rapidly changing world. As we navigate this terrain and pivot toward a systems-oriented strategy for tech and national security policy, adaptability should be our compass.

Our recent report from Georgetown University's Center for Security and Emerging Technology provides a detailed framework for how policymakers can begin examining the interactions and tradeoffs between different policies and constructing more informed, effective and adaptable long-term strategies.

The rapid proliferation of AI tools offers a unique opportunity to shift towards a more integrated, systems-oriented approach to policymaking. By making wise use of our policy levers, constantly evaluating their impacts, and standing ready to adjust as necessary, we can rise to the challenges and take advantage of the opportunities AI presents.

Jack Corrigan is a senior research analyst at Georgetown University’s Center for Security and Emerging Technology, where Dewey Murdick is executive director.


