The right tools, employees and data can help the law get ahead of AI
Hockey great Wayne Gretzky’s advice to “skate where the puck is going, not where it has been” is apt for policymakers seeking to craft future-focused responses to artificial intelligence, given the breakneck pace of its advancement.
AI tools are being applied more broadly across the economy and promise meaningful changes in the way knowledge work is done. At the same time, though, AI is also being used to generate child sexual abuse material, collude to raise rents and design bioweapons.
Given the dual-use potential of this emerging technology, President Biden’s executive order on AI is an important first step toward a regulatory regime that protects against the possible perils while promoting AI’s potential, but more is needed, particularly on security issues.
The Department of Homeland Security is the lead agency for crafting responses to the security threats associated with AI. DHS’s Cybersecurity and Infrastructure Security Agency is responsible for securing critical infrastructure against AI threats and vulnerabilities. Another catastrophic risk associated with AI is its potential use in designing and building weapons of mass destruction, and DHS is the lead agency for studying those risks as well. DHS is also supporting the National Institute of Standards and Technology as it builds on its AI risk management framework.
To meet these demands, DHS will need more personnel who understand AI and its risks and who can work through the implications and challenges of any policy responses that may be devised. If the federal government is to respond successfully to AI, it will need the ability to monitor and analyze the technology’s progress. Relying on hype and speculation will lead to a dangerous place.
For example, for nearly a decade the deployment of fully self-driving cars was said to be mere months away. While there has been progress on driver assistance, overreliance on unsafe “self-driving” features has led to collisions and deaths. Similar claims that human-level AI is just around the corner should be met with skepticism, though developments should be closely monitored.
Along these lines, the administration’s executive order requires developers of potentially dangerous AI foundation models to supply the federal government with key information about the models they produce and how those models are being trained and tested. To make this information useful, the government will need the analytic capability to make sense of the data and translate it into risk assessments.
Additionally, because this is new territory, novel policy analysis techniques such as gaming, robust decision-making and modeling under deep uncertainty are likely to be more relevant than approaches that rely heavily on empirical analysis.
In gaming, a scenario is designed to explore a core set of policy questions; participants acting as decision-makers describe how they would respond to the scenario and to one another’s actions; and recommendations are then drawn from discussion of those actions.
Robust decision-making describes a collection of methods designed to find policy options that perform well across a wide range of plausible futures. Similarly, modeling under deep uncertainty tests different policy options in cases where key factors, such as the rate of advancement, the pace of adoption or the technological requirements of AI systems, are disputed or unknown.
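The core logic of robust decision-making can be shown in a minimal sketch. The Python example below uses invented policy options, plausible futures and payoff numbers purely for illustration; it computes each option’s regret, meaning the gap between its payoff and the best achievable payoff in a given future, and favors the option whose worst-case regret is smallest.

    # Minimal illustration of robust decision-making via minimax regret.
    # The policy options, futures and payoff values here are hypothetical,
    # invented for this sketch; real analyses use simulation models and
    # thousands of futures.
    payoffs = {
        "strict licensing":    {"slow AI": 6, "fast AI": 9, "uneven adoption": 5},
        "voluntary standards": {"slow AI": 8, "fast AI": 3, "uneven adoption": 6},
        "tiered oversight":    {"slow AI": 8, "fast AI": 8, "uneven adoption": 6},
    }
    futures = ["slow AI", "fast AI", "uneven adoption"]

    # Best achievable payoff in each future, across all options.
    best_in_future = {f: max(p[f] for p in payoffs.values()) for f in futures}

    # An option's regret in a future is its shortfall from that best payoff;
    # its max regret is the worst such shortfall across all futures.
    max_regret = {
        option: max(best_in_future[f] - scores[f] for f in futures)
        for option, scores in payoffs.items()
    }

    # The robust choice minimizes worst-case regret rather than betting
    # on any single forecast of how AI will develop.
    robust_choice = min(max_regret, key=max_regret.get)
    print("Max regret by option:", max_regret)
    print("Most robust option:", robust_choice)

Under these invented numbers, the option that is never far from best in any future beats the options tuned to a single forecast, which is the essence of planning under deep uncertainty.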
Policymakers in DHS, and in the federal government more broadly, will need these kinds of forward-looking capabilities as they begin to respond to the security challenges raised by AI.
To skate where the puck is going, you need to know the direction and speed at which the puck is moving, have the ability to plan a way to get there and possess the skills to execute that plan. To respond successfully to advancements in AI, the federal government will need a workforce familiar with AI, capabilities to collect useful information on the trajectory and pace of AI’s development, and tools to plan responses that weigh the various tradeoffs.
Biden’s executive order is a start, but more work is needed.
Carter C. Price is research quality assurance manager for the Homeland Security Research Division and a senior mathematician at the nonprofit, nonpartisan RAND Corporation. Jack Riley is vice president and director of RAND’s Homeland Security Research Division.