
How ‘killer robots’ can help us learn from mistakes made in AI policies

(AP Photo/Eric Risberg) San Francisco Police Chief Bill Scott answers questions during a news conference on May 21, 2019, regarding allowing police to use potentially lethal, remote-controlled robots in emergency situations. Civil rights advocates are critical of the militarization of police.

The use of lethal robots for law enforcement has turned from a science fiction concept into news snippets, thanks to recent high-profile debates in San Francisco and Oakland, Calif., as well as their actual use in Dallas. The San Francisco Board of Supervisors voted 8-3 to grant police the ability to use ground-based robots for lethal force “when risk of loss of life to members of the public or officers is imminent and officers cannot subdue the threat after using alternative force options or other de-escalation tactics.” Following immediate public outcry, the board reversed course a week later and unanimously voted to ban the lethal use of robots. Oakland underwent a less public but similar process, and in January the Dallas Police Department used a robot to end a standoff.

All of these events illustrate major pitfalls with the way that police currently use or plan to use lethal robots. Processes are rushed or nonexistent, conducted haphazardly, do not involve the public or civil society, and fail to create adequate oversight. These problems must be fixed in future processes that authorize artificial intelligence (AI) use in order to avoid controversy, collateral damage and even international destabilization.

The chief sin that a process can commit is to move too quickly. Decisions about how to use AI systems require careful deliberation and informed discussion, especially with something as high-stakes as the use of lethal force. A counterexample here is the Department of Defense (DOD) Directive 3000.09, which covers the development and deployment of lethal autonomous systems. Because it lacks clarity around new technology and terminology, this decade-old policy is undergoing a lengthy, but deliberate, update. For San Francisco and Oakland, the impetus for speed was a California law requiring an audit of military equipment, but San Francisco’s debate started too late to allow real deliberation, and Oakland’s was conducted in an entirely impromptu fashion.

This was reinforced by the fact that police in both cities already had robots (albeit not armed, in San Francisco’s case) in their inventories; if the use of robots were not approved, they argued, the equipment would have to be divested, creating an “authorize it or lose it” mentality. Procurement should be covered by the policy on autonomous systems, not treated as an afterthought to avoid losing equipment already on hand. In a functional process, procurement follows authorization, not vice versa.

Slowing down and avoiding sunk costs is not enough on its own, however; the process itself must be improved. In San Francisco, the debate involved only the board of supervisors and the police department, which had a hand in drafting the authorization under discussion. Oakland’s process started as the council discussed robots alongside staples of police equipment such as stun grenades. When considering the deployment of AI, it is important to solicit the viewpoints of civil society representatives, whose expertise in technology policy, law, human rights and artificial intelligence more generally could prove invaluable in producing nuanced policies.

Additionally, and especially for law enforcement issues, the public needs to have more involvement in these processes; otherwise, it will be forced to turn to protests to make its voice heard. In contrast, consider the robust public discussion among citizens, civil society, and companies over law enforcement use of facial recognition software or California’s Bot Bill. The ideal process fosters such discussion.

Finally, it is critical to consider oversight mechanisms from the start. San Francisco neatly sidestepped this with a nebulous mandate. All four of the requirements for the use of force that the board approved need additional guidance. Without specific definitions of “risk of loss of life,” “imminent,” “alternative force options” and “de-escalation tactics,” and with no other authority positioned to challenge the police’s interpretation of them, these are largely toothless stipulations. But San Francisco still did more than Oakland, which did not outline even cursory oversight mechanisms for the police and relied solely on the standard police authorization to use lethal force. These cases indicate that a separate body should carry out oversight, with clear guidelines about when and how autonomous systems may be deployed.

As AI systems continue to improve, debates are required about how and where government bodies deploy them. Each organization should have its own processes: the Department of Defense will have stakeholders and systems different from those of local law enforcement. But some practices should be constant. Processes need to be deliberate and not rushed; appeals to the sunk cost fallacy should be avoided; other stakeholders need to be consulted early and often; and oversight needs to be built in from the beginning.

Without taking these steps, it will continue to be difficult to build trust with the public and the international community that AI can be deployed responsibly. While recent events may have sparked a public outcry over the dangers of “killer robots,” we should not lose sight of the danger that poor processes create when deploying AI systems.

Michael Depp is a research associate at the Center for a New American Security, where he focuses on artificial intelligence safety and stability. Follow him on Twitter @michaeljaydepp.

