A much-needed model for monitoring government algorithms

Amid the alarmism about algorithms exerting more and more power over our lives, people often omit a key fact: these powerful computational tools, which draw on large amounts of data to help make decisions or predict outcomes, are also spreading throughout government. Whether it is your police or fire department, the school district, or even traffic planners, local governments are rapidly embracing algorithmic systems. But where is the public involvement or oversight?

We are charting a different course, one that brings the public into discussions about whether and how government should use algorithms and supports meaningful scrutiny of these systems. This course is the product of a two-year effort by the Pittsburgh Task Force on Public Algorithms, which we hope governments across the country will consider a model for achieving both algorithmic justice and accountability in this period of rapid growth.

We already know that public algorithms, the systems that governments themselves use, are growing in prevalence. Applications include public safety, education, child welfare, benefits and resource allocation, and criminal justice, with no end in sight.

Many people might be surprised to learn that an algorithm has something to say about where their kid goes to school, whether a cop patrols their neighborhood or even how long they might sit at a red light. And when those same folks ask for an explanation about such a decision, the best they might get from a local public servant could be a shrug and a wave toward the “black box” of algorithmic decision-making.  

This growth across municipal governments is too often happening in the dark. There’s little public say over whether an algorithm might help solve a problem, no public participation in designing an algorithm and no outside reviews of systems for harms. That is a recipe for distrust in government. And it is something we cannot afford in this moment of stress for American democracy.  

Transparency issues are not the only problems. Error rates can be unacceptably high — just ask the more than 40,000 Michiganders whom an algorithm falsely flagged for suspected unemployment benefits fraud. Or consult the growing evidence of problems with facial recognition systems, especially along racial lines.  

We must do better. And I am proud to say that we are offering a path forward. 

The Pittsburgh Task Force on Public Algorithms set out to study our regional governments' use of algorithms and found that openness and public involvement varied considerably by agency. Our report offers concrete recommendations for bringing the public into weighty decisions about public algorithms, guarding against algorithmic harms and responsibly managing the growth of these systems.

Our recommendations are grounded in a simple truth: Without public trust, these tools are set up to fail. We encourage governments to take a risk-based approach, with more public involvement and consultation and greater outside scrutiny for higher-risk systems (such as those for policing or detention decisions).  

We advocate revising procurement and contract review processes to make sure that executive leaders (and the public) know when systems are coming into government. And whatever the risk level, we encourage publishing critical information about algorithmic systems so that the public and experts alike can assess them.

We believe that our task force's work offers a model for local governments struggling with the proliferation of algorithmic systems, a challenge for which no successful template yet exists in this country.

To be sure, there seems to be growing awareness in Washington of algorithmic problems. But these efforts in Congress, which have yet to succeed, are largely focused on private-sector algorithms and would leave a glaring gap in federal oversight of the algorithmic systems used by your local government. And although some local governments have taken action themselves, too often those efforts, such as facial recognition bans and New York City's new law requiring audits only of the algorithms that employers use in hiring or promotion, are narrow in scope.

There is a tremendous opportunity in this country to fashion a framework for managing and harnessing public algorithmic systems. Our task force has offered recommendations that we believe can achieve that goal. We are seizing the initiative in Pittsburgh — and humbly hoping that others will follow our lead.

David J. Hickton is the founding director of The University of Pittsburgh Institute for Cyber Law, Policy, and Security, which hosted the Pittsburgh Task Force on Public Algorithms. He previously served as U.S. Attorney for the Western District of Pennsylvania. 
