
To regulate AI, start with data privacy  


Last week, a Black woman in Detroit filed suit in federal court because the Detroit Police Department arrested her for a crime she did not commit. Police alleged that the woman, then eight months pregnant, had been involved in a carjacking. The New York Times reported that her arrest was more than a case of simple mistaken identity by an unreliable eyewitness. The police had relied on artificial intelligence to identify a suspect, and the program had wrongly pushed them towards a woman who was entirely innocent.  

Cases like this one have launched a conversation about the need for safeguards around the use of artificial intelligence to protect individuals across the country. At the same time, with the advent of ChatGPT and other generative AI, official Washington and communities around the country are growing increasingly concerned about the harm this new technology might inflict. Yet the reality is that artificial intelligence is only one part of a much larger, ongoing threat, and efforts to regulate its use and scope are just one element of a missing, comprehensive technology policy we desperately need. Instead of focusing our efforts narrowly on AI’s impact, we should broaden our scope — and protect the privacy of all Americans.  

Right now, the United States is one of only a few major countries without a comprehensive data protection law. Without these safeguards, we are at risk of prosecution and harassment over our personal decisions, experiences and the most intimate parts of our identity, all powered by troves of data available on demand to the government and to private parties willing to pay. As the use of artificial intelligence grows, strong guardrails that limit how companies can collect, sell and use data are more important than ever.  

The impact of our digital Wild West is real.  

An investigation by The Markup found advertisers able to target ads to “struggling seniors” and the “easily deflated,” among many other categories. Racially biased facial recognition tools prevent people from seeking asylum. Automated hiring tools are discriminating against people with disabilities. Algorithmic systems increasingly determine who gets a job, who can access loans, and who can rent a home. Yet even as these systems proliferate, it is difficult for individuals to know what data is being used, to correct inaccuracies, or to prove discrimination when it occurs. 

The availability of this data also leaves the door open to alarming government overreach. After the Supreme Court overturned Roe v. Wade, many states imposed laws that criminalize abortion — and in some cases, even prohibited actions to help abortion-seekers travel out of state. Suddenly, search histories, location data, period tracking app data, and other digital footprints could be used to prosecute people exercising a fundamental right.  

Most Americans have little recourse against this Big Brother-style surveillance and are forced to tread carefully around their devices and digital sources of information to shield themselves from the government’s gaze. Earlier this year, the Office of the Director of National Intelligence declassified a report with chilling conclusions: “Today, in a way that far fewer Americans seem to understand, and even fewer of them can avoid, [commercially available information] includes information on nearly everyone that is of a type and level of sensitivity that historically could have been obtained, if at all, only through targeted … collection.” Since data brokers and other sources offer such rich data for purchase, hard-won protections for Americans’ rights are increasingly meaningless. Instead of getting a warrant for search or surveillance, the government can simply buy the sensitive data it needs.  

Without a national legal standard around privacy, we’re left with an inadequate patchwork of narrower efforts. Although some states, like California, have strong, laudable regulations, other states’ rules are weak or nonexistent. While there are growing efforts to protect kids’ data, these proposals don’t solve the larger problems and may carry additional risks.  

Ultimately, everyone in the United States needs privacy protections. Without an overarching legal standard, our aims will fall short. 

The good news is that we already have well-developed proposals that would finally deliver baseline data protections, address many of the current harms related to artificial intelligence, and provide the scaffolding needed to further regulate AI. The American Data Privacy and Protection Act (ADPPA) would significantly limit how much information companies can collect and restrict how they could sell or use the information.  

This kind of approach has support among policymakers, who recognize the need for strong privacy protection with an eye toward AI. In its Blueprint for an AI Bill of Rights, the Biden administration calls data privacy a “foundational and cross-cutting principle required for achieving all others in this framework.” ADPPA’s significant bipartisan support in the previous Congress also demonstrated that privacy protection is not a one-party issue. While the legislation may not be perfect, and there are still thorny questions to resolve, ADPPA would be a major step forward for protecting Americans’ rights and preventing abuse. 

The bottom line is that concerns about novel AI-related issues shouldn’t derail nearly completed work to protect everyone’s rights. In fact, the rush to regulate AI provides the perfect opportunity to complete the decades-long effort to approve comprehensive baseline privacy legislation. 

We should take it. 

Laleh Ispahani is the executive director of Open Society-U.S. 
