The views expressed by contributors are their own and not the view of The Hill

AI is supercharging child surveillance and the school-to-prison pipeline

In this July 10, 2018, file photo, a camera with facial recognition capabilities hangs from a wall while being installed at Lockport High School in Lockport, N.Y. New York banned the use of facial recognition technology in schools Wednesday, Sept. 27, 2023, following a report that concluded the risks to student privacy and civil rights outweighed potential security benefits. (AP Photo/Carolyn Thompson, File)

Last month, the White House announced its highly anticipated executive order on artificial intelligence. Unfortunately, the Biden administration missed another critical opportunity to address the AI civil rights crisis unfolding across Black communities nationwide. 

With each passing day, it becomes clear that these dangerous technologies are drawing new color lines for the 21st century, threatening to relegate yet another generation of Black people to second-class citizenship. Nowhere is this more evident than in the use of algorithmic technologies in America’s public schools. 

Controversial, data-driven technologies are showing up in public schools nationwide at alarming rates. AI-enabled systems such as facial recognition, predictive policing, geolocation tracking, student device monitoring and even aerial drones are commonplace in public schools. 

For example, a recent national survey of educators found that over 88 percent of schools use student device monitoring, 33 percent use facial recognition and 38 percent share student data with law enforcement. Many of these tools are designed for military use and routinely used by authoritarian regimes to repress ethnic minorities — making their use in schools all the more frightening.

The harms of these technologies are not evenly shared. Research shows that these tools disproportionately affect Black youth, youth with disabilities, immigrant youth, LGBTQ youth and youth in low-income communities. For example, researchers at Johns Hopkins University found that schools with large surveillance infrastructure suspend students at higher rates, leading to worse academic outcomes for Black students.

Students, parents and teachers are concerned by these developments. Black families are especially concerned about how these technologies may be used to expand police presence in schools. Their concerns are supported by research demonstrating that students’ digital footprints are increasingly used to disproportionately discipline, expel and even arrest Black schoolchildren — effectively opening a new digital frontier in the longstanding school-to-prison pipeline.

As a civil rights attorney who investigates how new technologies violate the rights of communities of color, I’ve seen these challenges play out in stunning ways. 

For the last three years, I’ve worked in coalition with advocates to end a secret predictive policing program used by a Florida sheriff’s office against vulnerable schoolchildren. Through litigation and open records requests, we obtained copies of the sheriff’s secret youth database which contained the names of up to 18,000 children each academic year. 

The office built this database using an algorithm that assessed confidential student records — including grades, attendance records and histories of child abuse — to identify students believed to be at the greatest risk of falling into “a life of crime.” Public records also showed that school-based police officers were instructed to surveil these children in school and obtain “actionable intelligence” for criminal investigations. Local reporting revealed that police were directed to target these children and their families with such intensity that they would feel pressure to either “move or sue.” 

To be sure, the experiences of this community reflect an especially egregious example of technology-driven rights abuses in schools. However, what used to be an outlier is quickly becoming the norm.

For example, in Philadelphia, local leaders are planning to launch a “school safety” drone surveillance program to monitor “high crime areas” near schools. In some instances, schoolchildren will be trained to operate the new aerial surveillance system.

States like Wisconsin are using a dropout prevention algorithm that explicitly treats a student’s race as a risk factor despite widespread inaccuracies and encoded bias.

In Georgia, the state National Guard is using geofencing technologies to target select schools for military recruitment. Meanwhile, cities like New York City, Chicago, Washington, D.C. and Los Angeles have used a suite of policing technologies to build massive “gang databases” that almost exclusively target Black and Hispanic children.

And states like Florida are rolling out sophisticated social media surveillance and content moderation technologies. These tools could open the door for censoring classroom conversations on race, gender and social inequality — denying students the freedom to learn.

Despite the prolific use of these technologies, there are still ways to undo digital authoritarianism in America’s public schools.

The clearest solution is a ban on using federal funds for schools to purchase these technologies in the first place. Federal funding is a driving force behind the widespread adoption of these technologies. Eliminating federal funds as a revenue source for school districts to procure these systems will go a long way in addressing this challenge.

The Biden administration can follow the lead of New York State, which recently issued a statewide ban on the use of facial recognition in its public schools. Federal agencies like the United States Department of Education have preexisting legal authority in federal civil rights and student privacy laws to develop a federal ban on technologies that violate students’ rights.

To be sure, there are constructive roles for technology in schools. However, most reasonable minds would agree that fueling racial injustice is not one of them. Policymakers must address the growing threat these technologies present to the freedoms and rights that protect our children and youth.

Clarence Okoh is a civil rights attorney and Just Tech fellow at the Center for Law and Social Policy in Washington, D.C. He is a Public Voices Fellow on Technology in the Public Interest with The OpEd Project in partnership with The MacArthur Foundation.


Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
