Government study finds racial, gender bias in facial recognition software


Many facial recognition technology systems misidentify people of color at a higher rate than white people, according to a federal study released Thursday.

The research from the National Institute of Standards and Technology (NIST), a federal agency within the Department of Commerce, comes amid pushback from lawmakers and civil rights groups against the software, which scans faces to quickly identify individuals.

After reviewing 189 facial recognition algorithms from 99 developers, which NIST identified as a majority of the industry, the researchers found that in one-to-one matching, which is normally used for verification, Asian and African American people were up to 100 times more likely to be misidentified than white men.

In one-to-many matching, used by law enforcement to identify people of interest, faces of African American women returned more false positives than other groups.

“In a one-to-one search, a false negative might be merely an inconvenience — you can’t get into your phone, but the issue can usually be remediated by a second attempt,” Patrick Grother, a NIST computer scientist and the report’s primary author, said in a statement.

“But a false positive in a one-to-many search puts an incorrect match on a list of candidates that warrant further scrutiny.”

Grother concluded that NIST found “empirical evidence” that the majority of facial recognition systems have “demographic differentials” that can worsen their accuracy based on a person’s age, gender or race.

The federal investigators stressed that accuracy varied widely among the different algorithms, noting that some produced very few errors.

They also found that software developed in Asian countries tended to perform better on Asian faces. Although the research did not focus on a causal link, that result suggests that more diverse databases of individuals could yield better results for facial recognition.

“These results are an encouraging sign that more diverse training data may produce more equitable outcomes, should it be possible for developers to use such data,” Grother said.

The study’s results could significantly change the course of the nascent technology, which has been receiving increasing scrutiny.

Several other studies have found similar bias in the technology, and some cities have banned its use by law enforcement.

Lawmakers have introduced bills to limit its use by police or in public housing, but as of now, there is no federal law dictating when, how, where or why facial recognition technology can be used.

