
Why are Artificial Intelligence systems biased?



A machine-learned AI system used to assess recidivism risks in Broward County, Fla., often gave higher risk scores to African Americans than to whites, even when the white defendants had more serious criminal records. The popular sentence-completion facility in Google Mail was caught assuming that an “investor” must be male.

A celebrated natural language generator called GPT, with an uncanny ability to write polished-looking essays for any prompt, produced seemingly racist and sexist completions when given prompts about minorities. Amazon found, to its consternation, that an automated AI-based hiring system it built didn’t seem to like female candidates.

Commercial gender-recognition systems put out by industry heavyweights, including Amazon, IBM and Microsoft, have been shown to suffer from high misrecognition rates for people of color. Another commercial face-recognition technology that Amazon tried to sell to government agencies has been shown to have significantly higher error rates for minorities. And a popular selfie lens by Snapchat appears to “whiten” people’s faces, apparently to make them more attractive.

These are not just academic curiosities. Broward County’s recidivism system, while supposedly only one of several factors judges were to consider, was shown to have a substantial impact. Just recently, we learned of the first known false arrest of an African American based largely on a facial-recognition system.

Even as these embedded biases are discovered, new ones come up.

Perhaps the most egregious is what may be called “mugshot AI”: systems that claim to unearth useful patterns from physiognomic characteristics. From phrenology to palmistry, pseudosciences that claim to read personality and mental states from physical characteristics are nothing new. AI’s newfound ability to process, recognize or find patterns in large-scale physiognomic data has, however, given these dubious undertakings a new lease on life. Various companies claim to discern personality characteristics, including criminality, from mugshots, or to speed up recruitment by analyzing job candidates from online video interviews. Indeed, given enough data, there is a tremendous temptation to look for some arbitrary correlational mapping from one high-dimensional object — a person’s face, voice, posture — to another critical decision variable.

Of course, bias existed before the advent of AI systems. Human decision-makers, from law enforcement to employment agencies, have been known to act on internal biases. One saving grace is that there is variance in individual human biases, which works to reduce their macro harm; not all humans have the same difficulty in distinguishing between nonwhite faces, for example.

Yet, bias internalized in a widely deployed AI system can be insidious — precisely because a single set of biases becomes institutionalized, with little variance.

The situation is further exacerbated by our well-known automation bias, which makes us subconsciously give greater weight to machine decisions. 

Reining in this inadvertent amplification of societal biases has thus become one of the most urgent tasks in managing the risks of data-driven AI technology.

So why do AI systems exhibit racist or sexist biases? Are people in commercial AI labs deliberately writing biased algorithms or training systems on deliberately biased data? It turns out that the offending behavior is most often learned rather than designed, and that most of these systems have been trained on readily available public data, often gleaned from the web.

A critical catalyst for the recent successes of AI has been the automatically captured digital footprints of our lives and quotidian interactions. This allowed image-recognition systems to be trained on troves of pictures (often with labels) that we collectively upload onto the web, and natural language systems to be trained on the enormous body of language captured on the web — from Reddit to Wikipedia — through our daily interactions.

Indeed, the web and internet have become a repository of our Jungian collective unconscious — and a convenient way to train AI systems. A problem with this collective unconscious is that it is often raw, unwashed and rife with prejudices; an AI system trained on it, not surprisingly, winds up learning these prejudices and, when deployed at scale, can unwittingly exacerbate existing biases.

In other words, although it is no longer socially acceptable to admit to racist or sexist views, such views — and their consequences — often are still implicit in our collective behavior and captured in our digital footprints. Modern data-driven AI systems can unwittingly learn these biases, even if we didn’t quite intend them to.

AI systems trained on such biased data are used not only in predictive decision-making (policing, job interviews, etc.), but also to generate or complete under-specified data (e.g., to improve a low-resolution picture by upsampling it). This generation phase can itself become a vehicle for further propagation of biases. It shouldn’t come as a surprise, for example, that a system trained on images of engineering faculty members will more readily imagine a male face than a female one. The fact that machine-learning systems are limited to capturing correlational patterns in the data — and that some correlations result from ingrained, inequitable societal structures — means societal biases can seep in despite well-intentioned design.
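To make the correlational point concrete, here is a toy sketch in Python (entirely hypothetical counts, not any deployed system): a “model” that completes a missing attribute by picking whatever value co-occurred most often in skewed training data will simply reproduce that skew.

```python
from collections import Counter

# Toy, hypothetical data: 80 "male" engineers for every 20 "female" ones.
training_pairs = [("engineer", "male")] * 80 + [("engineer", "female")] * 20

def complete_attribute(occupation, pairs):
    """Fill in a missing attribute with the value seen most often alongside
    this occupation in the training data."""
    counts = Counter(attr for occ, attr in pairs if occ == occupation)
    return counts.most_common(1)[0][0]

# The completion mirrors the imbalance in the data, not any ground truth.
print(complete_attribute("engineer", training_pairs))  # -> "male"
```

Nothing in the data tells the model that the imbalance is a historical artifact rather than a rule worth reproducing.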

Increasingly, designers are combating societal biases in AI systems. First and foremost is curating the training data. Unlike traditional disciplines such as statistics, which pay significant attention to data-collection strategies, progress in AI has come mostly from exploiting the copious data available on the web. Unfortunately, that readily available data is often asymmetric and lacks sufficient diversity — Western societies, for example, tend to have a larger digital footprint than others. Such asymmetries, in turn, lead to the kinds of asymmetric failures observed in gender-detection systems. Some obvious ideas for curation, such as “blinding” the learning system to sensitive attributes like gender and race, have been shown to be of limited effectiveness.
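As a rough illustration of why blinding alone falls short, consider the following sketch, which uses synthetic data, made-up feature names and the NumPy and scikit-learn libraries: even with the sensitive attribute withheld, a correlated proxy feature lets the model reconstruct the biased pattern.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                       # sensitive attribute (never shown to the model)
neighborhood = (group + (rng.random(n) < 0.1)) % 2  # proxy that tracks group about 90% of the time
skill = rng.random(n)                               # a legitimate feature
# Historical labels encode a bias against group 1 on top of the real signal.
label = ((skill - 0.3 * group + 0.1 * rng.standard_normal(n)) > 0.35).astype(int)

# "Blind" model: trained only on the proxy and the legitimate feature.
X = np.column_stack([neighborhood, skill])
pred = LogisticRegression().fit(X, label).predict(X)

# Positive-prediction rates still differ sharply by group, despite the blinding.
print("group 0 positive rate:", pred[group == 0].mean())
print("group 1 positive rate:", pred[group == 1].mean())
```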

The other problem with readily available data is that it often carries hidden biases. For example, as tempting as it is to train large-scale language-generation systems on the profusion of text on the web, much of that user-generated text, especially on forums that allow anonymous postings, is laced with prejudice. This explains, to a large extent, the biased text completions observed in some state-of-the-art language-generation systems. There is a growing understanding that training data must be carefully vetted. Such vetting may increase costs and severely reduce the amount of data available for training. Nevertheless, given the insidious societal costs of using uncurated data, we must be ready to bear those costs.

Some also have advocated explicitly “de-biasing” the data (e.g., by balancing the classes in the training samples). While a tempting solution, such steps in essence correspond to a form of social engineering — in this case, of data. If there is social engineering to be done, it seems much better for society to do it at the societal level, rather than just by AI developers.
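For concreteness, one simple form the balancing idea can take is oversampling under-represented groups before training. The sketch below is a generic illustration with hypothetical NumPy arrays, not anyone’s specific method, and the caveat above about engineering the data applies to it as well.

```python
import numpy as np

def balance_by_group(X, y, group, seed=0):
    """Resample rows (with replacement) so every group appears as often as the
    largest one. X, y and group are assumed to be NumPy arrays of equal length."""
    rng = np.random.default_rng(seed)
    groups, counts = np.unique(group, return_counts=True)
    target = counts.max()
    keep = np.concatenate([
        rng.choice(np.flatnonzero(group == g), size=target, replace=True)
        for g in groups
    ])
    return X[keep], y[keep], group[keep]
```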

Part of the challenge in controlling harmful societal biases in today’s AI systems is that most of them are largely data-driven and typically do not take any explicit knowledge as input. Given that explicit knowledge is often the most natural way to state societal norms and mores, there are efforts to infuse such knowledge into data-driven predictive systems.
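What such an infusion might look like in the simplest case: the sketch below (not any particular published technique) adds an explicitly stated norm, namely that average predicted scores should not differ across groups, as a penalty term on top of an otherwise standard training loss; the weight lam is a hypothetical knob controlling how strongly the norm is enforced.

```python
import numpy as np

def loss_with_parity_penalty(y_true, y_prob, group, lam=1.0):
    """Cross-entropy plus a penalty for violating an explicit parity norm.
    y_true: 0/1 labels; y_prob: predicted probabilities; group: 0/1 indicator."""
    eps = 1e-9
    # The data-driven part: how well do the predictions fit the labels?
    cross_entropy = -np.mean(
        y_true * np.log(y_prob + eps) + (1 - y_true) * np.log(1 - y_prob + eps)
    )
    # The explicit norm: average scores should be similar across groups.
    parity_gap = abs(y_prob[group == 1].mean() - y_prob[group == 0].mean())
    return cross_entropy + lam * parity_gap
```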

Another proactive step is to look more carefully at what learning systems are being asked to optimize. Most systems focus on optimizing the overall accuracy of a predictive system. It is possible, however, for a system with high overall accuracy to still perform badly on certain minority classes. More generally, there is increasing recognition that the degree of egregiousness of misclassifications must be considered — after all, confusing apples with oranges is less egregious than confusing humans with animals. The prohibitive costs of false positives in some applications (e.g., face recognition in predictive policing) might caution a civilized society that, in some cases, predictive systems based on correlational patterns should be avoided despite their seemingly high accuracy.
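To illustrate the evaluation point, here is a small sketch with hypothetical labels and group indicators showing the two audits the paragraph suggests: breaking accuracy out by group, and pricing different kinds of mistakes differently rather than counting them all equally.

```python
import numpy as np

def group_accuracies(y_true, y_pred, group):
    """Overall accuracy plus accuracy broken out by group; a high overall
    number can hide a much lower number for one group."""
    overall = np.mean(y_true == y_pred)
    per_group = {g: np.mean(y_pred[group == g] == y_true[group == g])
                 for g in np.unique(group)}
    return overall, per_group

def cost_weighted_error(y_true, y_pred, cost_matrix):
    """Average cost of mistakes, where cost_matrix[true][pred] prices each
    kind of error instead of treating them all alike."""
    return float(np.mean([cost_matrix[t][p] for t, p in zip(y_true, y_pred)]))

# Example pricing: a false positive (true 0, predicted 1) costs ten times a
# false negative -- roughly the asymmetry one might want in predictive policing.
costs = [[0, 10],
         [1, 0]]
```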

As AI technology matures and becomes widely deployed, there is increased awareness — in the research community, companies, governments and society — of the importance of considering its impacts. There are now premier academic conferences devoted to scholarly understanding of the impact of AI technology in exacerbating societal biases; increasingly, AI publications ask for an explicit discussion of the broader impacts of technical work. Alerted by ongoing research, companies such as IBM, Amazon and Microsoft are declaring moratoriums on the sale of technologies such as face recognition, pending greater understanding of their impacts. Several U.S. cities have banned or suspended facial-recognition technology in policing.

There is, of course, no magic bullet for removing societal bias from AI systems. The only way to make sure fair learning can happen from the digital traces of our lives is to actually lead fair lives, however tall an order that might be.

But we also should acknowledge that these systems, rightly used, can hold a mirror up to society. Just as television brought racial injustices into our living rooms during the 1960s’ civil rights movement and helped change us for the better, AI systems based on our digital footprints can help show us ourselves and, thus, be a force for our betterment.

Subbarao Kambhampati, PhD, is a professor of computer science at Arizona State University and chief AI officer for AI Foundation, which focuses on the responsible development of AI technologies. He was president of the Association for the Advancement of Artificial Intelligence and helped start the Conference on AI, Ethics and Society. He was also a founding board member of Partnership on AI. He can be followed on Twitter @rao2z.

