A drawing of a woman looking at a computer with a warning message on the screen. Illustration by Xia Gordon for Vox and Capital B


AI automated discrimination. Here’s how to spot it.

The next generation of AI comes with a familiar bias problem.

A.W. Ohlheiser is a senior technology reporter at Vox, writing about the impact of technology on humans and society. They have also covered online culture and misinformation at the Washington Post, Slate, and the Columbia Journalism Review, among other places. They have an MA in religious studies and journalism from NYU.

Part of the discrimination issue of The Highlight. This story was produced in partnership with Capital B.

Say a computer and a human were pitted against each other in a battle for neutrality. Who do you think would win? Plenty of people would bet on the machine. But this is the wrong question.

Humans created computers, and they design and train the systems that make modern technology work. As these systems are created, the biases of their human creators are reflected in them. When people refer to artificial intelligence bias, this is, in essence, what they are talking about. Like human bias, AI bias, when translated into decisions or actions, becomes discrimination. Like many forms of discrimination, AI bias disproportionately impacts communities that historically or presently face oppression.

Facial recognition software has a long history of failing to recognize Black faces. Researchers and users have identified anti-Black biases in AI applications ranging from hiring to robots to loans. AI systems can help determine whether you find public housing or whether a landlord rents to you, and generative AI is being pitched as a cure for the paperwork onslaught that contributes to burnout among medical professionals.

As the capabilities of generative AI tools like ChatGPT and Google Bard enter the mainstream, the unfair preferences or prejudices that have long plagued AI have remained in place. The effect is all around us, in apps and software you encounter daily, from the automatic sorting of your social media feeds to the chatbots you use for customer service. AI bias also can creep into some of the biggest decisions companies might make about you: whether to hire you for a job, to lend you money for a home, or to cover the cost of your health care.

The terminology of this technology — AI, algorithms, large language models — can make examining its effects feel highly technical. In some ways, AI bias is a technical issue, one with no easy solution. Yet the questions at the heart of fighting AI bias require little specialized knowledge to understand: Why does bias creep into these systems? Who is harmed by AI bias? Who is responsible for addressing it and the harms it generates in practice? Can we trust AI to handle important tasks that affect human lives?

Here’s a guide to help you sort through these concerns, and figure out where we go from here.

What is AI? What’s an algorithm?

A lot of definitions of artificial intelligence rely on a comparison to human reasoning: AI, these definitions go, is advanced technology designed to replicate human intelligence and perform tasks that previously required human intervention. But really, AI is software that can learn, make decisions, complete tasks, and problem-solve.

AI learns how to do this from a data set, often referred to as its training data. An AI system trained to recognize faces would learn to do that on a data set composed of a bunch of photos. One that creates text would learn how to write from existing writing fed into the system. Most of the AI you’ve heard about in 2023 is generative AI, which is AI that can, from large data sets, learn how to make new content, like photos, audio clips, and text. Think the image generator DALL-E or the chatbot ChatGPT.

In order to work, AI needs algorithms, which are basically mathematical recipes: step-by-step instructions for a piece of software to follow in order to complete a task. In AI, algorithms provide the basis for how a program will learn and what it will do.
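To see what that recipe looks like in miniature, here's a deliberately tiny Python sketch. Everything in it is invented for illustration, and real systems are vastly more complicated, but the shape is the same: a learning step that extracts a pattern from example data, and a decision step that applies that pattern to new cases.

```python
# A toy illustration, not any real product: the "algorithm" here is just a
# recipe the software follows -- learn a cutoff from labeled examples, then
# apply that cutoff to new cases. All the numbers are made up.

# Hypothetical training data: (score, was_approved) pairs.
training_data = [(35, False), (48, False), (52, True), (61, True), (74, True)]

def learn_cutoff(examples):
    """Learning step: split the difference between the highest rejected
    score and the lowest approved score seen in the training data."""
    rejected = [score for score, approved in examples if not approved]
    approved = [score for score, approved in examples if approved]
    return (max(rejected) + min(approved)) / 2

def decide(score, cutoff):
    """Decision step: apply the learned rule to a new case."""
    return score >= cutoff

cutoff = learn_cutoff(training_data)
print(cutoff)               # 50.0 with the toy data above
print(decide(58, cutoff))   # True
```

The arithmetic isn't the point. The point is that whatever pattern sits in the training examples becomes the rule the system applies to everyone who comes after.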

Okay, so then what is AI bias, and how does it get into an AI system?

AI bias is like any other bias: It’s an unfair prejudice or practice present in or executed by the system. It disproportionately impacts some communities over others, and is creeping into more and more corners of daily life. People might encounter bias from a social media filter that doesn’t work properly on darker skin, or in test proctoring software that doesn’t account for the behavior of neurodivergent students. Biased AI systems might determine the care someone receives at the doctor or how they’re treated by the criminal justice system.

Bias finds its way into AI in a lot of ways. Broadly speaking, however, to understand what’s happening when an AI system is biased, you just need to know that AI is fundamentally trained to recognize patterns and complete tasks based on those patterns, according to Sasha Luccioni, a researcher on the Machine Learning Ethics and Society team at the open source AI startup Hugging Face. Because of this, she said, AI systems “will home in on the dominant patterns, whatever they are.”

Those dominant patterns might show up in the training data an AI system learns from, in the tasks it is asked to complete, and in the algorithms that power its learning process. Let’s start with the first of these.

AI-powered systems are trained on sets of existing data, like photos, videos, audio recordings, or text. This data can be skewed in an endless number of ways. For instance, facial recognition software needs photos to learn how to spot faces, but if the data set it’s trained on contains photographs that depict mostly white people, the system might not work as well on nonwhite faces. An AI-powered captioning program might not be able to accurately transcribe somebody speaking English with a slight foreign accent if that accent isn’t represented in the audio clips in its training database. AI can only learn from what it’s been given.
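Here's a small simulation of that mechanism, written in Python with scikit-learn. The groups, features, and thresholds are entirely synthetic and chosen only to make the effect visible; it's a sketch of how imbalance skews a model, not a real facial recognition or captioning system.

```python
# A minimal simulation (synthetic data only) of how an unbalanced training
# set can hurt accuracy for an underrepresented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, threshold):
    """Generate a group whose true decision boundary sits at `threshold`."""
    x = rng.uniform(0, 1, size=(n, 1))
    y = (x[:, 0] > threshold).astype(int)
    return x, y

# Training data: 950 examples from group A, only 50 from group B,
# and the two groups follow different underlying patterns.
xa_train, ya_train = make_group(950, threshold=0.5)
xb_train, yb_train = make_group(50, threshold=0.8)
model = LogisticRegression().fit(
    np.vstack([xa_train, xb_train]),
    np.concatenate([ya_train, yb_train]),
)

# Evaluate on fresh samples from each group.
xa_test, ya_test = make_group(1000, threshold=0.5)
xb_test, yb_test = make_group(1000, threshold=0.8)
print("accuracy on group A:", model.score(xa_test, ya_test))
print("accuracy on group B:", model.score(xb_test, yb_test))
# The model fits the dominant group's pattern, so group B sees far more errors.
```

Run it and the model scores well on the group that dominated its training data and noticeably worse on the group it barely saw, which is the basic dynamic behind the facial recognition and captioning failures above.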

The data set’s bias might itself merely be a reflection of larger systemic biases. As Karen Hao has explained in MIT Technology Review, unrepresentative training data leads AI systems to learn unrepresentative patterns. A system designed to automate a decision-making process, if it’s trained on historical data, may simply learn how to perpetuate the prejudices already represented in that history.

Even when an AI system’s creator tries to remove bias introduced by a data set, some methods of reducing bias can introduce problems of their own. Making an algorithm “blind” to an attribute like race or gender doesn’t mean that the AI won’t find other ways to introduce bias into its decision-making process — or even to infer the very attributes it was supposed to ignore, as the Brookings Institution explained in a 2019 report. For example, a system designed to assess job applications might be rendered “blind” to an applicant’s gender but learn to distinguish male-sounding and female-sounding names, or look for other indicators in a CV, like a degree from an all-women’s college, if the data set it’s trained on favors male applicants.
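Here's a rough sketch of how that proxy problem plays out, again with invented data and hypothetical column names. The model below never sees the gender column; it only sees years of experience and whether an applicant attended a women's college. But because the historical hiring decisions it learns from were biased against women, it ends up penalizing the proxy anyway.

```python
# A toy sketch (entirely synthetic data) of "fairness through blindness"
# failing: the protected attribute is dropped, but a correlated proxy column
# remains and the model learns to penalize it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

is_woman = rng.integers(0, 2, size=n)                     # protected attribute
experience = rng.uniform(0, 10, size=n)                   # legitimate signal
womens_college = (is_woman == 1) & (rng.random(n) < 0.3)  # proxy for gender

# Simulated *historical* hiring decisions that were biased against women.
hired = (experience - 3 * is_woman + rng.normal(0, 1, size=n)) > 4

# The model never sees `is_woman` -- only experience and the proxy column.
features = np.column_stack([experience, womens_college.astype(int)])
model = LogisticRegression().fit(features, hired.astype(int))

print(dict(zip(["experience", "womens_college"], model.coef_[0].round(2))))
# Expect a positive weight on experience and a negative weight on the proxy:
# the "blind" model has rediscovered the bias baked into its training labels.
```

Dropping the sensitive column didn't remove the bias; it just pushed the bias into whatever correlated signal was left behind.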

Have I encountered AI bias?

Probably, yes.

For many Americans, AI-powered algorithms are already part of their daily routines, from recommendation algorithms driving their online shopping to the posts they see on social media. Vincent Conitzer, a professor of computer science at Carnegie Mellon University, notes that the rise of chatbots like ChatGPT provides more opportunities for these algorithms to produce bias. Meanwhile, companies like Google and Microsoft are looking to generative AI to power the search engines of the future, where users will be able to ask conversational questions and get clear, simple answers.

“One use of chat might be, ‘Okay, well, I’m going to visit this city. What are some of the sites that I should see? What is a good neighborhood to stay in?’ That could have real business implications for real people,” Conitzer said.

Although generative AI is just beginning to show up in quotidian technologies, conversational search is already a part of many people’s lives. Voice-activated assistants have shifted our relationship to searching for information and staying organized, making routine tasks — compiling a grocery list, setting a timer, or managing a schedule — as simple as talking. The assistant will do the rest of the work. But tools like Siri, Alexa, and Google Assistant come with well-established biases.

Speech recognition technologies have an established history of failing in certain scenarios. They might not recognize requests from people who do not speak English as a first language, or they may fail to properly understand Black speakers. While some people may choose to avoid these problems by not using these technologies, these failures can be particularly devastating for those with disabilities who may rely on voice-activated technologies.

This form of bias is creeping into generative AI, too. One recent study of tools meant to detect the use of ChatGPT in any given writing sample found that these detectors might falsely and unfairly flag writing done by non-native English speakers as AI-generated. Right now, ChatGPT feels like a novelty to many of its users. But as companies rush to incorporate generative AI into their products, Conitzer said, “these techniques will increasingly be integrated into products in various ways that have real consequences on people.”

Who is hurt most by AI bias?

For a stark glimpse of how AI bias impacts human lives, we can look at the criminal justice system. Courts have used algorithms with biases against Black people to create risk scores meant to predict how likely an individual is to commit another crime. These scores influence sentences and prisoners’ ability to get parole. Police departments have even incorporated facial recognition, along with the technology’s well-documented biases, into their daily policing.

An algorithm designed to do a risk assessment on whether an arrestee should be detained would use data derived from the US criminal justice system. That data would include wrongful convictions, and it would fail to capture people who commit crimes but are never caught, according to Conitzer.

“Some communities are policed far more heavily than other communities. It’s going to look like the other community isn’t committing a whole lot of crimes, but that might just be a consequence of the fact that they’re not policed as heavily,” Conitzer explained. An algorithm trained on this data would pick up on these biases within the criminal justice system, recognize it as a pattern, and produce biased decisions based on that data.
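A back-of-the-envelope simulation makes Conitzer's point visible. In the sketch below, every number is invented: two communities offend at exactly the same rate, but one is policed far more heavily, so the arrest records a risk tool would train on tell a very different story.

```python
# All figures here are invented for illustration. Two communities with the
# same underlying offense rate, but very different levels of policing,
# produce very different arrest data -- the data a risk tool would learn from.
import numpy as np

rng = np.random.default_rng(0)
population = 10_000
true_offense_rate = 0.05  # identical in both communities

# Probability that an offense actually leads to an arrest (i.e., enters the data).
policing_intensity = {"community_a": 0.60, "community_b": 0.10}

for community, detection_rate in policing_intensity.items():
    offenses = rng.binomial(1, true_offense_rate, size=population)
    arrests = offenses * rng.binomial(1, detection_rate, size=population)
    print(
        community,
        "| true offense rate:", round(offenses.mean(), 3),
        "| arrest-based 'risk':", round(arrests.mean(), 4),
    )
# Same underlying behavior, very different "risk" in the records: the pattern
# a model learns is the pattern of policing, not the pattern of crime.
```

An algorithm trained on those records would confidently rank one community as riskier than the other, even though the only real difference in the simulation is how closely each one is watched.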

AI bias is hardly limited to one institution. At the start of the Covid-19 pandemic, schools came to rely on anti-cheating software to monitor virtual test takers. That type of software often uses video analysis and facial recognition to watch for specific behaviors it’s been trained to see as potential signs of cheating. Students soon found that virtual proctoring software, intended to enforce academic fairness, didn’t work equally well for all students. Some popular proctoring programs failed to detect Black faces and penalized students who couldn’t find a stable internet connection and a quiet, private space at home for test-taking. Proctoring software was particularly biased against students with a wide range of disabilities, and it spiked the anxiety of test takers with certain mental health conditions.

As the Center for Democracy and Technology has noted, proctoring software could incorrectly flag students requiring a screen reader, those with visual impairment or other disabilities that might cause irregular eye movements, and neurodivergent students who might pace or fidget while taking a test. Some proctoring services do not allow for bathroom breaks.

This sounds pretty bad! Is there any solution?

The good news is that lots of people are talking about AI bias and thinking about how to reduce it. However, not everyone agrees on how to fix this increasingly pressing issue.

Sam Altman, a co-founder and the CEO of OpenAI, recently told Rest of World that he believes these systems will eventually be able to fix themselves: “I’m optimistic that we will get to a world where these models can be a force to reduce bias in society, not reinforce it,” he said. “Even though the early systems before people figured out these techniques certainly reinforced bias, I think we can now explain that we want a model to be unbiased, and it’s pretty good at that.”

Altman’s solution essentially asks the world to trust the technology to fix itself, in a process driven by the people who created it. For a lot of AI and ethics experts, that’s not enough.

Luccioni, the Hugging Face ethics researcher, used the example of generative AI tools that are supposed to speed up medical paperwork, arguing that we should be questioning whether AI belongs in this space at all. “Say that ChatGPT writes down the wrong prescription, and someone dies,” she said. Note-taking may not take a decade of education to master, but assuming you can simply swap out a medical doctor for an AI bot to speed up the paperwork process removes vital oversight from the equation.

An even deeper problem, Luccioni notes, is that there are no mechanisms for accountability when an AI tool integrated into vital care makes mistakes. Companies promising to replace or work in tandem with highly specialized professionals do not need to seek any sort of certification before, for instance, launching a bot that’s supposed to serve as a virtual therapist.

Timnit Gebru, a computer scientist with deep expertise in AI bias and the founder of the Distributed AI Research Institute, argued recently that the companies behind the push to incorporate AI into more and more aspects of our lives have already proven that they do not deserve this trust. “Unless there is external pressure to do something different, companies are not just going to self-regulate. We need regulation and we need something better than just a profit motive,” she told the Guardian.

Conitzer says the problem of AI bias requires auditing and transparency in AI systems, particularly those tasked with important decisions. Presently, many of these systems are proprietary or otherwise unavailable for scrutiny from the public. As the novelty of generative AI tools like ChatGPT fuels a rush to incorporate new systems into more and more of our lives, understanding how to identify AI bias is the first step toward systemic change.

Abby Ohlheiser is a freelance reporter and editor who writes about technology, religion, and culture.
