AI Weekly: The intractable challenge of bias in AI

Last week, Twitter shared research showing that the platform’s algorithms amplify tweets from right-of-center politicians and news outlets at the expense of left-leaning sources. Rumman Chowdhury, the head of Twitter’s machine learning, ethics, transparency, and accountability team, said in an interview with Protocol that while some of the behavior could be user-driven, the reason for the bias isn’t entirely clear.

“We can see that it is happening. We are not entirely sure why it is happening,” Chowdhury said. “When algorithms get put out into the world, what happens when people interact with it — we can’t model for that. We can’t model for how individuals or groups of people will use Twitter, what will happen in the world in a way that will impact how people use Twitter.”

Twitter’s forthcoming root-cause analysis will likely turn up some of the origins of its recommendation algorithms’ rightward tilt. But Chowdhury’s frank disclosure highlights the unknowns about biases in AI models and how they occur — and whether it’s possible to mitigate them.

The challenge of biased models

The past several years have established that bias mitigation techniques aren’t a panacea when it comes to ensuring fair predictions from AI models. Applying algorithmic solutions to social problems can magnify biases against marginalized peoples, and undersampling populations results in worse predictive accuracy for the groups left underrepresented. For example, even leading language models like OpenAI’s GPT-3 exhibit toxic and discriminatory behavior, usually traceable back to the dataset creation process. When trained on biased datasets, models acquire and exacerbate biases, like flagging text by Black authors as more toxic than text by white authors.

Bias in AI doesn’t arise from datasets alone. Problem formulation, or the way researchers fit tasks to AI techniques, can also contribute. So can other human-led steps throughout the AI deployment pipeline.

A recent study from Cornell and Brown University investigated the problems around model selection, the process by which engineers choose which machine learning models to deploy after training and validation. The paper notes that while researchers may report average performance across a small number of models, they often publish results using a specific set of variables that can obscure a model’s true performance. This presents a challenge because other model properties can change during training, and seemingly minute differences in accuracy between demographic groups can compound into large disparities once a model is deployed at scale, harming fairness for the affected populations.
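
To make that concern concrete, here is a minimal sketch, using invented labels and group assignments rather than anything from the Cornell and Brown paper, of how two candidate models can post identical overall accuracy while one concentrates its errors in a single group:

```python
# Toy illustration only: the labels, groups, and "models" below are invented.
import numpy as np

def group_accuracies(y_true, y_pred, groups):
    """Return overall accuracy plus accuracy broken down by group."""
    overall = float(np.mean(y_true == y_pred))
    per_group = {
        g: float(np.mean(y_true[groups == g] == y_pred[groups == g]))
        for g in np.unique(groups)
    }
    return overall, per_group

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])            # ground-truth labels
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Two hypothetical candidates with the same overall accuracy (0.75)
model_1 = np.array([1, 0, 1, 0, 0, 0, 1, 1])            # errors spread across groups
model_2 = np.array([1, 0, 1, 1, 0, 1, 0, 0])            # errors concentrated in group B

for name, preds in [("model_1", model_1), ("model_2", model_2)]:
    overall, per_group = group_accuracies(y_true, preds, groups)
    print(name, "overall:", overall, "per group:", per_group)
```

Selecting on the overall number alone would treat the two candidates as interchangeable, even though the second performs markedly worse for group B.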

The study’s coauthors highlight a case study in which test subjects were asked to choose a “fair” skin cancer detection model based on metrics they identified. Overwhelmingly, the subjects selected the model with the highest accuracy, even though it exhibited the largest gender disparity. This is problematic on its face, the researchers assert, because the accuracy metric doesn’t provide a breakdown of false negatives (missing a cancer diagnosis) and false positives (mistakenly diagnosing cancer when it’s not actually present). Including these metrics could have led the subjects to make different choices about which model was “best.”
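
As a hedged illustration of why that breakdown matters, the sketch below computes false negative and false positive rates separately for two groups; the predictions and the gender split are invented toy data, not figures from the case study:

```python
# Toy data only; the point is the per-group error breakdown, not the numbers.
import numpy as np

def error_rates(y_true, y_pred):
    """False negative rate (missed cancers) and false positive rate (false alarms)."""
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fnr = fn / max(np.sum(y_true == 1), 1)
    fpr = fp / max(np.sum(y_true == 0), 1)
    return float(fnr), float(fpr)

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0])   # 1 = cancer present
y_pred = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0])   # model's diagnoses
gender = np.array(["F"] * 6 + ["M"] * 6)

for g in ("F", "M"):
    mask = gender == g
    fnr, fpr = error_rates(y_true[mask], y_pred[mask])
    print(g, "false negative rate:", round(fnr, 2), "false positive rate:", round(fpr, 2))
```

A single accuracy figure would hide the fact that, in this toy example, the missed diagnoses fall almost entirely on one group.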

Architectural differences between algorithms can also contribute to biased outcomes. In a paper accepted to the 2020 NeurIPS conference, Google and Stanford researchers explored the bias exhibited by certain kinds of computer vision algorithms — convolutional neural networks (CNNs) — trained on the open source ImageNet dataset. Their work indicates that CNNs’ bias toward textures may come not from differences in their internal workings but from differences in the data they see: CNNs tend to classify objects according to material (e.g., “checkered”), while humans tend to classify them according to shape (e.g., “circle”).

Given the various factors involved, it’s not surprising that 65% of execs can’t explain how their company’s models make decisions.

While challenges in identifying and eliminating bias in AI are likely to remain, particularly as research uncovers flaws in bias mitigation techniques, there are preventative steps that can be taken. For instance, a study from a team at Columbia University found that diversity in data science teams is key to reducing algorithmic bias. The team found that, while individuals are more or less equally biased regardless of race, gender, and ethnicity, male engineers are more likely to make the same prediction errors as one another. This indicates that the more homogeneous a team is, the more likely it is that a given prediction error will appear twice.
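
A rough, purely hypothetical simulation of that last point (not the Columbia team’s methodology): if engineers from the same background share a systematic bias, two of them are more likely to make the identical error than two engineers from different backgrounds.

```python
# Hypothetical toy model: engineers from the same group share an error direction.
import random

random.seed(0)

GROUP_BIAS = {"group_a": 1, "group_b": -1}  # invented shared error directions

def engineer_error(group):
    """An engineer's error direction: usually the group's shared bias, sometimes not."""
    return GROUP_BIAS[group] if random.random() < 0.7 else -GROUP_BIAS[group]

def repeated_error_rate(team, trials=10_000):
    """How often every member of the team makes the identical error."""
    repeats = sum(
        1 for _ in range(trials)
        if len({engineer_error(g) for g in team}) == 1
    )
    return repeats / trials

print("homogeneous team:", repeated_error_rate(["group_a", "group_a"]))
print("mixed team:      ", repeated_error_rate(["group_a", "group_b"]))
```

Under these made-up numbers, the homogeneous pair repeats the same error noticeably more often than the mixed pair.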

“Questions about algorithmic bias are often framed as theoretical computer science problems. However, productionized algorithms are developed by humans, working inside organizations, who are subject to training, persuasion, culture, incentives, and implementation frictions,” the researchers wrote in their paper.

In light of other studies suggesting that the AI industry is built on geographic and social inequalities; that dataset preparation for AI research is highly inconsistent; and that few major AI researchers discuss the potential negative impacts of their work in published papers, a thoughtful approach to AI deployment is becoming increasingly critical. A failure to implement models responsibly can lead, and already has led, to uneven health outcomes, unjust criminal sentencing, muzzled speech, housing and lending discrimination, and even disenfranchisement. Such harms are only likely to become more common if flawed algorithms proliferate.

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers
