How To Tackle AI Bias

Learn what artificial intelligence bias is and how companies can prevent it.

September 22, 2022

Although AI is undoubtedly revolutionizing the world as a versatile technology implemented in many important sectors, it can only act on the information used to train it. This opens the door to human bias. Here’s how we can de-bias AI, says Nigel Cannings, CTO of Intelligent Voice.

Artificial intelligence is revolutionizing the world as a dynamic tool across many important sectors. From medical and financial services to recruitment profiling, this versatile technology plays a leading role in our decision-making. 

It can’t, however, think like a human and can only act on the information used to train it. Ironically, this can also open the way for human bias to manifest itself negatively. So, what is the issue with bias in AI, and what can we do to combat the problem?

Fairness and Transparency

Aside from ease of use, one of the primary selling points of AI in sectors such as recruitment and HR was the perception that, as a “machine”, artificial intelligence is inherently free from bias. The thinking was that it would make it easier for businesses to achieve diversity goals while removing the risk of racism, sexism, and ageism from core human resources management systems. But all was not as it originally seemed. Systemic bias found its way into AI. Why? Because, like all tech, it was created and populated by humans. 

Consequently, it has become increasingly vital that debiasing is a watchword in the field, acted upon by everyone who values fairness and transparency, and treated as an essential step in the training process. Why? Because we know inputting the wrong data can produce harmful results, damaging not only businesses and organizations but also individuals in the workplace.

Taking a recruitment example, there have been multiple instances – even at embarrassingly high-profile companies such as Amazon – where algorithms were trained in ways that favored young white men while excluding women and people of color. This was not deliberate but an unfortunate artifact of the training data, which reflected an unconscious hiring policy that favored certain groups of people. This is literal systemic sexism and racism in action, showing how AI can inadvertently lead to discrimination through thoughtless training. And this is why there needs to be more focus on debiasing these systems.
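
To make this concrete, a common first check when auditing a hiring model is to compare its selection rates across demographic groups. Below is a minimal, illustrative Python sketch; the groups and decisions are entirely hypothetical stand-ins for real model output:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the fraction of candidates selected per demographic group.

    `decisions` is a list of (group, selected) pairs, where `selected`
    is True if the model recommended the candidate.
    """
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / totals[g] for g in totals}

# Hypothetical model outputs: (demographic group, selected?)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

rates = selection_rates(decisions)
print(rates)  # {'A': 0.666..., 'B': 0.333...}

# A common rule of thumb (the "four-fifths rule"): flag the model if any
# group's selection rate falls below 80% of the highest group's rate.
highest = max(rates.values())
flagged = {g: r for g, r in rates.items() if r < 0.8 * highest}
print("Groups below the four-fifths threshold:", flagged)
```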

However, we need to understand the basics of how AI systems make decisions before we can begin to solve the problem of bias.

The Basics of AI

Machines can now detect cancer in an X-ray that the most methodical of professionals might have missed. However, AI can only achieve this because a human has previously labeled hundreds of X-rays as having come from individuals who either have or do not have the disease.

Human involvement is key to the basics of AI. Professionals label and input the data that trains AI to sift through new information and make decisions. And often, the results are outstanding. The difficulty is finding ways to input that data without also inputting the potential for bias. 
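
As a rough illustration of that process, here is a minimal supervised-learning sketch in Python using scikit-learn. The synthetic numeric features stand in for X-ray images, and the labels stand in for human-assigned diagnoses; it is not a real medical pipeline:

```python
# A minimal sketch of supervised learning: synthetic feature vectors stand
# in for X-ray images, and the labels are the human-provided diagnoses.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training set: 200 "scans", each reduced to 5 numeric
# features, labeled 1 (disease present) or 0 (disease absent) by a human.
X_train = rng.normal(size=(200, 5))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# The model can now score a new, unlabeled scan...
new_scan = rng.normal(size=(1, 5))
print(model.predict_proba(new_scan))  # probability of each class

# ...but it has only learned whatever patterns the labeled data contained,
# including any bias baked into how those labels were assigned.
```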

Staying with the medical example, a recent discovery is that this same technology could identify the race of patients from an X-ray when professionals given the same data could not. While this capability may not be misused per se, it shows how discrimination can easily creep into AI models.

Everyday People

Let’s take another example with a crucial bearing on people’s everyday lives, involving the often controversial Universal Credit. The Department for Work and Pensions has stated its intention to use AI to assess who should and should not receive the benefit.

Although this is all in line with 21st-century technology, the department has yet to announce more detail on the proposed new system, which raises questions about this technology. What data will they use? Will people be heavily profiled? What kinds of personal information will they feed the software?

Where is the transparency, despite Freedom of Information requests from the media? Get this wrong, and lives could easily be ruined. 

Rights on Automated Decisions

It’s important to remember that the EU’s data protection regulation, GDPR, states that everyone has a right to have automated decisions explained, so a benefits system that cannot do this is both illegal and unethical.

‘The point is, any pre-existing biases from the creator will be trained into the software, possibly leading to catastrophic results. AI is, in essence, trained by human thought processes.’ 

See More: The Ethics of AI in HR: What Does It Take to Build an AI Ethics Framework?

How Do We Debias?

To train AI models, we use what’s termed “labeled data”: data linked to a specific category. This could be an image, for example, or, in a selection process, the CVs of successful and unsuccessful candidates. It is possible to use “semi-supervised” training to reduce the amount of human labeling, but this greatly increases the amount of data needed to show an AI model how the world works. And if the data is not carefully curated, this can quickly lead to unwelcome results.
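
For illustration, here is a minimal sketch of semi-supervised training using scikit-learn’s SelfTrainingClassifier, where unlabeled examples are marked with -1 and the model assigns labels to them itself. The “CV features” here are purely hypothetical:

```python
# A rough sketch of "semi-supervised" training with scikit-learn: a small
# set of human-labeled examples plus a larger pool of unlabeled data
# (marked -1), which the model labels for itself as training proceeds.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(1)

X = rng.normal(size=(300, 4))        # hypothetical CV feature vectors
y_true = (X[:, 0] > 0).astype(int)   # 1 = "successful candidate"

# Only the first 30 rows carry human labels; the rest are unlabeled (-1).
y = np.full(300, -1)
y[:30] = y_true[:30]

model = SelfTrainingClassifier(LogisticRegression()).fit(X, y)
print(model.predict(X[:5]))

# Self-labeling amplifies whatever patterns exist in the small curated seed
# set, which is why poorly curated data can quickly lead to unwelcome results.
```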

Recent advances in large language models such as GPT-3 and BlenderBot mean that machines can mimic human-like interactions. But because the model is more parrot than human, it can produce “speech” that is racist, homophobic, Islamophobic, anti-Semitic, or just plain wrong.

Explainable AI Is Key

To prevent this, we need “explainable AI”: systems that let us ask the software why it came to a certain decision and what it based its results on. “XAI” is a quickly growing area of research.

We can train models that “tell” us why they have reached a decision, or we can trick “black-box” models into revealing how they reached a conclusion. However, if we can’t explain why a system makes a decision, we can never work out how to correct the data it is trained on.
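
One widely used, model-agnostic way to probe a black-box model (a sketch of the general idea, not a specific XAI product) is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops. For example, with scikit-learn and hypothetical data:

```python
# A minimal, model-agnostic sketch of one explainability probe: permutation
# importance. Shuffling a feature and watching accuracy drop reveals how
# heavily a "black-box" model leans on it, without opening the box.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)

# Hypothetical data: feature 0 drives the label; features 1-3 are noise.
X = rng.normal(size=(500, 4))
y = (X[:, 0] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")

# If a proxy for a protected attribute shows high importance here, that is
# a red flag worth investigating before the model is deployed.
```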

When we use AI correctly, it can, without question, be one of the most incredible technologies of our time. But this can only happen if we maintain the debiasing approach.

How do you think companies can debias data at the point of collection, and how important is the demography of that data? Share your thoughts with us on Facebook, Twitter, and LinkedIn.


Nigel Cannings
Nigel Cannings is the CTO at Intelligent Voice. He has over 25 years’ experience in both law and technology, is the founder of Intelligent Voice Ltd, and is a pioneer in all things voice. Nigel is also a regular speaker at industry events, including those hosted by NVIDIA, IBM, and HPE, as well as AI financial summits.