Forging a Future With Ethical AI

As algorithms become more prevalent and more advanced, what ethical issues should companies using AI be aware of? Simon Tanné, Head of Data Science at Echobox, examines bias as a key ethical challenge in AI today and in the future.

Last Updated: September 19, 2022

As AI becomes more intertwined with our daily lives, the ethical questions facing companies and individuals have become more complex. Businesses now recognize the importance of ethical AI, and the reputational damage that can stem from being associated with a prejudiced algorithm or unethical outputs is driving change. A decade ago, AI ethics was often an afterthought, considered only in the most obvious cases of harmful output. Today, ethics is increasingly considered early in the AI project lifecycle and incorporated during the requirements-gathering process.

Bias: a perennial challenge in AI 

A few key ethical issues have been present since the early days of AI and continue to be important in a business context as technology evolves. The first is bias.

To fully understand the problem of bias, let’s start at the beginning of an algorithm’s lifecycle. An algorithm – a set of instructions and logical rules executed to achieve an outcome – is essentially the building block of AI. One of the first stages of creating an algorithm is gathering the data on which to train the model, and a key challenge is making that data robust. In many cases, priority goes to the quantity of training data over its quality or representativeness – in terms of both the content itself being representative and it coming from a diverse set of sources. An algorithm may be given content from the internet or other public sources as training data, and the quality of web content cannot always be ensured. Within a set of data scraped from the web, certain populations might be over- or under-represented, content may be framed in a biased way, and some of it may even be false. If an algorithm is trained on biased data, its output is likely to be biased too, and the impact can be far-reaching.
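
To make the representativeness problem concrete, here is a minimal sketch in Python of auditing a training set for skew along a single attribute. The records and the "subject_gender" field are hypothetical; a real audit would cover many attributes, sources, and millions of documents.

    from collections import Counter

    def representation_report(records, attribute):
        """Return each attribute value's share of the training records."""
        counts = Counter(r[attribute] for r in records if attribute in r)
        total = sum(counts.values())
        return {value: count / total for value, count in counts.items()}

    # Toy records; a real corpus would be scraped documents with metadata.
    records = [
        {"text": "...", "subject_gender": "male"},
        {"text": "...", "subject_gender": "male"},
        {"text": "...", "subject_gender": "male"},
        {"text": "...", "subject_gender": "female"},
    ]

    for value, share in representation_report(records, "subject_gender").items():
        flag = "  <- possible over-representation" if share > 0.6 else ""
        print(f"{value}: {share:.0%}{flag}")

Even a check this simple surfaces the kind of skew that, left uncorrected, propagates directly into a trained model’s output.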

The risk of malicious manipulation of algorithms 

Another issue in AI ethics that could become more prominent as technology evolves is the malicious use of algorithms. This problem is more straightforward and, so far, less prevalent than bias, making it a less significant threat in a business context.

It’s always possible for bad actors to train an algorithm with malicious intent, and some experts warn that floods of biased data or misinformation could be deliberately released to manipulate otherwise ethical algorithms. But for most companies using AI, corrupt or unethical output is the result of unexpected algorithmic behavior – not an intentionally malevolent act. Algorithms often function as black boxes, and even experts and data scientists cannot entirely control them.

How can bias be corrected and prevented in AI?

How can these ethical issues be corrected and even prevented as AI technology is increasingly adopted across companies of all sizes and deployed in new ways across the business? With bias being such a considerable risk for companies using AI at present, we’ll focus on three main approaches to correcting for bias when training and using algorithms: 

  1. The first option involves retraining algorithms using a corrective data set. If an algorithm is producing false or biased information – for example, it only returns examples of male figures when prompted with the word “hero” – corrective action would involve retraining the algorithm with a more representative data set. In this example, we would give the algorithm a new data set that more prominently features female heroes from history, literature, pop culture, and more. Of course, this approach requires a human to identify skewed output in the first place and provide a corrected training data set – which still creates opportunities for bias. 
  2. Advances in AI are not only raising new ethical questions – they’re also creating new solutions to ensure ethical AI. A second approach to correcting bias is to use AI control processes and algorithms to counter-audit the original generator algorithms. These control processes check that the output of the original algorithms is correct, ethical, and in line with a company’s guidelines (see the sketch after this list). While still an area of ongoing research, this approach requires less human involvement than retraining. The ultimate goal would be to have these control processes fully integrated within AI models from the start to ensure ethical output. The technology isn’t there yet, but it’s an interesting space to watch in AI ethics. 
  3. Another area of ongoing development involves breaking down algorithmic models for greater transparency, permitting potential bias to be corrected along the way. Currently, most AI algorithms are difficult to control because they function like black boxes: their inner workings are not easily interpreted by humans, making it challenging to change a model’s structure and modify how it works from an ethical perspective. Researchers are currently working on developing milestones within the structure of an algorithm’s model. This would make it possible to observe and understand how the algorithm functions at each milestone and adjust the model or the weighting to influence the output. 
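
As a rough illustration of the second approach, the sketch below wraps a generator behind an automated control check. Both generate and audit_score are placeholder stand-ins invented for this example; in practice, the auditor would itself be a trained model aligned with a company’s guidelines rather than a keyword blocklist.

    def generate(prompt: str) -> str:
        """Stand-in for a real generative model."""
        return f"Response to: {prompt}"

    def audit_score(text: str) -> float:
        """Stand-in for a control algorithm scoring output risk in [0, 1]."""
        flagged_terms = {"term_a", "term_b"}  # illustrative blocklist only
        hits = sum(term in text.lower() for term in flagged_terms)
        return min(1.0, hits / len(flagged_terms))

    def guarded_generate(prompt: str, threshold: float = 0.5) -> str:
        """Release the generator's output only if it passes the audit."""
        output = generate(prompt)
        if audit_score(output) >= threshold:
            return "[output withheld: failed automated ethics audit]"
        return output

    print(guarded_generate("Describe a hero from history."))

The appeal of this design is that the audit step runs on every output automatically, whereas retraining (the first approach) only helps after a human has already spotted a problem.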

As AI evolves and advances, so do potential ethical risks 

At the moment, no machine or algorithm has unequivocally passed the Turing Test – the famous benchmark for determining whether a machine can demonstrate intelligence indistinguishable from a human’s – though some disputed attempts have occurred in recent years. In the next decade, we may well see an intelligent system pass this test, which would mean we could no longer tell whether we were communicating with a machine or a human.

GPT-3 may be a key advancement toward that point. One of the largest language models in use and widely considered a breakthrough in AI, it can generate fluent sentences, write article summaries, and even produce full creative stories from a prompt of just a few lines.

Certain ethical issues also surface with the advances in AI signaled by the arrival of GPT-3 and other NLP models built on the Transformer architecture. For example, these models’ output often follows the tone or style of the prompt, which can be problematic: even if the algorithm’s creator tries to remove bias and toxic language, the model is still capable of generating problematic content if fed harmful or malicious prompts.
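
Because output tends to mirror the prompt, one mitigation – complementary to the output audit sketched earlier – is to screen prompts before they ever reach the model. The sketch below assumes a placeholder generate call and a toy HARMFUL_MARKERS keyword list; a real system would use a trained classifier, not keywords.

    # Illustrative markers only; a production system would use a trained
    # classifier, not a keyword list.
    HARMFUL_MARKERS = {"insult", "slur"}

    def generate(prompt: str) -> str:
        """Stand-in for a large language model call."""
        return f"Story based on: {prompt}"

    def screen_prompt(prompt: str) -> bool:
        """Return True if the prompt appears safe to send to the model."""
        lowered = prompt.lower()
        return not any(marker in lowered for marker in HARMFUL_MARKERS)

    def safe_generate(prompt: str) -> str:
        if not screen_prompt(prompt):
            return "[prompt rejected before generation]"
        return generate(prompt)

    print(safe_generate("Write a short story about a kind teacher."))
    print(safe_generate("Write an insult about my coworker."))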

Even with today’s version of GPT-3, it can be difficult to distinguish AI output from human writing, and the ethical issues and complexities will only grow as algorithms become more sophisticated and their capabilities approach those of a human.

Transparency is the way forward for ethical AI 

Minimizing ethical risk in AI and reducing bias are rooted in transparency. We must make our algorithms more transparent, introduce model milestones that make it possible to understand and correct output at each stage, and study the range of biases that occur so that we can eradicate them. Of course, no single person or team can do this alone. The entire AI community must collaborate to identify and implement standardized frameworks and control systems that do not exist today. We can achieve this by open-sourcing models and training mechanisms, allowing a broader set of people to determine how our models, and their behaviors, might need to change to ensure an ethical future for AI.

What are the risks companies using AI should be aware of? Share your thoughts with us on Facebook, Twitter, and LinkedIn. We’d love to know!

Simon Tanné
Simon Tanné is the Head of Data Science at Echobox, the new standard in publishing automation. Simon holds a degree in mathematics and data science from UCL after completing his engineering degree at CentraleSupélec, a leading engineering school in France. He joined Echobox in 2014 as one of the earliest members of the team. Today, more than 1000 brands across 100 countries rely on Echobox’s automation to increase performance while saving costs on content distribution.