Whether it’s easing the barriers to launching a business or telling jokes, artificial intelligence has proven it can make an impact in a wide variety of spaces, yet many companies aren’t taking full advantage of the benefits it has to offer. A major factor standing between AI and impact is a lack of trust.

Some customers don’t trust AI because of headlines decrying bias and discrimination, while internal stakeholders cite a variety of trust-related issues of their own. According to 2021 Forrester survey data, data and analytics decision-makers whose firms are adopting AI report that these issues include privacy concerns with using AI to mine customer insights, an inability to maintain oversight and governance of machine decisions and actions, and concern about unintended, potentially negative, and unethical outcomes.

With most emerging technologies, there is a period of confidence building, but AI presents its own trust-related challenges because:

  • AI is self-programming. Most AI today “codes itself” through machine learning, identifying patterns in training data to construct a model. Letting machines take control of coding raises trust issues and concerns about errors. Models built this way can be opaque and difficult for humans to understand, and that opacity may not matter much if you are teaching digital humanoids how to play soccer, but it raises alarms when AI is being used to make healthcare decisions or drive vehicles.
  • AI is inherently probabilistic. As we enter fall, you are certain to find pumpkin-spiced drinks popping up at your favorite cafés, but little else in this world is certain. Machine learning reflects the uncertainty inherent in the world because it learns from real-world data, so business leaders hoping to employ machine learning and AI will need to get comfortable with the fact that AI predictions are not deterministic. They will also need to translate AI’s probabilities into a specific business context: a prediction with a 95% confidence score may be adequate if you’re predicting customer churn, but it would be woefully insufficient if you’re trying to diagnose a highly contagious illness (the first sketch after this list makes this concrete).
  • AI is a moral mirror. Over the course of history, humans have not treated each other fairly, and the resulting inequity is embedded in the data we’re using to train today’s AI systems. As a result, AI learns to replicate bias, discrimination, and injustice. Well-known snafus like Microsoft’s Tay chatbot becoming a “Hitler-loving sex robot” and Amazon’s inability to keep its hiring AI from discriminating against female candidates may make it seem like this issue is a unique burden for tech companies. But the AI, Algorithmic, and Automation Incidents and Controversies (AIAAIC) Repository has collected over 850 incidents since 2012 from sectors as diverse as automotive, consumer goods, the public sector, and even religion. As these incidents continue to make headlines, stakeholder trust in AI will remain shaky (the second sketch after this list shows one simple check teams can run themselves).
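
To make the first two points concrete, here is a minimal Python sketch using scikit-learn and entirely synthetic data. The use cases and confidence cutoffs are hypothetical, not recommendations: the point is that the model learns its own decision rules from data, returns a probability rather than a verdict, and the acceptable threshold is a business decision, not a property of the model.

```python
# A minimal sketch (synthetic data, hypothetical thresholds) of how a
# machine-learned model "codes itself" and returns probabilities, not answers.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# No one hand-writes the decision rules: the model infers them from examples.
X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = LogisticRegression().fit(X_train, y_train)

# The same probability can be "good enough" or "woefully insufficient"
# depending on the stakes. These cutoffs are illustrative only.
THRESHOLDS = {
    "customer_churn": 0.80,       # a wrong call costs a retention email
    "contagious_disease": 0.999,  # a wrong call costs far more
}

proba = model.predict_proba(X_test[:1])[0, 1]  # P(positive class) for one case
for use_case, cutoff in THRESHOLDS.items():
    decision = "act" if proba >= cutoff else "escalate to a human"
    print(f"{use_case}: p={proba:.3f} -> {decision} (cutoff {cutoff})")
```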
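And teams don’t have to wait for headlines to learn whether a model has absorbed historical bias. The sketch below applies one widely used check, the disparate impact ratio and its “four-fifths rule,” to hypothetical outcome counts; real audits use real decision logs and richer fairness metrics, but the arithmetic is this simple.

```python
# A minimal sketch (hypothetical counts) of one common fairness check: the
# disparate impact ratio, i.e., the rate of favorable model outcomes for a
# protected group divided by the rate for a reference group. The "four-fifths
# rule" treats a ratio below 0.8 as a red flag worth investigating.
outcomes = {  # hypothetical counts of positive model decisions per group
    "protected_group": {"favorable": 90, "total": 200},
    "reference_group": {"favorable": 130, "total": 200},
}

rate_p = outcomes["protected_group"]["favorable"] / outcomes["protected_group"]["total"]
rate_r = outcomes["reference_group"]["favorable"] / outcomes["reference_group"]["total"]
ratio = rate_p / rate_r

print(f"selection rates: protected={rate_p:.2f}, reference={rate_r:.2f}, ratio={ratio:.2f}")
if ratio < 0.8:
    print("Potential disparate impact: audit the training data and the model.")
```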

So how can technology leaders overcome AI’s trust-related challenges? In our report, Build Stakeholder Trust In Artificial Intelligence, we offer seven levers of trusted AI that technology leaders can pull to drive confidence in AI decision-making. Together, these levers can be applied to internal stakeholder groups to strengthen the AI business case and to external stakeholders such as customers and regulators to mitigate risk and build confidence.

Want to learn more about AI and data? Be sure to check out the agenda for our upcoming Data Strategy and Insights event, December 6-7 in Austin and online. I’ll be presenting a keynote entitled “The Seven Habits Of Highly Trusted Artificial Intelligence,” and we’ll have a variety of sessions and meetings about trust in AI. Hope to see you there.