Nonprofit Launches Tools to Boost Responsible AI

The Responsible AI Institute’s new benchmarking tools help organizations keep pace with newly adopted global AI regulations and standards.

Shane Snider, Senior Writer, InformationWeek

December 8, 2023

3 Min Read
Graphic representation of responsible AI showing multiple aspects of use. (Image: Bakhtiar Zein via Alamy Stock)

At a Glance

  • Three benchmarks will guide businesses through the AI regulatory landscape.
  • Navigating the differing global AI policies is tricky.
  • Businesses should create a culture that encourages safe AI use and adoption at every level.

In the race to adopt red-hot AI tools, businesses face a growing global hodgepodge of guidelines for safe AI adoption -- including new mandates from President Biden’s executive order and recent rules crafted abroad.

The Responsible Artificial Intelligence Institute (RAI Institute) on Thursday launched three tools to aid those looking to implement AI in their organizations safely and responsibly. The tools, known as the Responsible AI Safety and Effectiveness (RAISE) benchmarks, are designed to help companies develop safe AI products and meet quickly changing regulatory requirements.

The new tools cover three AI safety benchmarks: corporate AI policy, AI hallucinations, and AI vendor alignment. The tools will be available to enterprises, startups, and individual RAI members.

In an interview with InformationWeek, RAI Institute founder and executive chairman Manoj Saxena explains the importance of developing an AI safety framework as GenAI continues its rapid emergence. As the former general manager of IBM Watson Solutions, Saxena is well-versed in the promise and the potential hazards of AI. While working at IBM, Saxena attended a conference where someone raised AI’s potential for bias, specifically in medicine, where it could mean the difference between life and death.


“That literally changed the direction of my life, and I made it my life’s mission to make sure we put guardrails and innovate responsibly with AI,” Saxena says.

Three Forces

Saxena says “three forces” are coming together as GenAI continues to take hold, and they will ultimately push a responsible AI ecosystem forward: regulation, success from responsible AI rollouts, and customer awareness of and demand for safe AI. The US, UK, and Canada have more pro-market regulation efforts, while other countries, like China, have more stringent rules, he says.

“So, our goal is to be like a bond rating agency for AI … helping companies really figure out what’s the best way to implement AI in a manner that adds to profitability and competitive advantage,” he said.

Companies are now trying to figure out how to deal with the dark side of AI -- including issues like data leakage, hallucinations, and bias that could undermine their ethical standards. Drawing on frameworks already in place, like the National Institute of Standards and Technology (NIST) AI Risk Management Framework and cybersecurity standards established by the International Organization for Standardization (ISO), RAI’s tools create a single point of reference for a convoluted global regulatory ecosystem.


“Companies know that they need to make and deploy AI, but they also need to do it in a way that’s compliant and risk-mitigated,” he says.

Lowering the AI Barrier to Entry

Competitive pressures are forcing many companies to jump headfirst into AI adoption. And for smaller companies with fewer resources, it can be daunting to create a safe AI environment that meets the ever-growing patchwork of global regulations. RAI’s members have access to online tools that guide them through the process.

Those companies, Saxena says, need to take three important steps: making sure the product is not hallucinating, ensuring a sound policy benchmark is in place, and, finally, verifying that any AI product they implement or buy aligns with existing policies and regulations. “We want to make sure that this is safe, that it’s not hallucinating, and that ChatGPT, or Bard or others are not making up stuff that could create damage to me and my business,” he said.

Var Shankar, executive director of the RAI Institute, said a company-wide culture of responsible AI use is essential. “You need a citizen developer program -- you need everybody to get involved in AI and development. On the other hand, you need hygiene around having the right documentation and processes. To do both of those things well, you need to add some level of automated documentation, which is what we’re trying to get at with these benchmarks.”


Founded in 2016, the RAI Institute counts Amazon Web Services, Boston Consulting Group, Shell, Chevron, and many other companies among its members, which collaborate to promote responsible AI use.

About the Author(s)

Shane Snider

Senior Writer, InformationWeek

Shane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal, Raleigh News and Observer and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.
