RAI Institute Founder on Steering AI Systems to Maturity

The nonprofit Responsible AI Institute has released its “Maturity Model” to help businesses grade their artificial intelligence efforts amid increasing regulatory demands.

Shane Snider, Senior Writer, InformationWeek

February 21, 2024

5 Min Read

It’s no secret that companies are in a race to adopt and implement new AI tools as generative AI fever sweeps the technology industries. But there’s danger in adopting powerful generative AI tools without a sound strategy or defined business use cases, says Manoj Saxena, founder and chairman of Responsible AI Institute (RAI Institute).

The institute’s new model outlines five stages of maturity: aware, active, operational, systemic, and transformative. The group hopes that the maturity model, used in conjunction with RAI’s benchmark tools, will help organizations adopt better AI practices and provide a grading process to track AI maturity.

InformationWeek interviewed Saxena, who is the former general manager of IBM’s Watson Solutions, to find out more about RAI’s efforts.

(Editor’s note: The following quotes have been edited for clarity.)

With RAI’s “maturity model,” what is your take on where companies are falling into those stages right now?

This is not unlike when the internet came about. We saw the first demo of a browser in the 90s. But the first business and e-commerce sites were not built until later. It took this two- or three-year period, what I call the "dog watching TV stage," where they’re looking at the new technology and saying, "What the heck is this?" That’s exactly what’s happening right now with generative AI.


It seems like 2024 is the year that money is flowing, and things are actually happening with businesses and generative AI strategies. How is this new landscape evolving?

This is one of those rare times where the larger incumbents are putting billions of dollars into AI and they’ve started to give out free tools. But people don’t really know how to go about implementing. These are dynamic systems that are evolving and creating information all the time. And how to align that to regulations that are coming up is something very new. We didn’t have this with the internet. You understood the technology -- you put the models in, and you just went, "Go!" So, two big barriers that companies are beginning to address with AI are: One, do we have the skills and capacity to really start harnessing this into systems that can create business value? And two, can we do that without AI damaging my brand? These are the headwinds. My view is that it’s going to evolve, but it’s going to take building new capacities in the enterprise for responsible AI that they don’t have yet today.

Are the guardrails for AI starting to gel, so to speak? Or are we still behind the curve?


This is like having super cars without brakes and steering. Generative AI is like having a super car that’s all engine with no brakes and no steering. So, you could put up the guardrails, but if you don’t know how to brake it and steer it, it’s going to bang into the guardrails. Italy’s initial ban of ChatGPT is a great example of this. To me, it’s a problem of both building cars that give you safety and alignment on how to do this, and a problem of having the right guardrails in place. So, we need to make progress on both.

Is there a danger, now that the Biden Administration’s executive order on AI is in place, that we will fall short on enacting more solid regulation? We still don’t have federal-level data privacy regulations. Will we ever get comprehensive AI legislation?

We have launched these technologies that can give you a lot of power. There are [data privacy laws] in some states and there are regulations in the UK and EU. And then I have my internal AI policy -- how does my business make sure it is compliant? In the context of AI, we have global policies, national policies, industry policies, company-level policies, use case policies, and user policies -- all these together are what make that car steerable. And that framework doesn’t exist today because most of the big guys are busy solving the problem of artificial general intelligence [AGI] and not solving the problem of augmented business intelligence.


Are we having a problem where we are perhaps focusing on the "sexiest" thing -- the aspects of generative AI that get all the hype? AGI is the big, scary thing that grabs all the headlines. Are we missing the boat on the other side of AI?

We are definitely confusing the sizzle for the steak. We really are going after all this exciting stuff to talk about. But it’s the boring processes within enterprises that can make billions of dollars if you can apply the right type of AI and steering and performance on it. If you look at the car industry, it took a long time before traffic lights and guardrails and bumpers and safety bags showed up. It was all about big engines and assembly lines. And unfortunately, it took a lot of damage and deaths before those safety measures got put into place. Here, with AI, we don’t have the 40 years that it took the car industry to get it right.

How do organizations get there? What should be the goals for companies as they are trying to build their AI systems?

There needs to be strong leadership support and funding for experimenting and playing in this area. The second thing is creating a process by which you manage end-to-end responsible AI assurance -- and that doesn’t mean you put bumpers on the car at the end, before it goes for the paint job. It means you design the bumpers and the safety bags and the crumple zones when you’re designing the AI. AI is too strategic to be left to the technologists, because it needs to be designed as a business capability, not just an IT capability.

About the Author(s)

Shane Snider

Senior Writer, InformationWeek

Shane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal, Raleigh News and Observer and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.
