AI Governance: Risks, Regulations and Trends for Enterprises

The right AI governance program can increase the value you get from AI, preserve your reputation with customers and partners, and prepare you for the major new regulations on the horizon, according to Forrester Research analysts at the firm's Data Strategy & Insights event.

Jessica Davis, Senior Editor

December 13, 2022

Enterprises are now firmly on the path to operationalizing AI, with 76% of organizations somewhere along the AI adoption curve, according to Forrester Research. But operationalizing AI is just one step in the process.

Any number of important tasks go into getting quality results from artificial intelligence in the enterprise, but they all come down to one thing: data. How do you handle your data? How are you protecting your customers’ privacy? What new rules and regulations on the horizon will your data and AI practice need to comply with? And how can you ensure that your organization is getting the most value from its data and AI models?

Forrester Research addressed these questions at its recent Data Strategy & Insights event in a session titled “A Hitchhiker’s Guide to AI Governance.”

“There’s a little bit of machine learning out there that’s a bit rogue from a data perspective,” said Michele Goetz, a VP and principal analyst at the firm, who presented alongside Brandon Purcell, also a VP and principal analyst. Goetz and Purcell explained why enterprises need to pay attention to data governance, surveyed the most pressing new AI and data regulations on the horizon and how approaches differ across geographies, walked through the major risk areas for artificial intelligence in the enterprise, and offered a framework for how organizations can tackle the task.

Why AI Governance Matters

Goetz pointed out that what you do from an AI governance perspective ensures that your customers, your partners, and the marketplace trust you.

“If you’ve been having fun talking to your friends on social media and then the next thing you know you are being recommended or get an email from someone because they were tracking your conversation -- I don’t know about you, but I don’t like that. It doesn’t instill trust,” she said.

You don’t want to employ practices that lead to that kind of experience for customers. The same goes for your partners’ experiences with you and the overall market’s experience, too, she said. You want them to be able to trust the insights coming from your machine learning and AI capabilities without experiencing a negative event.

AI Risk and Security

Purcell noted that AIRS (AI Risk and Security), an informal group of practitioners, has split AI risks into four categories. The first is data-related risk.

“As we all know, the AI models are only as good as the data used to train them,” he said. “One of the limitations in data is the fact that you probably don’t have data on every single instance the model is going to see, so there are significant learning limitations in your models. Additionally, we’ve all heard ‘garbage in, garbage out.’ I have yet to talk to an enterprise client that doesn’t have some sort of data hygiene issue.”
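Purcell’s “garbage in, garbage out” point lends itself to a concrete first check. Below is a minimal, hypothetical data-hygiene audit in Python -- not something Forrester prescribed -- that profiles a training table for missing values, low-variety columns, and duplicate rows before the data ever reaches a model. The column names and sample values are placeholders.

```python
# Hypothetical data-hygiene audit: profile a training table for common
# defects before it reaches a model. Columns and values are placeholders.
import pandas as pd

def hygiene_report(df: pd.DataFrame) -> pd.DataFrame:
    """Summarize per-column missingness and cardinality, plus duplicate rows."""
    return pd.DataFrame({
        "missing_pct": df.isna().mean().round(3),   # share of missing values
        "n_unique": df.nunique(),                   # distinct values per column
        "duplicate_rows": df.duplicated().sum(),    # table-wide count, repeated per column
    })

df = pd.DataFrame({
    "age": [34, None, 29, 29, 120],  # one missing value, one implausible outlier
    "signup_channel": ["web", "web", "app", "app", "app"],
})
print(hygiene_report(df))
```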

The next risk is bad actors. Some look to game AI systems, while others could mount data poisoning attacks. Still others use techniques to infer private information about the data a model was trained on; a naive membership inference probe is sketched below. Finally, some bad actors will try to steal your models to figure out how they work -- for instance, stealing your fraud detection model so they can learn to beat it.
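To make the privacy risk concrete, here is a small illustrative sketch -- not taken from the session -- of a naive membership inference probe: the attacker guesses that records a model predicts with unusually high confidence were part of its training set. The dataset, model, and attacker threshold below are all hypothetical.

```python
# Naive membership-inference sketch: an overfit model is more confident
# on rows it has memorized, and that confidence gap leaks membership.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fully grown trees deliberately overfit so the gap is visible.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

train_conf = model.predict_proba(X_train).max(axis=1)  # confidence on members
test_conf = model.predict_proba(X_test).max(axis=1)    # confidence on non-members

print(f"mean confidence on training rows: {train_conf.mean():.3f}")
print(f"mean confidence on unseen rows:   {test_conf.mean():.3f}")

# A hypothetical attacker calls any row above this threshold a "member."
threshold = 0.9
false_members = (test_conf >= threshold).mean()
print(f"unseen rows the attacker would wrongly flag as members: {false_members:.2%}")
```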

New Rules and Regulations on the Horizon

One of the biggest new regulations coming, likely in 2024, is the AI Act in Europe, which creates a hierarchy that rates AI use cases as unacceptable risk, high risk, limited risk, or minimal risk. Unacceptable-risk AI will be prohibited outright; it includes use cases such as mass surveillance, manipulation of behavior that causes harm, and social scoring. High-risk activities will require an assessment and include access to employment, education, and public services, safety components of vehicles, and law enforcement. Limited-risk AI activities are required to be transparent; they include impersonation, chatbots, emotion recognition, and deep fakes. Anything else falls under minimal risk, which carries no obligations for the enterprise.
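For teams that want to start mapping their own use cases against the proposal, the hierarchy reduces to a simple lookup table. The sketch below encodes only the tiers and examples described above; the tier names and obligation strings are paraphrases, and none of this is legal guidance.

```python
# The EU AI Act's proposed risk tiers as a lookup table, using only the
# examples named in the article. Illustrative paraphrase, not legal advice.
AI_ACT_TIERS = {
    "unacceptable": {
        "obligation": "prohibited",
        "examples": ["mass surveillance",
                     "harmful manipulation of behavior",
                     "social scoring"],
    },
    "high": {
        "obligation": "assessment required",
        "examples": ["access to employment, education, and public services",
                     "safety components of vehicles",
                     "law enforcement"],
    },
    "limited": {
        "obligation": "transparency required",
        "examples": ["impersonation", "chatbots",
                     "emotion recognition", "deep fakes"],
    },
    "minimal": {
        "obligation": "no obligations",
        "examples": ["everything else"],
    },
}

def obligation_for(tier: str) -> str:
    """Look up the obligation attached to a risk tier."""
    return AI_ACT_TIERS[tier]["obligation"]

print(obligation_for("limited"))  # -> transparency required
```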

In the United States, the rules are quite a bit different. Purcell said the National Institute of Standards and Technology has released a proposed framework for governing AI, but compliance is not mandatory. The draft focuses on helping companies ensure that AI is created in a responsible way and, he believes, on cultivating a culture of risk management.

In addition, the White House released an AI Bill of Rights this year. It is not binding, but it indicates the direction the Biden administration is likely to take on AI regulation. Key components are the importance of privacy and of having human beings, rather than AI and automation, make critical decisions.

AI Governance Across the Enterprise

A solid AI governance practice will need to span the organization to navigate this new era of maturing regulations and growing customer sophistication about privacy. The work will need to include the AI leader, business leader, data engineer, legal/compliance specialist, data scientist, and solution engineer, according to Forrester. Each member of this group brings a different level of excitement or concern to the AI practice.

Getting Started

“Start with establishing a framework for explainability,” said Purcell. “Explainability is critical. Then connect your AI architecture end-to-end so you don’t have these rogue AI installations. Deploy observability capabilities, launch communications and AI literacy to bolster that culture pillar, and more than anything be prepared to adapt.”
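As one concrete way to begin the explainability work Purcell describes, the sketch below uses permutation importance: shuffle each input feature on held-out data and measure how much accuracy drops, so bigger drops flag the features the model leans on hardest. The dataset and model are stand-ins, and scikit-learn's permutation_importance is just one option (SHAP and LIME are common alternatives); none of this is a Forrester recommendation.

```python
# Permutation importance as a starting point for model explainability.
# Dataset and model are stand-ins for an enterprise's own pipeline.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and record the accuracy drop.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# The five features the model depends on most.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
for name, drop in ranked[:5]:
    print(f"{name:30s} accuracy drop: {drop:.3f}")
```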

About the Author

Jessica Davis

Senior Editor

Jessica Davis is a Senior Editor at InformationWeek. She covers enterprise IT leadership, careers, artificial intelligence, data and analytics, and enterprise software. She has spent a career covering the intersection of business and technology. Follow her on Twitter: @jessicadavis.
