How to Succeed with AI Compliance in Manufacturing: Trustworthy AI for the Win?

Insight, automation and security with trustworthy AI.

January 19, 2023

As discussed in Part 1 of this series, manufacturers can gain breakthrough competitive advantage from AI through anomaly detection, predictive maintenance, and automated asset management. But the power of AI extends beyond those use cases, supporting an entirely new dimension of automation and insight, shares Lori Witzel, director of research for analytics and data management at TIBCO.

Artificial Intelligence (AI) is in the news, as are regulations to manage AI risks. AI regulatory compliance will impact manufacturers sooner rather than later.

Through AI and related technologies, manufacturers can gain a complete, data-driven view of all operations: from suppliers and supply chains, through equipment, processes, and manufacturing practices, to final product testing and customer satisfaction. It’s the promise of Industry 4.0 made real, and it is widening the gap between leaders and laggards.

The benefits of AI now come with risks, however. Increased adoption of AI across many verticals, including manufacturing, is driving increased regulation of the technology. US manufacturers need to act now to prepare for the changing regulatory landscape.

Trustworthy AI Is a Best Practice

Building trust and transparency into AI is a core best practice. It’s also imperative to ensure compliance with current and future regulations.

Trustworthy AI is auditable, transparent and (at the risk of greatly oversimplifying a complex topic) interpretable. Interpretable AI comprises algorithms that provide a clear explanation of their decision-making processes. This interpretability ensures that humans can assess an AI-infused process, so they can apply their own insight and judgment to the logic behind an AI-made decision.

For example, an experienced operations manager may need to understand why certain products coming through production were identified as flawed and not others. If AI determines that a product in an image is defective, this presents a potential interpretability use case—the need for a human to be able to validate the decision. The AI becomes interpretable when the defect location is highlighted visually, so a person can see and verify which of the many visible features in the image represent the defect. It’s not interpretable if the AI just indicates that the image contains a defect but doesn’t highlight the actual defect within the image.
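The defect-highlighting idea above can be sketched in a few lines. This is a hypothetical illustration, not a production method: it assumes an attribution map (e.g. from a saliency technique such as Grad-CAM, not shown here) scoring how much each pixel contributed to the "defective" prediction, and simply computes the bounding box an operator should inspect:

```python
def defect_bounding_box(attributions, threshold=0.5):
    """Return (row_min, row_max, col_min, col_max) enclosing every pixel whose
    attribution score meets `threshold` -- the region a human reviewer should
    inspect to confirm the flagged defect."""
    hot = [(r, c) for r, row in enumerate(attributions)
                  for c, score in enumerate(row) if score >= threshold]
    if not hot:
        return None  # no localized evidence: the flag is not interpretable
    rows = [r for r, _ in hot]
    cols = [c for _, c in hot]
    return (min(rows), max(rows), min(cols), max(cols))

# Hypothetical 6x6 attribution map produced by a saliency method:
attr = [[0.0] * 6 for _ in range(6)]
attr[2][4] = attr[3][4] = attr[3][5] = 0.9  # the model's evidence concentrates here
print(defect_bounding_box(attr))  # → (2, 3, 4, 5)
```

Returning `None` when no pixels stand out mirrors the interpretability failure described above: a bare "defective" label with no localized evidence gives the reviewer nothing to verify.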

Another example of risk specific to manufacturing, noted by McKinsey, is the potential for accidents and injuries due to an AI-mediated interface between people and machines. If AI-infused systems fail to keep a human in the loop, failing an interpretability best practice, equipment operators may not be able to provide a needed override, increasing physical risk in applications using self-driving vehicles. Other risks for manufacturers, such as erroneous AI downgrading a supplier’s product, are also consequential.

Interpretable, transparent AI will enable data science teams to respond in ways that even a less technical workforce can understand. This is especially beneficial for legacy manufacturing operations, which often find themselves under pressure from digital-first competitors.

Trustworthy AI Is Based on Trustable Data

An example of the value of trustable data in manufacturing is Arkema, a French €8B specialty chemicals and advanced materials company that makes technical polymers, additives, resins, and adhesives. Arkema’s data-fabric-like approach to data assets has transformed the flow of data across customer, vendor, and material domains throughout the business. Jean-Marc Viallatte, group vice president of Global Supply Chain at Arkema, led an enterprise-wide initiative that layered a common data framework onto an ever-expanding list of products, ensuring every system deployed pulls from the trusted master data hub.

The Arkema team now shares standardized, trustable data widely across the organization, enhancing regulatory compliance, facilitating growth through smoother data integration during M&A activity, and supporting customer-centric service. Arkema is an example that US manufacturers can learn from as they seek advantage by using AI for supply chain optimization, anomaly detection, root cause analysis, key factor identification, yield optimization via pattern recognition at scale, and predictive and prescriptive maintenance via advanced equipment monitoring.

How to Prepare for a Changing AI Regulatory Landscape

As noted by McKinsey, manufacturers that use AI significantly outperform their lagging peers. The examples it cites include yield-loss reductions of 20 to 40 percent and improved on-time delivery from an AI scheduling agent. But without preparing for AI transparency and auditability, those advantages could be lost to regulatory risk. Although AI regulation in the US is still, in many cases, handled state by state, and remains in draft stages around the globe, preparing to execute on compliance could include:

1. A data fabric architecture with robust master data management (MDM) for holistic management of the data pipelines that fuel manufacturing automation: Regulatory compliance means understanding not just the algorithms in use but the data that has been used to train AI and machine learning (ML) models. A data fabric provides the framework to achieve transparency as well as better outcomes.

    • AI training data discovery and management: Your data science teams may use not only data from the organization, including IoT data, but also publicly available datasets. Whether internal or external in origin, the lineage of the data, and the observability and transparency of its use, are key components of regulatory compliance.
    • Personally Identifiable Information (PII) discovery and management: To ensure AI regulatory compliance, the organization must understand whether there is PII in any AI system used by the organization. Robust MDM can help identify what PII data are in which systems and how that PII is being masked or otherwise protected.
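To make the PII discovery point concrete, here is a minimal sketch of the kind of scan an MDM process might run over records flowing into AI training pipelines. The field names and regex patterns are illustrative assumptions; a production MDM tool would use far richer detectors and validated classifiers:

```python
import re

# Hypothetical detectors; real PII discovery covers many more categories
# (names, addresses, national IDs) with validated, locale-aware rules.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scan_record(record):
    """Map each field name to the PII types detected in its value,
    so downstream masking or protection can be verified."""
    findings = {}
    for field, value in record.items():
        hits = [name for name, pat in PII_PATTERNS.items() if pat.search(str(value))]
        if hits:
            findings[field] = hits
    return findings

record = {"supplier": "Acme Polymers",          # hypothetical record
          "contact": "j.doe@example.com",
          "phone": "555-867-5309"}
print(scan_record(record))  # → {'contact': ['email'], 'phone': ['us_phone']}
```

Knowing *which* fields in *which* systems carry PII is the prerequisite for the masking and protection the text describes.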

2. Data virtualization to help scale and reduce friction in AI training data preparation: The enormous volume of training data needed by ML and AI systems requires agile, scalable data preparation processes. Data virtualization can reduce friction in data preparation by reducing the impact of data silos on scalability and access.

3. Baseline and ongoing algorithm audits: Identifying and documenting algorithms in use across manufacturing automation and supply chain processes is an important action toward the transparency needed for regulatory compliance.

    • Algorithm transparency and explainability: An integrated platform approach to data analytics and data science will make identifying and documenting algorithms in use easier. It will also aid in ensuring these algorithms are transparent and explainable—key facets of AI compliance.
    • Trading partner and vendor algorithm documentation: Manufacturers should also ask trading partners and technology vendors for documentation of any algorithms that may be used by the manufacturer’s own systems and processes. Boston Consulting Group, among others, recommends implementing a responsible AI framework that includes vendor management since a manufacturer may be held liable for non-compliant AI provided by a trading partner or vendor.
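A baseline algorithm audit ultimately produces an inventory. The sketch below, a simplified assumption rather than any standard framework, shows one shape such an inventory record could take, flagging the documentation gaps (training-data lineage, explainability, vendor purpose) the bullets above call out:

```python
from dataclasses import dataclass, field

# Minimal audit record; real governance frameworks track far more metadata
# (risk tier, review dates, monitoring plans, approval sign-offs).
@dataclass
class AlgorithmRecord:
    name: str
    owner: str                 # accountable team or person
    purpose: str = ""          # business process the algorithm supports
    training_data: list = field(default_factory=list)  # lineage of datasets used
    vendor: str = ""           # blank if built in-house
    explainability: str = ""   # how decisions can be interpreted and audited

    def compliance_gaps(self):
        """List missing documentation a baseline audit should surface."""
        gaps = []
        if not self.training_data:
            gaps.append("no training-data lineage recorded")
        if not self.explainability:
            gaps.append("no explainability method documented")
        if self.vendor and not self.purpose:
            gaps.append("vendor algorithm lacks a documented purpose")
        return gaps

rec = AlgorithmRecord(name="visual-defect-detector", owner="Quality Engineering",
                      purpose="flag defective units on the line")
print(rec.compliance_gaps())
# → ['no training-data lineage recorded', 'no explainability method documented']
```

Running such a check across every in-house and vendor-supplied algorithm gives the documented, auditable baseline the regulatory preparation above depends on.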

Just as the benefits of AI for manufacturers transcend silos and extend across the organization and its trading partners, so too should preparation for the regulation of these technologies. AI can be pivotal in enabling manufacturers to leap ahead of the competition. As you prepare to make that leap, ensure that you have the governed, transparent AI processes in place, along with diverse stakeholder input, to adapt to a changing regulatory landscape.

What AI compliance strategies are you implementing to adapt to the evolving regulatory landscape? Share with us on Facebook, Twitter, and LinkedIn.

Lori Witzel

Director of Research for Analytics and Data Management, TIBCO

Lori Witzel is Director of Research for Data Management and Analytics at TIBCO, where she develops and shares perspectives on improving business outcomes through digital transformation, human-centered artificial intelligence, and data literacy. Providing guidance for business people on topical issues such as AI regulation, trust and transparency, and sustainability, she helps customers get more value from data while managing risk.