Wed | Feb 14, 2024 | 11:23 AM PST

In August 2023, my exploration into AI Governance was published, featuring a notable subsection titled "Governance Circling in AI's LLM Waters." By a stroke of serendipity, within only five months, that "circling" has evolved into a tidal wave.

The initial piece emphasized a holistic strategy for incorporating security and governance frameworks, advocating for a mixed approach that encompassed Secure Software Development Life Cycle (SSDLC) adaptations, ethical and privacy considerations, and adherence to technical benchmarks such as the NIST AI RMF 1.0 and OWASP standards.

In the five months since, we have witnessed explosive growth in this area, with global legislators scrambling like the Flash (DC Comics' speedster) to race against the seemingly insurmountable pace of AI advancements, all in pursuit of implementing "some type of regulatory guardrail." This urgency only emphasizes the vital need for an "AI Governance Life Jacket" to navigate these tumultuous waters.

A small step back in time...

Let me bring you up to speed on the pace at which things have been moving. Let's start with UX (User Experience).

According to Sujan Sarkar in "AI Industry Analysis: 50 Most Visited AI Tools and Their 24B+ Traffic Behavior," between September 2022 and August 2023, AI tools received a total of 24 billion visits. Among the 3,000 AI tools analyzed, ChatGPT accounted for 60% of the overall traffic. Furthermore, "ChatGPT, Character AI, and Google Bard witnessed net increases in traffic of 1.8 billion, 463.4 million, and 69 million visits, respectively." Over this 12-month period, the AI industry averaged 2 billion visits per month.

The study also revealed that the U.S. accounted for 5.5 billion visits, making up 22.62% of the overall traffic, while European countries combined for 3.9 billion visits.
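For perspective, here is a minimal Python sketch of the back-of-the-envelope arithmetic behind those figures. The per-category totals are the ones Sarkar reports; the overall figure of roughly 24.3 billion is my own inference from the stated 22.62% U.S. share, consistent with the article's "24B+."

```python
# Back-of-the-envelope check of the traffic figures cited above.
# Assumption: the 24.3B total is inferred from the reported 22.62%
# U.S. share (5.5B / 0.2262); Sarkar's article states "24B+" overall.

TOTAL_VISITS = 24.3e9    # total visits across 3,000 AI tools, Sep 2022 - Aug 2023
MONTHS = 12

us_visits = 5.5e9        # reported U.S. visits
eu_visits = 3.9e9        # reported combined European visits
chatgpt_share = 0.60     # ChatGPT's reported share of overall traffic

print(f"Average monthly visits: {TOTAL_VISITS / MONTHS / 1e9:.1f}B")  # ~2.0B
print(f"U.S. share of traffic:  {us_visits / TOTAL_VISITS:.2%}")      # ~22.63%
print(f"Europe share:           {eu_visits / TOTAL_VISITS:.2%}")      # ~16.05%
print(f"ChatGPT visits (est.):  {chatgpt_share * TOTAL_VISITS / 1e9:.1f}B")
```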

This data further underscores my earlier point about the lightning speed at which global legislators are moving to enact regulatory guardrails. As technology practitioners and corporate stakeholders, we must commit to governance oversight, even though advancements in AI and ML are unlikely to decelerate. Why? The motivation partly arises from increasing worries about AI's potential societal impacts, for example on political elections, medical devices, and automobile production. The push for swift legislative action is also driven by concerns over ethical usage, algorithmic bias, privacy issues, and the governance of autonomous decision-making.

Simultaneously, while the usage of AI tools skyrocketed through 2023, a swift surge of regulatory initiatives commenced (e.g., the U.S. Executive Order on Safe, Secure, and Trustworthy AI and the EU's provisional agreement on the AI Act). With the U.S. and Europe at the forefront, global legislators have expedited the development of regulatory measures to tackle issues such as ethical use, algorithmic bias, privacy, and control of autonomous decision-making. This period marks significant steps toward reinforced governance oversight, highlighting the necessity of your AI Governance Life Jacket.

Importance of the life jacket

AI advancements have thus far emerged as the leading meta-force of this century. Still, the rapid push for new laws to regulate AI and ML technologies raises concerns. After all, the fundamental imperfection of humans underscores the necessity for security measures, right?!

Historically, hastily implemented regulations have seldom served the long-term interests of society, often acting as mere band-aids. There is also skepticism about whether regulators genuinely understand the complexities of AI and ML design and development.

The involvement of various stakeholders in the regulatory process, including technology educators, social scientists, cognitive psychologists, and cybersecurity leaders with their valuable insights, is essential. Yet there is a looming concern about their ability to effectively comprehend AI and ML and to apply new rules, laws, and guidelines at such a swift pace.

It is essential to remember that this journey towards establishing AI governance, ethical standards, and safety regulations is still in its infancy. Having an "AI Governance Life Jacket" is not just wise but a strategic necessity. Leveraging expertise in security and compliance can mitigate risks such as fines, penalties, or loss of contracts due to non-compliance. Being well-equipped for the impending challenges is crucial for navigating the complexities of AI/ML deployment and ensuring safety, fairness, and accountability.

AI literacy stitched into the life jacket

The need for a foundational comprehension of AI and ML is more pronounced than ever.

AI literacy is stitched into the fabric of the AI Governance Life Jacket, woven with three core elements: Trust, Security, and Usage. These elements are intricately linked to the goal of optimizing operations, improving workflows, and enabling more informed decision-making.

Would you agree that the field of AI poses a formidable learning curve for many? If so, then security professionals are also challenged to identify essential protection areas, encompassing privacy and data issues as well as the essence of ethical practices. It also raises the question of how to tackle the widespread lack of understanding of AI and ML across the organization.

Understanding AI involves differentiating between Artificial Intelligence, Machine Learning (ML) models, Artificial Super Intelligence (ASI), Artificial General Intelligence (AGI), and the risks associated with each. Without a clear grasp of the fundamental concepts, navigating the complexities of writing or evaluating contracts, policies, and compliance requirements can be a daunting task. A good starting point is familiarizing oneself with the specific terminology involved.

To illustrate my point: in her article "Teaching Artificial Intelligence Literacy: AI is for Everyone," Andrea Azzo interviewed prominent leaders from elite institutions. The interview with Ken Holstein, an assistant professor at Carnegie Mellon University's (CMU) Human-Computer Interaction Institute and director of the Co-Augmentation, Learning, and AI (CoALA) Lab, was particularly enlightening.

Holstein's insights into adult understanding of AI concepts, along with Azzo's article, highlight researchers' collaboration with adults to enhance AI comprehension. CMU's CoALA Lab has conducted workplace studies on how people interact with AI-augmented tools. As the article recounts: "...His group's research has found there is often little to no training aimed at helping workers learn how to use AI tools effectively and responsibly. Also, he said AI-based tools are often not designed to solve the right problems."

Furthermore, "These misunderstandings bring dangers, such as the risk of overreliance on AI recommendations," Holstein continued. The misunderstanding referenced is based on poor AI literacy, which may indeed be the case in certain situations. Nonetheless, the main factor is the problem that the design aims to address. AI machine learning models are as capable of detecting bias in data as they are at being utilized for discriminatory practices.

Technology practitioners and other stakeholders should consider developing an AI Literacy initiative, focusing on educating and empowering all areas of the business about the fundamentals and implications of AI technologies.

Closing remarks

The "AI Governance Life Jacket" metaphor was to emphasize the necessity for robust governance and enhanced AI literacy amidst the rapid evolution of AI technologies. It advocates for trust, security, and responsible usage as keys to managing AI's intricacies. By adopting strategic AI governance practices and fostering AI literacy, organizations can uphold ethical standards, compliance, and operational integrity. This strategy equips organizations to navigate AI's challenges effectively and leverage its benefits responsibly.

** Special thanks to Andrew McAdam for his contribution to this article.
