Is AI Its Own Biggest Risk? Here’s What Enterprises Need to Know

Learn how enterprises can responsibly implement AI, leveraging it as a proactive defense against evolving threats.

January 19, 2024

AI Trends and Threats

Ashvin Kamaraju, global vice president at Thales, delves into the growing concerns surrounding the risks to AI rather than from it. As enterprises embrace AI, he explains the top risks and outlines strategic approaches for leaders to safeguard their AI ecosystems.

The rise of widely accessible generative AI platforms and tools drives decision-makers across businesses to evaluate where the technology can be leveraged within their stacks to enhance operations. According to the GitHub Survey, 92% of developers already use A.I. coding toolsOpens a new window . These platforms are becoming the foundation for everything in the enterprise – from processes to solutions and mindset.

This growing focus on increasing AI usage has sparked conversations centered on the potential risks of the tech. Still, as it becomes more pervasive, a more concerning element must be considered: the risks to AI.

Top 4 Risks to AI and What Leaders Need to Know

  • Stealing the model: Threat actors can target machine learning models that use public APIs by copying a model. By having the exact model on hand, cybercriminals can learn the ins and outs of its capabilities, testing the limits to see how they can successfully target the real thing. This threat vector is growing as enterprises seek to incorporate AI but don’t have the budget to fund the costly development. They resort to more cost-effective options, like GPT-4, which provide paying customers insight into their data. 
  • Data Poisoning: Public datasets used to train deep-learning models have the potential to be tampered with. If accessible to a bad actor, these sets can be manipulated, and models trained on poisoned data produce false or malicious predictions and decisions. While there’s yet to be a reported instance, this scenario could cause immense damage if done successfully.     
  • Prompt Injection: A risk that has already proven its harm to AI is prompt injection. Large Language Models (LLMs) are the foundation of AI tools, and they work by predicting what comes next – for chatbots, this is what is used to drive responses and give instructions. Hackers are using the prompt injection technique to “trick” chatbots by inputting a series of prompts or questions to deceive the application to override its existing instructions.
  • Extracting confidential information: There’s a growing concern about what these AI platforms store. Since LLMs are trained on data, information uploaded into these platforms could be stored and then recirculated if given the right query. As enterprises add LLMs to their tech stacks, the biggest risk will be what data is being uploaded. If teams are uploading personally identifiable information (PII) or confidential information, organizations run the risk of having this data publicly shared. 

See More: Confronting The Risks of Artificial Intelligence Technology

How Enterprises Can Mitigate the Risks to AI 

Widespread use of AI among enterprises is a growing trend, which means risks to AI will remain persistent unless properly addressed. Enterprises implementing these AI systems should do so responsibly, incorporating security industry guidance and including these systems in the threat landscape. 

So, how can enterprises hold up their end of AI responsibility? By pinpointing proper business use cases of AI to get ahead of threats. These use cases include:

  • Leveraging AI as a nimble defense: Today’s threats require a proactive approach to security, not a reactive one. By adding AI to their security stacks, businesses can address threats preemptively. Using AI-based systems, IT and security teams have access to extensive threat intelligence resources that are continually being collected, allowing them to rapidly enforce new policies and better understand bad actor’s evolving tactics. AI equips teams with the insight and solutions necessary to mitigate modern risks, which is essential as cybercriminals turn to AI to advance their methods. 
  • Advancing anomaly detection with generative AI: With the threat intelligence AI systems gather, IT and security teams are powered with real-time anomaly detection. False positives can make anomaly detection difficult for internal teams as they sift through results to ensure systems identify true anomalies. Instead of classifying data, generative models can be trained to understand better what “normal data” patterns look like, helping to reduce false positives. Generative AI can help alert spikes, bots, and other attacks potentially targeting systems by pulling from data sets. Accurately and efficiently supporting teams in thwarting attacks.
  • Reducing toil: Previously, security teams required experts in areas like certain programming languages to help inform the decision-making process during an attack. AI removes the need for having an expert in every language. If an organization faces an attack due to a malicious script trying to target their system, IT and security teams can turn to generative AI to feed in the script and receive instantaneous directions on patching any existing vulnerabilities to defend against the attack. Not only does this allow for quick remediation, but it also lifts the burden off internal teams.  

AI Risk Responsibility Isn’t Just on Enterprises

With all technologies, the industry plays a pivotal role in shaping future use. Almost overnight, AI became rapidly accessible, and its advancements are coming just as fast. There’s a demand to develop more responsible AIOpens a new window in the U.S., but the lack of clear-cut regulations leaves little clarity for those less familiar with the tech. 

As AI continues infiltrating the workplace, enterprises face the immense burden of rapidly and securely deploying AI-based systems to meet new demands while avoiding exposing themselves to the expanding threat vector. This tremendous weight is not one that organizations alone can carry, so developing regulations and offering guidance and frameworks are instrumental in the future of workplace AI. 

The AI Executive Order issued by the Biden administration helps to establish new standards for the safety and security of AI. Recent efforts by the White House have also driven commitments from leading AI companies to drive safe, secure, and trustworthy development of AI, which will further enhance AI usage among organizations implementing solutions from these companies. Despite progress, regulations take time to reach their full effect, and AI advancements aren’t slowing down to wait for policies to be implemented. 

Luckily, existing frameworks, guidance, and resources are available for organizations to ensure proper business use and implementation of AI as we await more firm regulations. For example, The National Institute of Standards and Technology (NIST) launched the NIST AI Risk Management Framework. This framework aims to better manage risks to individuals, organizations, and society associated with AI. 

Over the last year, we’ve seen how quickly AI evolves. As we await further regulations for the tech that offer more defined processes, enterprises should turn to existing frameworks like NIST’s to best equip themselves.

Collaboration Will Be Key To Protecting AI

For a successful future of AI use, it’s clear that enterprises need to shift mindset mindsets to focus on the risks with this technology. By placing resources behind protecting AI and calling for collaboration from business leaders, regulators, and industry experts, there’s a clear path to a more secure future that benefits from AI’s innovations. 

Why do you think enterprises should shift focus on the risks of AI vs. the risks from it? Let us know on FacebookOpens a new window , XOpens a new window , and LinkedInOpens a new window . We’d love to hear from you!

Image Source: Shutterstock

MORE ON AI RISK

Ashvin Kamaraju
Ashvin Kamaraju

Global Vice President, Engineering and Cloud, Thales

As Global Vice President of Engineering and Cloud Operations, Ashvin Kamaraju drives the technology strategy for Thales Cloud Protection & Licensing, leading a global organization of researchers and technologists that develop the strategic vision for the company’s portfolio of industry-leading data protection products and services. Previously Ashvin served as Vice President Global Engineering at Thales following its acquisition of Vormetric. He led a geographically distributed engineering organization that developed a broad portfolio of leading-edge data security products that met rigorous security standards and were designed for deployment in the enterprise, private and public clouds.
Take me to Community
Do you still have questions? Head over to the Spiceworks Community to find answers.