AI Adoption by Employees Exceeds Basic Risk Management Controls

48% of employees have entered organizational data into an AI-powered tool their company hasn’t provided them for work.

September 29, 2023

  • More organizations and employees are adopting AI for their work. At the same time, the technology’s drawbacks necessitate risk management controls.
  • So, how does organizations’ AI adoption compare with the presence of basic risk management controls? A recent study by AuditBoard and The Harris Poll set out to find out.

As more organizations and employees realize the benefits of artificial intelligence (AI), suspicion that the technology could displace human workers is giving way to rapid adoption in the workplace. That said, the technology’s drawbacks require certain risk management controls.

AuditBoard recently commissioned The Harris Poll to survey American employees about the implementation of AI tools in relation to the presence of basic risk management controls. A few interesting trends and potential concerns emerged from the study.

The following are a few insights from the study.

More Employees Are Adopting AI

The good news is that employees are becoming less suspicious of AI. As more employees realize its benefits and seek more efficient ways of working, they are incorporating the technology into their daily work. According to the study, 51% of employees use AI-powered tools like Grammarly, ChatGPT, and DALL-E.

So, what are employees using these tools for? While 26% use them to do research for their work, 23% use them to create written materials. About 22% use them to create content, and 19% for design work.

What employees use AI-powered tools for

Source: June 2023 AuditBoard/Harris Poll

Few Companies Have a Formal Policy About Non-Company-Supplied AI Tools

While over half of the respondents use such tools, only 37% said their organization has a formal policy on using non-company-supplied AI-powered tools. From a risk management perspective, this represents an unmitigated risk: workers can use AI tools however they wish with sensitive organizational information. The following statistic reinforces this concern.

The study found that 48% of employees have entered organizational data into an AI-powered tool their company hadn’t provided them for work. This highlights risks to data privacy, security, and AI reliability, as workers use AI tools that have not been vetted by their organizations’ IT security teams.

So, what type of organizational data are employees entering into AI-powered tools? While 24% enter written material that needs editing, 21% enter reports or material that needs summarizing. About 18% enter process documentation, and an equal share enter business results data. About 16% enter software code, and 14% enter proprietary information.


Many Employees Believe AI-powered Tools Are Safe and Secure

According to the study, 64% of respondents believed using AI-powered tools in their work was safe and secure. This underscores a more significant concern.

A major risk tied to artificial intelligence is a human cognitive bias called the Dunning-Kruger effect, in which people with limited knowledge of a domain overestimate their competence in it. In the context of AI, this bias may lead workers to overestimate an AI tool’s capabilities despite lacking an understanding of the technology. For example, a worker may use an unapproved AI-powered tool to analyze organizational data, receive unintended results, and take those results at face value, placing too much trust in the tool’s capabilities.

Balance AI Adoption With Risk Management Strategies

While it is positive that employees are losing their inhibitions about adopting AI-powered tools, the study underscores the need for robust risk management strategies. A few basic controls include a clear policy for using AI-powered tools, data-handling guidelines, and employee education on AI’s limitations. The use of AI in the workplace will continue to expand, so a comprehensive approach to policy development and risk management is necessary.
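As a purely hypothetical illustration of one such basic control, an organization might screen text for obvious sensitive-data markers before it is submitted to an external AI-powered tool. The patterns and function names below are invented for this sketch; a real deployment would rely on a proper data loss prevention (DLP) system rather than a handful of regexes.

```python
import re

# Illustrative patterns only; real DLP tooling is far more sophisticated.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "confidential_marker": re.compile(
        r"\b(confidential|proprietary|internal only)\b", re.IGNORECASE
    ),
}

def flag_sensitive_text(prompt: str) -> list[str]:
    """Return the names of all sensitive-data patterns found in `prompt`.

    An empty list means nothing matched; a non-empty list could block the
    submission or warn the employee before the text leaves the organization.
    """
    return [
        name for name, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(prompt)
    ]

print(flag_sensitive_text(
    "Summarize this INTERNAL ONLY report for jane.doe@example.com"
))
# → ['email_address', 'confidential_marker']
```

A check like this enforces a policy at the point of use rather than relying solely on employees remembering the rules, which is the gap the survey highlights.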

How are you ensuring your employees are not adopting unapproved AI-powered tools? Tell us on LinkedIn, X, or Facebook. We’d love to hear from you!



Karthik Kashyap
Karthik comes from a diverse educational and work background. With an engineering degree and a Master’s in Supply Chain and Operations Management from Nottingham University, United Kingdom, he has close to 15 years of experience across different industries, a significant part of it as a content marketing professional. Currently, as an assistant editor at Spiceworks Ziff Davis, he covers a broad range of topics across HR Tech and Martech, from talent acquisition to workforce management and from marketing strategy to innovation. Besides being a content professional, Karthik is an avid blogger, traveler, history buff, and fitness enthusiast. To share quotes or inputs for news pieces, please get in touch at karthik.kashyap@swzd.com