Mitigating AI Risks: Protecting from Identity Theft, Deepfakes, and Fraud

Navigating security and privacy challenges of generative AI

September 26, 2023


Amid the present wave of AI innovation lies a serious concern: growing privacy and security risks for organizations. Perry Carpenter of KnowBe4 explains a few of the novel and interrelated risks generative AI presents.

In case you haven’t heard, generative AI (the branch of artificial intelligence focused on generating text, audio, video, and images) is taking the world by storm. From creating realistic images to mimicking voices to writing convincing, error-free text, tools like ChatGPT, Google Bard, and DALL-E are opening up unprecedented creative possibilities. But with great innovation come serious risks. Let’s examine a few of the most pressing ones and how to tackle them.

Identity Theft, Sophisticated Impersonations and Synthetic Personas

The ability of GenAI to create realistic images, videos, speech, and personas has major security ramifications. Threat actors can leverage this technology to impersonate real people or fabricate entirely new identities for fraud and deception. They can use these fake identities to deliver highly advanced phishing and social engineering attacks against intended targets. For example, researchers have observed a sudden surge in fake, AI-generated LinkedIn profiles targeting individuals in sectors as diverse as government, cybersecurity, and education.


Deepfakes

One of the biggest dangers of GenAI is the ability to create deepfakes: manipulated images, audio, or video that make it appear as if someone said or did something they never actually did. Cybercriminals can synthesize deepfakes to spread disinformation, trick employees into revealing information or granting access to sensitive systems, commit fraud, or even extort victims. Studies show that 37% of organizations have experienced fraud involving synthesized speech, while 29% have been victims of deepfake videos.

Insider Threats And Misuse

Apart from external threats, GenAI also poses a high risk of insider misuse. Malicious or disgruntled employees can use GenAI to create unauthorized content, manipulate information, or breach privacy regulations. Insiders can also harness GenAI’s capabilities to create deceptive or defamatory content that harms individuals or tarnishes an organization’s reputation.

Confidentiality, Privacy And Disclosure Risk

As the popularity of GenAI rises, more employees are experimenting with these tools. Imagine an employee inputting private, copyrighted, or confidential data; because prompts may be retained and used to train future models, that input can inadvertently become available to other users of these tools. Something similar happened to Samsung, where an engineer accidentally shared a piece of source code that eventually led to a data leak; Samsung subsequently banned employees from using ChatGPT. Google, the developer of Bard, has likewise warned its employees against posting confidential data to AI chatbots.

User Profiling and Targeted Attacks

GenAI can create tailored, highly personalized content for users (automated and at scale) based on their preferences, interests, and behavior. Such profiling not only infringes on individual privacy but also raises concerns around data usage and consent. Worse, that personalized content can be weaponized into highly targeted social engineering attacks against specific users.

How Can Organizations Mitigate These Risks?

There are five steps organizations can take to mitigate the security and privacy risks posed by GenAI:

1. Improve employee awareness

As generative AI makes its way into various aspects of our personal and professional lives, it is important to educate employees about the risks of using these tools. Define user policies that spell out acceptable and unacceptable behavior, and communicate expectations, guidelines, and the consequences of misuse.

2. Foster a security instinct

Humans, when properly trained, can outperform technical controls at spotting unusual or abnormal cues in emails, text messages, and phone calls. Use simulated social engineering tests and classroom exercises to help employees build a security gut feeling so that they stay vigilant and do not fall prey to manipulated and deepfake content.

3. Implement acceptable use policies

Work with legal teams and other leaders in the organization to create security policies around the appropriate use of GenAI tools. Using examples, clearly outline what is and is not allowed (along with the potential consequences of misuse) and explain why it is important not to share private or confidential data with AI. Simple technical guardrails, like the sketch below, can back the policy up.
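As a purely illustrative sketch (not something the article prescribes), the following Python snippet shows one way a pre-submission check might flag policy violations before a prompt ever reaches an external AI tool. The rule names and regular expressions are hypothetical stand-ins; a real deployment would use an organization-specific DLP ruleset.

    import re

    # Hypothetical patterns for data that policy forbids sending to external
    # AI tools; a real ruleset would be organization-specific.
    BLOCKED_PATTERNS = {
        "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "confidential_marker": re.compile(
            r"(?i)\b(confidential|internal only|do not distribute)\b"
        ),
    }

    def violations(prompt):
        """Return the names of every policy rule the prompt text trips."""
        return [name for name, pattern in BLOCKED_PATTERNS.items()
                if pattern.search(prompt)]

    if __name__ == "__main__":
        text = "CONFIDENTIAL: draft earnings and key-abc123def456ghi789"
        hits = violations(text)
        if hits:
            print("Blocked before reaching the AI tool; rules tripped:", hits)
        else:
            print("Prompt passed the acceptable-use check.")

A check like this pairs the written policy with immediate, teachable feedback: the employee learns at the moment of submission why the prompt was blocked.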

4. Monitoring and auditing

Monitor the usage of Generative AI within the organization. Implement logging and tracking mechanisms so that suspicious or unauthorized activities can be identified and blocked. Monitoring and auditing practices will serve as a deterrent, discouraging users from engaging in improper or unauthorized activities while also promoting a culture of accountability and responsible use.
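To make this concrete, here is a minimal, hypothetical sketch of how GenAI usage might be surfaced from proxy logs. The domain list and the simple "user domain" log format are assumptions made for illustration; real monitoring would plug into your proxy, CASB, or SIEM.

    from collections import Counter

    # Hypothetical list of generative-AI endpoints to watch; a real list
    # would come from your proxy or CASB vendor's URL categories.
    GENAI_DOMAINS = {"chat.openai.com", "api.openai.com", "bard.google.com"}

    def flag_genai_usage(log_lines):
        """Count requests per user to known GenAI domains, given simple
        'user domain' proxy-log lines (a made-up format for this sketch)."""
        usage = Counter()
        for line in log_lines:
            parts = line.split()
            if len(parts) != 2:
                continue  # skip malformed lines
            user, domain = parts
            if domain in GENAI_DOMAINS:
                usage[user] += 1
        return usage

    if __name__ == "__main__":
        sample = [
            "alice chat.openai.com",
            "bob intranet.example.com",
            "alice api.openai.com",
        ]
        for user, count in flag_genai_usage(sample).items():
            print(f"{user}: {count} GenAI request(s)")  # alice: 2 GenAI request(s)

In practice, the resulting counts would feed an alerting or review workflow rather than print to a console.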

5. Implement robust access controls

Implement strict access controls and authorization mechanisms to limit employee access to AI tools and platforms. Grant access only to employees who have a legitimate business need for these tools.
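Here is a minimal sketch of such need-to-know gating, assuming a hypothetical user directory and group names; in production, this check would query your identity provider (LDAP or SSO groups) rather than an in-memory dictionary.

    # Only members of an approved group may use the GenAI tool. The group
    # names and directory below are hypothetical stand-ins for an
    # identity-provider (LDAP/SSO) lookup.
    APPROVED_GROUPS = {"marketing-genai-pilot", "engineering-genai-pilot"}

    USER_DIRECTORY = {
        "alice": {"engineering-genai-pilot", "staff"},
        "bob": {"finance", "staff"},
    }

    def can_use_genai(user):
        """Grant access only when the user holds an approved group membership."""
        return bool(USER_DIRECTORY.get(user, set()) & APPROVED_GROUPS)

    for user in ("alice", "bob"):
        print(user, "->", "allowed" if can_use_genai(user) else "denied")

Tying access to group membership also gives you a single revocation point when an employee changes roles or leaves the organization.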

Although AI poses several dangers to organizations, its benefits far outweigh the risks. Implementing a blanket ban is neither a realistic nor a sustainable approach for businesses. Instead, organizations should give employees the necessary education and security guidance, control and monitor the use of these tools, and formulate strategies that protect business interests while embracing the advantages these technologies offer.

How are you navigating the evolving security risks posed by AI and generative AI? Share with us on Facebook, X, and LinkedIn. We’d love to hear from you!



Perry Carpenter

Chief Evangelist and Security Officer, KnowBe4

Perry Carpenter is the author of “The Security Culture Playbook: An Executive Guide to Reducing Risk and Developing Your Human Defense Layer” (Wiley, 2022), his second Wiley book on the subject. He is chief evangelist and security officer for KnowBe4 [NASDAQ: KNBE], the world’s largest security awareness training and simulated phishing platform.