5 Predictions for Generative AI Attacks and 5 Defense Strategies

Are we ready to defend against threats that generative AI brings with its many benefits?

October 12, 2023

Generative AI Attacks

For years, security researchers have warned that nation-states and other criminal actors will soon use artificial intelligence (AI) to automate attacks end-to-end, increasing their speed, scale, and severity. Steve Povolny of Exabeam shares how to defend against common threats that come with generative AI.

With the launch of large language model (LLM) generative AI tools such as ChatGPT, the day that once seemed far off is fast becoming a reality. Security researchers have used GPT-4 to enter prompts, look up code, and stitch together code blocks to create malware. However, the market is even further ahead.

Tools are already available on the Dark Web that enable attackers to automate the development of malicious code, its quality assurance testing, and execution. Bad actors can use these capabilities to productionize and weaponize the development of malware and ransomware toolkits, endlessly changing key attributes, such as signatures, feature sets, and attack tactics as they seek to outpace enterprise security teams. 

5 Ways Bad Actors Will Use Generative AI to Launch Attacks 

So, how will malicious attackers use generative AI to increase the success of their attacks? Here are five predictions – followed by five strategies for improving organizations’ cybersecurity defenses. 

1. Nation-states will develop their own LLMs

Nation-states, which have nearly limitless funds at their disposal, will likely develop their own LLM generative AI tools dedicated exclusively to developing and training malware. They’ll hire large teams to evolve models and build next-generation malware development tools that will be difficult to combat.

In a recent speech, Rob Joyce, director of cybersecurity at the National Security Agency, warned firms working in artificial intelligence, big data, quantum computing, healthcare, medicine, pharmaceuticals, and military technologies to be on alert for growing nation-state attacks.

2. Criminal actors will monetize generative AI

Top criminal actors will use generative AI to develop toolkits, sell them on criminal marketplaces, and roll out new iterations of malware as soon as enterprises evolve their defenses. The enterprising ones will offer customers performance guarantees because they can simply update code if attacks don’t deliver on marketing claims.

3. Ransomware will become an even worse threat

Malicious actors leverage ransomware because it lets them gain control over data, exfiltrate intellectual property, and destabilize the operations of leading companies and government agencies. They can reap financial ransoms by selling back control over systems and data backups, monetize insights by selling or using the intelligence, or do both.

To date, the most successful ransomware attacks have been human-guided. Malicious actors buy access to compromised networks, prioritize targets, and launch attacks, with a 30-percent success rate of penetrating target defenses and a five-percent success rate of achieving ransom payouts. However, generative AI tools will level the playing field, allowing bad actors to train models to launch these higher-level attacks. This technology also enables malicious actors to develop more convincing phishing emails and social engineering attacks, often the chosen vector for delivering ransomware. Earlier this year, the head of the Canadian Centre for Cyber Security said his agency had seen AI being used for carefully crafted phishing emails and misinformation.

4. Automation’s true value is repeatability

Automation will increase the speed and scale of attacks, but its true value is in creating consistent, repeatable processes. Instead of using bespoke processes to develop malware, nation-states and criminal actors will create technology production lines to develop new malware. They’ll use teams and generative AI to source new malware feature sets and families, test their effectiveness, and put them into production. This process will happen on a 24/7 basis across actors and geographies.

5. Bad actors will exploit trusted tools and data

Supply chain attacks have been around for a while, and they’re very effective because customers trust major providers. Malicious actors can use generative AI to search for flaws in solutions, such as systems of record or security tools, injecting malicious code into executables. They’ll also increasingly target generative AI-guided cybersecurity tools, trying to poison the data that models are trained on to render them ineffective.

Understanding software pathways, executables, folder structures, and registry entries for providers’ solutions is challenging. However, generative AI can easily learn these nuances, making it easier – and cheaper – to launch these types of attacks. 


Using Generative AI to Fend Off Sophisticated Attacks 

So, how can organizations get ready for generative AI-driven attacks? Surprisingly, the answer might lie in the technology itself. Sixty-seven percent of IT leaders said they have prioritized generative AI for their organization within the next year or so, and over one-third have named it as a top priority.

1. Cooperating with peers is essential

Addressing the challenge of generative AI-powered attacks is larger than any one organization. Chief information security officers (CISOs) and other leaders of enterprises and government agencies should use industry forums to share new threat information and strategies so that all parties can improve their responses.
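To make that sharing machine-readable rather than ad hoc, teams often exchange indicators in the STIX format. Below is a minimal sketch using the open-source stix2 Python library; the hash value and the indicator description are illustrative placeholders, not real threat data.

```python
# Minimal sketch: packaging an observed indicator as STIX 2.1 so peers in an
# industry forum or ISAC can ingest it with any STIX-aware tool.
# The SHA-256 value below is illustrative, not a real malware hash.
from stix2 import Indicator, Bundle

indicator = Indicator(
    name="Suspected AI-generated ransomware dropper",
    description="Hash observed in a blocked phishing campaign (example only)",
    pattern="[file:hashes.'SHA-256' = "
            "'aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f']",
    pattern_type="stix",
)

# Bundle the indicator for distribution; peers deserialize it on their side.
bundle = Bundle(objects=[indicator])
print(bundle.serialize(pretty=True))
```

Standard formats like this are what make forum-based sharing actionable: a peer can drop the bundle straight into their own detection tooling instead of re-typing details from an email thread.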

2. Generative AI can improve threat intelligence

Training AI on new threats is easier now that generative AI can ingest and reason over large volumes of unstructured data. Teams can point generative AI at large data sets to uncover new insights about how threat patterns are changing and evolve their processes to keep pace.
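As a rough illustration of that workflow, the sketch below batches raw alert text into a summarization prompt. Note that `query_llm` is a hypothetical stand-in for whichever model endpoint a team actually uses; it is not an API from the article.

```python
# Sketch: asking an LLM to summarize emerging threat patterns from raw alerts.
# query_llm is a hypothetical callable (prompt string in, completion string
# out) standing in for a real model API.
from textwrap import dedent

def summarize_threats(alerts: list[str], query_llm) -> str:
    prompt = dedent("""\
        You are assisting a security operations team. Summarize the recurring
        tactics, techniques, and likely campaigns in the alerts below, and
        flag anything that looks new relative to commodity malware.

        Alerts:
        """) + "\n".join(f"- {a}" for a in alerts[:200])  # cap prompt size
    return query_llm(prompt)
```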

3. Companies can use new technology to simulate threats

Security teams can leverage generative AI and synthetic data to plan and simulate attacks on their own networks, testing their response. They can use lessons learned to improve attack preparedness and strengthen overall defenses.
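One hedged sketch of the synthetic-data side of this idea: generating fake "impossible travel" login events to replay against detection rules. The field names, usernames, and IP addresses are illustrative placeholders; adapt them to your SIEM's actual schema.

```python
# Sketch: synthetic login events that *should* trip impossible-travel or
# velocity rules. Replaying them tests whether detections actually fire.
import json
import random
from datetime import datetime, timedelta, timezone

USERS = ["alice", "bob", "carol"]                 # illustrative accounts
SUSPECT_IPS = ["203.0.113.7", "198.51.100.23"]    # documentation-range IPs

def synthetic_events(n: int = 10) -> list[dict]:
    base = datetime.now(timezone.utc)
    events = []
    for i in range(n):
        events.append({
            "user": random.choice(USERS),
            "src_ip": random.choice(SUSPECT_IPS),
            "action": "login_success",
            # Successful logins minutes apart from distant IPs are the
            # signature this exercise expects detection rules to catch.
            "ts": (base + timedelta(minutes=3 * i)).isoformat(),
        })
    return events

print(json.dumps(synthetic_events(), indent=2))
```

If the replayed events pass silently, that gap becomes a concrete lesson learned for tuning rules before a real attacker finds it.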

4. Generative AI will improve vulnerability testing

Organizations are often slow to identify vulnerabilities and apply patches – even for common and well-documented vulnerabilities. Generative AI can be used in the secure software development lifecycle (SSDLC) to eliminate low-hanging fruit in the form of configuration errors, coded vulnerabilities, and design flaws. Emerging research suggests it could even be used to identify zero-day vulnerabilities in existing code. Beyond vulnerabilities, generative AI could also flag data or systems an organization no longer needs but that still present risk, helping identify “tech debt” and other non-essentials on networks.
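One way this could slot into an SSDLC pipeline, sketched under the same assumption as before that `query_llm` is a hypothetical stand-in for a real model API: a pre-review gate that sends a pull request diff to an LLM and asks for security findings only.

```python
# Sketch: an SSDLC gate that asks an LLM to flag obvious configuration
# errors and coded vulnerabilities in a diff before human review.
import subprocess

def review_diff(query_llm, base_branch: str = "main") -> str:
    # Collect the changes relative to the base branch.
    diff = subprocess.run(
        ["git", "diff", base_branch, "--", "."],
        capture_output=True, text=True, check=True,
    ).stdout
    prompt = (
        "Review this diff for security issues only: injection flaws, "
        "hardcoded secrets, weak crypto, and insecure defaults. "
        "Respond with a findings list or 'no findings'.\n\n"
        + diff[:20000]  # truncate very large diffs to bound prompt size
    )
    return query_llm(prompt)
```

A gate like this is a complement to, not a replacement for, established static analysis tools; its value is catching the low-hanging fruit the article describes before it ships.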

5. Generative AI will improve detection

Generative AI will improve pattern recognition and anomaly detection, helping identify the most likely targets, the users who pose the greatest insider threat risk, and emerging attack trends, so that security operations teams can respond quickly to unknown threats, decreasing network breaches and dwell time when invaders get in.
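For a sense of the statistical baseline such models enrich, here is a deliberately simple anomaly check: flag users whose daily login count sits far outside their historical norm. Production UEBA models are far more sophisticated; this z-score sketch only illustrates the underlying idea, and all names and thresholds are illustrative.

```python
# Sketch: flag users whose activity today deviates sharply from their own
# history. A simple z-score baseline of the kind richer AI models build on.
from statistics import mean, stdev

def anomalous_users(history: dict[str, list[int]], today: dict[str, int],
                    threshold: float = 3.0) -> list[str]:
    flagged = []
    for user, counts in history.items():
        if len(counts) < 2:
            continue  # not enough history to establish a baseline
        mu, sigma = mean(counts), stdev(counts)
        # Flag when today's count is more than `threshold` deviations high.
        if sigma and (today.get(user, 0) - mu) / sigma > threshold:
            flagged.append(user)
    return flagged

# e.g. anomalous_users({"alice": [4, 5, 6, 5]}, {"alice": 42}) -> ["alice"]
```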

Get Ready Now for Generative AI Attacks 

The time to get ready for generative AI attacks is today. Nation-state and criminal actors are already experimenting with this new technology and using it to evolve their attack strategies and toolkits. At the recent DEF CON conference, the AI Village’s red team challenge invited security professionals to test how generative AI could be exploited for malicious gain. Participants were able to build a fraudulent retail website, set up fake corporate accounts, and host a malware server.

Enterprises and government agencies can work with cybersecurity providers to evaluate their fitness to withstand advanced attacks, deploy new generative AI capabilities, and evolve training and processes to maximize the insights provided by these tools.

How are you leveraging generative AI to tackle evolved attacks? Share with us on Facebook, X, and LinkedIn. We’d love to hear from you!



Steve Povolny

Director of Security Research, Exabeam

Steve Povolny serves as director of security research at Exabeam. He brings more than 15 years of experience leading global teams of security researchers, data scientists, and developers to the New-Scale SIEM leader. Under Steve’s supervision, the security research team integrates world-class findings and insights into the industry’s top cybersecurity solutions to disrupt cybercrime and defend customers’ critical assets. Prior to joining Exabeam, he was head of advanced research at Trellix, formerly McAfee. In this role, he led global security teams and the overall industry in secure product development and the mitigation of critical vulnerabilities.