Cyber criminals are likely to exploit the power of generative AI platforms -- including ChatGPT -- to make phishing attacks or other malicious activity more difficult to stop.

Nathan Eddy, Freelance Writer

March 16, 2023

5 Min Read

Artificial intelligence technology, including generative AI, enables malicious actors to increase the speed and variation of their attacks by modifying code in malware.

It can also be deployed to create thousands of variations of social engineering attacks to increase the probability of success.

With malware, generative AI engines such as ChatGPT could enable cybercriminals to make infinite code variations to stay one step ahead of the malware detection engines.
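
To see why that matters for signature-based defenses, here is a toy sketch (not from the article, using a harmless directory-listing script as a stand-in for malicious code): two functionally identical programs hash differently, so a detection engine keyed to known-bad file hashes misses the rewritten variant.

```python
# Toy sketch: trivial code variation defeats hash-based signatures.
# The "payloads" are harmless directory listings standing in for
# malware variants; only a comment and a variable name differ.
import hashlib

variant_a = b"import os\nfor f in os.listdir('.'): print(f)\n"
variant_b = b"import os\n# padding\nfor item in os.listdir('.'): print(item)\n"

known_bad = {hashlib.sha256(variant_a).hexdigest()}  # "signature database"

for name, payload in (("variant_a", variant_a), ("variant_b", variant_b)):
    digest = hashlib.sha256(payload).hexdigest()
    print(f"{name}: flagged={digest in known_bad}")  # variant_b slips through
```

This is exactly why modern detection engines layer heuristics and behavioral analysis on top of exact signatures.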

ChatGPT is the publicly accessible interface to OpenAI’s GPT-3.5 generative AI engine, which is focused entirely on natural language processing.

While it has been used to generate some software, its real forte is acting as a conversational AI that delivers human-quality text.

As generative AI technologies advance, so will the variety of ways this technology is used for malicious intent. This in turn requires IT security professionals to adapt their defense posture -- and even employ AI to fight back.

More than half the respondents to a February BlackBerry survey of 1,500 IT decision-makers across North America, the UK, and Australia said they believe there will be a successful cyberattack credited to ChatGPT within the year.

The report also revealed more than eight in 10 (82%) respondents said they plan to invest in AI-driven cybersecurity in the next two years. Nearly half (48%) said they plan to invest before the end of the year.

Generative AI Used in Multiple Attack Vectors

Mike Parkin, senior technical engineer at Vulcan Cyber, explains threat actors can use conversational AI to craft convincing dialogs useful for social engineering, either in emails or other text-based interactions.

“There are other applications, based on other machine learning engines, that can be used to create more sophisticated code and help threat actors bypass existing defenses,” he says. “It’s all a matter of what kind of data the AI is trained on and what it’s designed to do.”

A natural language AI like GPT-3, and ChatGPT in particular, can easily augment threat actors’ abilities to generate convincing social engineering hooks.

“In the right circumstances, they could be used for live chat sessions or even to script live conversations,” he says. “That’s not even including the possibility of using machine learning techniques to develop code specifically to bypass existing defenses.”

Parkin says IT security teams can expect to see a fresh wave of phishing, cast-netting, and spear-phishing attacks that are more sophisticated than what they may have dealt with in the past.

“However, we can also expect the defense to quickly adapt as well,” he adds. “We can expect more sophisticated filters for email or text messaging that can help identify AI-created content on the fly.”
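
One published approach such filters can build on -- a sketch, not a description of any vendor’s product -- is perplexity scoring: text a language model finds unusually predictable is more likely to be machine-generated. The snippet below assumes the Hugging Face transformers and torch packages, and the threshold is purely illustrative.

```python
# Minimal perplexity-based detector sketch (pip install torch transformers).
# THRESHOLD is a hypothetical cutoff; real filters combine many signals.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing input_ids as labels returns the average cross-entropy loss.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

THRESHOLD = 40.0  # hypothetical; tune on labeled data
msg = "Please review the attached invoice and confirm payment details."
score = perplexity(msg)
print(f"perplexity={score:.1f}, flagged as AI-generated={score < THRESHOLD}")
```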

Parkin cautions AI algorithms will only continue to improve with better machine learning and deep learning models, making life more difficult for cybersecurity practitioners.

“Our defenses will have to adapt to deploy more AI techniques that are specifically tailored to counter AI-based attacks,” he says.

Deploying AI as a Defense Resource

Patrick Harr, CEO at SlashNext, says generative AI will forever change the threat landscape for both security vendors and cybercriminals.

“It will be important for organizations to be prepared to protect themselves with security solutions that use generative AI capabilities to detect these types of threats,” he explains. “Legacy security technology will not detect these types of attacks.”

In practice, that means adopting security tools that themselves use AI to detect and stop such attacks.

“As chatbots become better and gain more uses, hackers will be able to diversify the types of threats they can deliver, which will increase the likelihood of a successful compromise,” Harr explains.

Generative AI technology can in fact be used to develop cyber defenses capable of stopping ransomware, business email compromise, and other phishing threats developed with ChatGPT.
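
To make “AI-driven detection” concrete, the toy classifier below shows the kind of building block such defenses rest on. The scikit-learn pipeline and four-message training set are illustrative assumptions, not any vendor’s product.

```python
# Toy phishing-text classifier: TF-IDF features + logistic regression.
# Assumes scikit-learn; the inline dataset is far too small for real use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Your account is locked, verify your password here immediately",
    "Urgent wire transfer needed, reply with banking details",
    "Team lunch is moved to noon on Friday",
    "Attached are the meeting notes from yesterday",
]
train_labels = [1, 1, 0, 0]  # 1 = phishing, 0 = benign

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_texts, train_labels)

suspect = "Please verify your password to avoid account suspension"
print("phishing probability:", round(clf.predict_proba([suspect])[0][1], 2))
```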

Expanding Understanding of AI Defense Capabilities

Casey Ellis, founder and CTO at Bugcrowd, says to adapt to a future in which AI will be a partner in defending systems and data from cyberattacks, security teams must “get their hands dirty” with flexible interfaces like ChatGPT to get a better understanding of AI’s current capabilities and limitations.
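
That hands-on exploration can be as simple as scripting a few probes against the model and comparing its judgments with your analysts’. A hypothetical sketch using the openai package’s pre-1.0 chat interface (the model choice, prompt, and test message are all assumptions for illustration):

```python
# Hypothetical probe: ask a chat model to triage a suspicious message.
# Assumes the openai package (pre-1.0 interface) and OPENAI_API_KEY set.
import openai

suspect = "Your mailbox is full. Visit http://example.test/renew to keep receiving mail."

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are a security analyst. Rate phishing likelihood 0-10 and explain briefly."},
        {"role": "user", "content": suspect},
    ],
)
print(resp.choices[0].message["content"])
```

Running such probes a few times quickly surfaces both the model’s fluency and its inconsistency -- precisely the kind of limitation Ellis suggests teams learn firsthand.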

“Security leaders should also implement appropriate training and education programs for staff to ensure they are equipped to work alongside AI systems,” Ellis says. “I would also recommend developing protocols for human-machine collaboration and establishing clear lines of responsibility.”

Ellis adds that it's also important to continuously evaluate the effectiveness of AI systems, adjust them as needed to ensure optimal performance, and stay up to date with emerging threats and evolving technologies in the cybersecurity landscape.

“Ultimately, cybersecurity is a human problem that has been sped up by technology,” he says. “The entire reason our industry exists is because of human creativity, human failures, and human needs.”

He says it is unlikely that AI will completely take over cybersecurity functions, as human operators bring intuition, creativity, and ethical decision-making to the task -- qualities that will be difficult, if not impossible, to fully replace with AI.

“That said, AI will continue to play an increasingly important role in cybersecurity as it becomes more advanced, and a human-machine combination is necessary to effectively and ethically defend against evolving threats,” Ellis notes.

Parkin points out cybersecurity leadership and practitioners need to be ready to deal with a potential wave of fresh and sophisticated social engineering attacks.

“User education will become even more of a priority, as will having a coherent view of their environment and potential vulnerabilities so they can cut their risk surface down to a manageable level,” he says.


About the Author

Nathan Eddy

Freelance Writer

Nathan Eddy is a freelance writer for InformationWeek. He has written for Popular Mechanics, Sales & Marketing Management Magazine, FierceMarkets, and CRN, among others. In 2012 he made his first documentary film, The Absent Column. He currently lives in Berlin.
