PhaaS and AI Enable Anyone to Be a Cybercriminal. So What?

Understanding the new face of phishing and safeguarding against it.

October 23, 2023


Phishing-as-a-service (PhaaS) and AI are quickly emerging as cybercriminals’ latest weapons. Such offerings democratize cybercrime and disrupt the cyber landscape by rapidly intensifying the sophistication and scale of attacks. Organizations should implement proactive security measures to safeguard assets from evolving phishing threats, says Candid Wüest of Acronis Research.

A new form of cyberattack is quickly gaining steam. It operates on an industrial scale and requires minimal technical sophistication. In fact, for around $60, anyone can launch a phishing attack using these services.

Many are now familiar with the growing popularity of software-as-a-service (SaaS), where vendors provide services and software to a client with a subscription-based, pay-as-you-use model. These delivery models are popular for their ease of use and affordability, lowering overhead and limiting the amount of permanent hardware installed on-site. 

Cybercriminals recognize the power of as-a-service offerings and have used the same model to make their nefarious tools available to the masses. These days, Crimeware-as-a-service or Phishing-as-a-service (PhaaS) can turn the average computer user into an information-stealing cybercriminal.

As-a-Service Offerings Democratize Cybercrime

One infamous example of a PhaaS platform is Caffeine, a sophisticated platform that equips attackers with “phishing kits.” The kits comprise everything a user needs to launch a successful phishing attack, from email templates to a list of potential targets, and they allow attackers to customize their approach by pinpointing the regions or countries they want to target. Caffeine costs about $250 per month, placing it on the expensive end of the spectrum; PhaaS kits typically go for $10 to $300.

Caffeine’s higher fee is all the more concerning because it buys a long list of capabilities and, as a result, a very low barrier to entry. Attacks that were once solely the purview of sophisticated cybercriminals are now available to anyone with a big enough bank account and an email address. This “democratization of actors,” as Wayne Jacobs of the FBI’s Cyber Division dubs the phenomenon, is a major reason phishing attack volume is ballooning to unprecedented levels.

Acronis’ recent Mid-Year Cyberthreats Report found that in the first half of 2023, phishing was the most popular method of stealing credentials, making up 73% of all attacks. Specifically, in Q1 2023, Acronis identified a 15% spike in phishing and malicious URLs compared to Q4 2022. New phishing emails emerge every day, underscoring the importance of a multilayered approach to cybersecurity.


Accessible AI Tools Help Crimeware-as-a-Service Scale

Another democratization-centric trend is exacerbating this new cyberattack landscape. Tools like ChatGPT, Bard, and the suite of apps Microsoft and other large technology providers offer have made AI available to anyone. Bad actors are leveraging suddenly accessible AI to both write and scale their attacks. 

These large language models (LLMs) make it even more difficult for employees and consumers to distinguish malicious emails from real messages. Tell-tale signs that an email was a phishing scam, like poor grammar or spelling, have all but disappeared now that ChatGPT and other LLMs can craft these messages for the threat actor. Anyone, anywhere in the world, can now attack a specific target with well-crafted messaging regardless of their proficiency in the target’s language. These innovations have removed the traditional red flags of phishing attempts while enabling cybercriminals to be more convincing in their attacks.

These tools, ChatGPT in particular, are constantly fine-tuned with user input, becoming more accurate and capable. AI chatbots hold conversations with users, and many can execute complex tasks, such as drafting a phishing email with believable orders and instructions for recipients. ChatGPT can also write code, including the kind that underpins a hack, even if tests reveal that this capability doesn’t work quite as well as originally thought.

It doesn’t stop at emails, though. These AI models also enable more convincing voice scams that impersonate executives. The technology can mimic a person’s voice almost perfectly, matching their tone in their native language, all by leveraging publicly available recordings of the executive. This type of attack has been used to impersonate C-suite executives and entice monetary authorizations. The same technology can also up-level other historically successful scams, like the age-old gift card email from your “CEO.” In the old days, you would receive a message riddled with typos from an email address that clearly didn’t belong to anyone at your company. Today, you may get a phone call from a number formatted like your company’s, but this time in your CEO’s voice. This is a much more difficult scam to identify, especially for someone in an entry-level position who may not readily have access to the CEO to verify the ask.

While developers did not design AI chatbots and assistants for malicious purposes, cybercriminals use them to develop, deploy, and scale their attacks more efficiently. The reality with any new technology is that the good guys wait for rules and regulations, testing the technology to understand its pitfalls before implementation, while the bad guys face no such restrictions. IT professionals are left in a never-ending game of cat and mouse.

Simplify and Remain Proactive to Defend Against Evolving Threats

The combination of Crimeware-as-a-service and AI makes dangerous capabilities accessible to the masses, where once they were only the domain of the very technologically literate. Organizations’ cyber defenses must evolve along with the capabilities of attackers to remain vigilant and effective. Though cyberattacks are becoming increasingly sophisticated, business leaders must resist the urge to conflate sophistication with complexity. The best way to stand up a solid defense is to keep things simple.

Allow the same tools bad actors use to do some heavy lifting. ChatGPT and other LLM-based AI technology can create realistic training simulations and synthetic data sets to train endpoint solutions. Organizations can couple these technology tools with proactive measures, such as promptly patching software vulnerabilities, using multi-factor authentication, and keeping a well-maintained software and hardware inventory.
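To illustrate what a simple, layered check looks like in practice, here is a minimal sketch of a heuristic email scorer that flags signals AI-written lures cannot easily hide, such as links whose domain does not match the claimed sender. All names, keywords, and thresholds here are hypothetical, not a production filter or any vendor’s actual method:

```python
import re
from urllib.parse import urlparse

# Hypothetical urgency keywords; real filters use much larger, tuned lists.
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "invoice"}

def phishing_score(sender_domain: str, body: str) -> int:
    """Return a naive risk score for an email body.

    Scores urgency language and sender/link domain mismatches, two signals
    that survive even when grammar and spelling are LLM-perfect.
    """
    score = 0
    lowered = body.lower()
    # 1. Count urgency/pressure words.
    score += sum(1 for word in URGENCY_WORDS if word in lowered)
    # 2. Penalize links whose host does not match the sender's domain.
    #    Note: endswith() is deliberately naive; a real check would compare
    #    registered domains to avoid "badexample.com" matching "example.com".
    for url in re.findall(r"https?://\S+", body):
        host = urlparse(url).hostname or ""
        if not host.endswith(sender_domain):
            score += 2
    return score

lure = (
    "Your account has been suspended. Verify immediately: "
    "http://login.examp1e-security.com/reset"
)
print(phishing_score("example.com", lure))  # → 5
```

Heuristics like this are only one layer; they complement, rather than replace, MFA, patching, and user training.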

Bad actors are using AI and as-a-service offerings to democratize cybercrime. So what? Leaders who take a measured, simplified approach to this evolving threat will best position their organizations to identify and react to threats, regardless of who mounts them. 

What strategies have you followed to defend against democratized cyber threats? Let us know on Facebook, X, and LinkedIn. We’d love to hear from you!



Candid Wüest
Candid Wüest is the VP of Research at Acronis, where he researches new threat trends and comprehensive protection methods. Previously, he worked for more than sixteen years as the tech lead for Symantec’s global security response team.