
Dear enterprise IT: Cybercriminals use AI too


In a 2017 Deloitte survey, only 42% of respondents considered their institutions to be extremely or very effective at managing cybersecurity risk. The pandemic has certainly done nothing to alleviate these concerns. Despite the increased IT security investments companies made in 2020 to deal with distributed IT and work-from-home challenges, nearly 80% of senior IT workers and IT security leaders believe their organizations lack sufficient defenses against cyberattacks, according to IDG.

Unfortunately, the cybersecurity landscape is poised to become more treacherous with the emergence of AI-powered cyberattacks, which could enable cybercriminals to fly under the radar of conventional, rules-based detection tools. For example, with AI thrown into the mix, fake emails could become nearly indistinguishable from messages sent by trusted contacts. And deepfakes, media in which AI is used to replace a person in an existing image, audio recording, or video with someone else’s likeness, could be employed to commit fraud, costing companies millions of dollars.

The solution could lie in “defensive AI,” or self-learning algorithms that understand normal user, device, and system patterns in an organization and detect unusual activity without relying on historical data. But the road to widespread adoption could be long and winding as cybercriminals look to stay one step ahead of their targets.

What are AI-powered cyberattacks?

AI-powered cyberattacks are conventional cyberattacks augmented with AI and machine learning technologies. Take phishing, for example — a type of social engineering where an attacker sends a message designed to trick a human into revealing sensitive information or installing malware. Infused with AI, phishing messages can be personalized to target high-profile employees at enterprises (like members of the C-suite) in a practice known as “spear phishing.”


Imagine an adversarial group attempting to impersonate board members or send fake invoices claiming to come from familiar suppliers. By sourcing a machine learning language model capable of generating convincing-sounding text, the group could fine-tune a system to generate replies that adopt the tone and tenor of the impersonated sender and even make references to previous correspondence. That might sound far-fetched, but there’s already growing concern among academics that tools like GPT-3 could be co-opted to foment discord by spreading misinformation, disinformation, and outright lies.

Phishing emails need not be highly targeted to present a threat to organizations. Even lazily crafted spear-phishing messages can see up to 40 times the click-through rate of boilerplate content, making AI tools that expedite their creation hugely valuable to hackers. Beyond natural language generation, AI can be used to identify high-value targets within organizations from their company profiles and email signatures, or even based on their activity across social media sites including Facebook, Twitter, and LinkedIn.

In an interview with cyberdefense company Darktrace, Ed Green, principal digital architect at McLaren Racing, noted that before the pandemic, the technology team at McLaren would encounter crude, brute-force password attacks that Green likened to a “machine-gunning” of credentials. But in the past year, the attacks have become tailored, focusing on individuals, roles, or teams at overwhelming scale. “Everyone [is] moving very, very quickly,” because “you’ve got a limited amount of time to read and respond to data and then make adjustments,” Green said.

Phishing and spam are only the tip of the iceberg when it comes to AI-powered cyberattacks. For example, malware could be augmented with AI to move through an organization more easily, probing internal systems without giving itself away and analyzing network traffic so it can blend in its own communications. AI-powered malware could also learn to target particular endpoints rather than carrying a complete list of targets, and implement a self-destruct or self-pause mechanism to avoid detection by antimalware or sandboxing solutions.

Beyond this, AI-powered cyberattack software could learn from probes in a large botnet to arrive at the most effective forms of attack. And prior to an attack, probes could be used for reconnaissance, helping attackers decide whether a company is worth targeting or monitor the traffic to an infected node (e.g., a desktop PC, server, or internet of things device) to select valuable targets.

According to a recently published Darktrace whitepaper, context is one of the most valuable tools that AI brings to a cyber attacker’s arsenal. Weaponized AI might be able to adapt to the environment it infects by learning from contextual information, targeting the weak points it discovers or mimicking trusted elements of a system to maximize the damage it causes.

“Instead of guessing during which times normal business operations are conducted, [malware] will learn it,” Darktrace director of threat hunting Max Heinemeyer writes. “Rather than guessing if an environment is using mostly Windows machines or Linux machines, or if Twitter or Instagram would be a better channel for steganographic [communication], it will be able to gain an understanding of what communication is dominant in the target’s network and blend in with it.”

This might give rise to what Darktrace calls “low-and-slow” data exfiltration attacks, where malware learns to evade detection by taking actions too subtle for humans and traditional security tools to detect. With an understanding of the context of its target’s environment, the malware could send a payload that changes in size dynamically, based on, for example, the total bandwidth used by the infected machine.

Solutions

Businesses are increasingly placing their faith in defensive AI to combat growing cyberthreats. Using a capability known as autonomous response, defensive AI can interrupt in-progress attacks without affecting day-to-day business. Given a strain of ransomware an enterprise hasn’t encountered before, defensive AI can identify novel and abnormal patterns of behavior and stop the ransomware, even if it isn’t associated with publicly known indicators of compromise like blacklisted command-and-control domains or malware file hashes.
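To make that idea concrete, here is a minimal, hypothetical sketch (not any vendor’s product) of the underlying principle: learn a per-host baseline of routine activity and flag a host whose file-modification rate suddenly departs from its own history, a behavioral signal that requires no blacklisted domains or known file hashes. The metric, warm-up window, and z-score threshold are all illustrative assumptions.

```python
from statistics import mean, stdev

class HostBaseline:
    """Learns a per-host baseline of file-modification activity and flags spikes."""

    def __init__(self, warmup=30, threshold=4.0):
        self.history = []           # past file-modification counts (one per minute)
        self.warmup = warmup        # minutes of data needed before judging anything
        self.threshold = threshold  # z-score treated as "abnormal"

    def observe(self, events_per_minute):
        """Return True if this minute looks abnormal for this host."""
        if len(self.history) >= self.warmup:
            mu = mean(self.history)
            sigma = stdev(self.history) or 1.0
            if (events_per_minute - mu) / sigma > self.threshold:
                return True         # e.g., isolate the host and alert the SOC
        self.history.append(events_per_minute)
        return False

baseline = HostBaseline()
for count in [12, 15, 9, 14, 11] * 6 + [950]:  # a sudden burst of file writes
    if baseline.observe(count):
        print("autonomous response: isolate host, block further writes")
```

A production system would weigh many more signals and act according to business-specific policies, but the principle is the same: judge activity against what is normal for that particular host, not against a list of known threats.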

AI can also improve threat hunting by integrating behavior analysis, developing profiles of apps and devices inside an organization’s network by analyzing data from endpoints. And it can provide insights into what configuration tweaks might improve infrastructure and software security, learning the patterns of network traffic and recommending policies.
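A hedged sketch of what such behavior profiling could look like, assuming scikit-learn’s IsolationForest and an entirely hypothetical set of per-device features (this does not reflect any specific vendor’s implementation):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-device features for one hour of activity:
# [MB sent outbound, distinct destinations contacted, failed logins, off-hours flag]
normal_profiles = np.column_stack([
    rng.normal(50, 10, 500),   # typical outbound volume
    rng.poisson(20, 500),      # typical connection fan-out
    rng.poisson(1, 500),       # the occasional failed login
    rng.integers(0, 2, 500),   # some legitimate off-hours use
])

# Learn what "normal" device behavior looks like on this network.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_profiles)

# A device suddenly pushing far more data to far more destinations at 3 a.m.
suspect = np.array([[400, 120, 15, 1]])
print("anomalous" if model.predict(suspect)[0] == -1 else "normal")
```

Because the model only learns what normal behavior looks like, it can flag activity that matches no previously seen attack signature.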

For example, Vectra, a cybersecurity vendor, taps AI to alert IT teams to anomalous behavior from compromised devices, drawing on network traffic metadata and other sources, and automates cyberattack mitigation. Vectra employs supervised machine learning techniques to train its threat detection models, along with unsupervised techniques to identify attacks that haven’t been seen previously. The company’s data scientists build and tune self-learning AI systems that complement the metadata with key security information.

Another vendor, SafeGuard Cyber, leverages an AI-powered engine called Threat Cortex that detects and spotlights risks across different attack surfaces. Threat Cortex searches the dark web and deep web to surface attackers and risk events, automatically notifying stakeholders when an anomaly crops up. Using SafeGuard Cyber, admins can quarantine unauthorized data before it leaves the organization or a specific account, and they can lock down compromised accounts and revert them to an earlier, uncompromised state.

According to a recent Darktrace report, 44% of executives are assessing AI-enabled security systems and 38% are deploying autonomous response technology. This tracks with findings from Statista, which reported in a 2019 analysis that around 80% of executives in the telecommunications industry believe their organization wouldn’t be able to respond to cyberattacks without AI.

“Machine learning has many implications for cybersecurity. Unfortunately, this includes seasoned cyber attackers, who we presume will start to use this technology to protect their malicious infrastructure, improve malware they create, and to find and target vulnerabilities in company systems,” Slovakia-based cybersecurity company ESET wrote in a 2018 whitepaper. “The hype around the topics and growing number of news stories revolving around massive data leaks and cyberattacks fuels fears in company IT departments of what is yet to come.”
