Tue | Feb 20, 2024 | 5:13 AM PST

OpenAI and Microsoft recently collaborated to identify and disrupt several nation-state actors who were attempting to use AI services for malicious cyber activities.

According to Microsoft, the disrupted threat actors were affiliated with China, Iran, North Korea, and Russia. Their activities focused on using AI for reconnaissance, social engineering, scripting, and evading detection. The capabilities these actors gained from current AI systems were limited compared to what existing non-AI tools already provide, but OpenAI and Microsoft view this as an escalating threat that requires vigilance.

OpenAI said it disrupted five actors: two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor known as Crimson Sandstorm; the North Korea-affiliated actor known as Emerald Sleet; and the Russia-affiliated actor known as Forest Blizzard.

The company said the identified OpenAI accounts associated with these actors were terminated and that they "generally sought to use OpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks."

Specifically, it said:

  • "Charcoal Typhoon used our services to research various companies and cybersecurity tools, debug code and generate scripts, and create content likely for use in phishing campaigns."
  • "Salmon Typhoon used our services to translate technical papers, retrieve publicly available information on multiple intelligence agencies and regional threat actors, assist with coding, and research common ways processes could be hidden on a system."
  • "Crimson Sandstorm used our services for scripting support related to app and web development, generating content likely for spear-phishing campaigns, and researching common ways malware could evade detection."
  • "Emerald Sleet used our services to identify experts and organizations focused on defense issues in the Asia-Pacific region, understand publicly available vulnerabilities, help with basic scripting tasks, and draft content that could be used in phishing campaigns."
  • "Forest Blizzard used our services primarily for open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks."

By terminating accounts and limiting access, OpenAI and Microsoft have temporarily contained the threat. However, they acknowledge that powerful AI systems are now widely accessible, making it difficult to control their use.

"The emergence of nation-state actors leveraging generative AI in cyber operations is no surprise and underscores the urgent need for proactive measures to safeguard digital infrastructure and information assets," said Ted Miracco, CEO of Approov Mobile Security.

Mark Campbell, Senior Director at Cigent, noted that "Phishing, whether human or AI-generated, is still the leading cause of initial access." He emphasized that security teams need advanced defenses like AI-enabled endpoint solutions to detect and stop attacks, including those initiated through AI-generated phishing.

This development signals that the age-old battle between cyber defenders and attackers is escalating to a new level, as AI promises benefits but also introduces new risks.

For now, OpenAI and Microsoft appear to have stayed ahead of the attackers. But proactive measures and collaboration will be needed to minimize the chances of advanced AI systems being weaponized and causing widespread harm.
