By Cam Sivesind
Mon | Nov 27, 2023 | 12:35 PM PST

In a significant step toward safeguarding the digital landscape, the United States Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom's National Cyber Security Centre (NCSC) have jointly released the Guidelines for Secure AI System Development. These comprehensive guidelines aim to help organizations worldwide design, develop, deploy, and operate AI systems with cybersecurity at their core.

The guidelines, crafted in collaboration with 21 other agencies and ministries across the globe, mark a pivotal moment in addressing the growing cybersecurity concerns surrounding artificial intelligence systems. As AI permeates more of personal life and business, from healthcare to finance to transportation, the need for robust cybersecurity measures grows increasingly urgent.

Addressing AI-related cybersecurity risks

The guidelines recognize that AI systems, while offering immense potential, also introduce unique cybersecurity risks. These risks stem from the inherent complexity of AI algorithms, the vast amount of data they process, and the potential for misuse or manipulation.

To address these risks, the guidelines provide a comprehensive framework for secure AI system development. The framework emphasizes the importance of adopting a "secure by design" approach, integrating cybersecurity considerations from the very inception of an AI system's development.

"Security is a pre-requisite for safe and trustworthy AI, and today's guidelines from agencies including the NCSC and CISA provide a welcome blueprint for it," said Toby Lewis, Global Head of Threat Analysis at Darktrace. "I'm glad to see the guidelines emphasize the need for AI providers to secure their data and models from attackers, and for AI users to apply the right AI for the right task. Those building AI should go further and build trust by taking users on the journey of how their AI reaches its answers. With security and trust, we'll realize the benefits of AI faster and for more people."

Key guidelines for secure AI system development

The guidelines encompass a range of critical aspects, including:

  • Threat Modeling and Risk Assessment: Identifying and assessing potential cybersecurity threats and vulnerabilities throughout the AI system's lifecycle

  • Data Security and Privacy: Ensuring the confidentiality, integrity, and availability of data used in AI systems, adhering to data privacy regulations

  • Access Control and Authentication: Implementing robust access controls and authentication mechanisms to protect sensitive data and system components

  • Software Security: Employing secure coding practices, vulnerability management, and software supply chain security measures (a minimal supply chain check is sketched after this list)

  • Monitoring and Incident Response: Establishing continuous monitoring capabilities to detect and respond to cyberattacks effectively
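The guidelines themselves stay at the level of principles, but practices like these can be made concrete. As one hedged illustration of the software supply chain point above, the Python sketch below verifies a model artifact against a pinned SHA-256 digest before loading it; the file path, digest, and function names are hypothetical, not taken from the guidelines.

```python
import hashlib
from pathlib import Path

def sha256_digest(path: Path, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a file without loading it all into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_model_if_verified(path: Path, expected_digest: str) -> bytes:
    """Load a model artifact only if its digest matches the pinned value."""
    actual = sha256_digest(path)
    if actual != expected_digest:
        raise ValueError(f"integrity check failed for {path}: got {actual}")
    return path.read_bytes()

# Hypothetical artifact path and pinned digest -- placeholders for illustration.
MODEL_PATH = Path("models/classifier-v1.bin")
EXPECTED_DIGEST = "0" * 64  # replace with the digest published alongside the model
```

Pinning digests this way means a tampered or substituted artifact fails loudly at load time rather than silently entering production.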

The guidelines are broken down into four sections:

  • Secure design explains how to understand risks and threat modeling, as well as trade-offs to consider in system and model design.
  • Secure development features information on supply chain security, documentation, and asset and technical debt management.
  • Secure deployment covers protecting infrastructure and models from compromise, threat, or loss, as well as developing incident management processes and responsible release.
  • Secure operation and maintenance provides guidelines on actions to take once a system has been deployed, including logging and monitoring, update management, and information sharing (a minimal logging sketch follows this list).
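To make the logging and monitoring theme concrete, here is a minimal sketch, not drawn from the guidelines themselves, that wraps a model call so every inference request leaves a structured audit trail; the `predict` function and the record fields are assumptions made for the example.

```python
import hashlib
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("model-audit")

def predict(prompt: str) -> str:
    """Stand-in for a real model call -- an assumption made for this sketch."""
    return prompt.upper()

def audited_predict(prompt: str) -> str:
    """Emit a structured audit record for every inference request."""
    record = {
        "ts": time.time(),
        # Log a hash of the input, not the raw text, so sensitive data stays out of logs.
        "input_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "input_chars": len(prompt),
    }
    output = predict(prompt)
    record["output_chars"] = len(output)
    log.info(json.dumps(record))
    return output

if __name__ == "__main__":
    audited_predict("summarize the quarterly report")
```

Records like these give incident responders a timeline to reconstruct after a suspected compromise without exposing user inputs in the logs.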

A global call to action

The release of the Guidelines for Secure AI System Development marks a significant step toward building a more secure and resilient digital future. It is a call to action for organizations worldwide to embrace secure design principles, enhance their cybersecurity posture, and protect AI systems from evolving threats.

CISA and NCSC have demonstrated remarkable leadership in fostering global collaboration and providing practical guidance for secure AI system development. Their efforts underscore the importance of international cooperation in addressing cybersecurity challenges and ensuring the responsible and secure adoption of AI technologies.

According to a press release from CISA: "The Guidelines, complementing the U.S. Voluntary Commitments on Ensuring Safe, Secure, and Trustworthy AI, provide essential recommendations for AI system development and emphasize the importance of adhering to Secure by Design principles. The approach prioritizes ownership of security outcomes for customers, embraces radical transparency and accountability, and establishes organizational structures where secure design is a top priority."

CISA Director Jen Easterly had this to say:

"The release of the Guidelines for Secure AI System Development marks a key milestone in our collective commitment—by governments across the world—to ensure the development and deployment of artificial intelligence capabilities that are secure by design. As nations and organizations embrace the transformative power of AI, this international collaboration, led by CISA and NCSC, underscores the global dedication to fostering transparency, accountability, and secure practices. The domestic and international unity in advancing secure by design principles and cultivating a resilient foundation for the safe development of AI systems worldwide could not come at a more important time in our shared technology revolution. This joint effort reaffirms our mission to protect critical infrastructure and reinforces the importance of international partnership in securing our digital future."

NCSC CEO Lindy Cameron added:

"We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up. These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout. I'm proud that the NCSC is leading crucial efforts to raise the AI cybersecurity bar: a more secure global cyber space will help us all to safely and confidently realize this technology's wonderful opportunities."

Here are some additional thoughts from Chaitanya Belwal, Ph.D., Sr. Director, Technical Account Management at Tanium:

"While the document is intended for use at a high level and is not supposed to give specifics, one thing it should address is the interpretability of the models. Right now there are special notes on building ML models, and it also discusses some extra procedures to handle Adversarial Machine Learning (AML), including prompt injection attacks and handling data corruption. But if a model is not interpretable, the developers cannot address several of the challenges mentioned in the document," Belwal said.

"Specifically, Deep Neural networks are notorious for being ‘black box’-like, and the reasons for assigning particular weights to specific inputs can only be ascertained after tracing all the epochs and back-propagation steps in gradient descent, which can be very difficult. Guidance on interpretability of a model will help align the industry and force it to innovate new techniques and come up with an interpretability score for each model," he added. "I forsee that AI regulations will initially deal with data privacy and copyright issues. Once those are sorted out, regulators will want AI models to be trained with more balanced data sets, to remove biases against protected categories (race, gender etc). It may not be too far of a stretch to imagine rules specifying a minimum percent of each category to be present in the training data. Having NIST/CIS type compliance checks for AI models will also be developed and mandated by regulators.”
