The week beginning October 30, 2023, was a busy one for AI policymakers: On Monday, the US released President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, and the G7 announced its agreement on Guiding Principles and a Code of Conduct on artificial intelligence. And on November 1 and 2, around 150 representatives from governments, industry, and academia from around the globe gathered at the UK AI Safety Summit, convened by UK Prime Minister Rishi Sunak. In this blog post, we analyze the G7 announcements; find our analysis of the UK AI Safety Summit here.

The G7 Releases Add Incrementally To The OECD Principles

In May 2023, the G7 kicked off the “Hiroshima AI process” with the intent to create a comprehensive AI policy framework. The guiding principles (Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems) and the code of conduct (Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems) are the latest outcomes and represent two of the framework’s four pillars. An analysis of generative AI risks and opportunities and upcoming project-based cooperation to develop responsible AI tools and best practices complete the framework.

Building on the existing OECD AI Principles, the ambition of the G7’s AI policy framework is “to provide guidance for organizations developing and using the most advanced AI systems, including the most advanced foundation models and generative AI systems.” The guiding principles and the code of conduct elaborate on critical requests made of “all AI actors” to:

  • Double down on transparency, security, and accountability mechanisms in the context of a risk-based approach during the design, development, deployment, and use of advanced AI systems. The documents stress the need to increase testing to identify, evaluate, and mitigate risks across the AI lifecycle.
  • Document and share information about vulnerabilities, patterns of misuse, and incidents involving advanced AI systems.
  • Develop and deploy authentication and provenance mechanisms that help identify content and decisions produced by advanced AI systems, as well as the AI system itself.

And in alignment with the Biden Executive Order on AI, the G7 also calls out areas of particularly high risk, such as chemical, biological, radiological, and nuclear (CBRN) risks; threats to health and safety; and threats to democratic values and human rights, which organizations must prioritize in their risk assessment and mitigation strategies.

This is a work in progress, and it’s too early to say whether it has moved the AI policy needle sufficiently. Its success will largely depend on three factors: 1) the details of how organizations will achieve these goals; 2) the timeline for completing and implementing the policy; and 3) its influence on the AI approaches that the OECD and the UN are developing. The next global milestones will be the release of the UN AI Advisory Body’s interim report on AI governance, planned for the end of 2023, and the EU’s AI Act, which we expect to pass in the next couple of months.