US creates advisory group to consider AI regulation

by Grant Gross

news | Feb 08, 2024

Artificial Intelligence | Generative AI | Government

More than 200 companies and organizations will participate in the AI Safety Institute Consortium to create guidelines for ensuring the safety of AI systems.

The US government has created an artificial intelligence safety advisory group, including AI creators, users, and academics, with the goal of putting some guardrails on AI use and development.

The new US AI Safety Institute Consortium (AISIC), part of the National Institute of Standards and Technology (NIST), is tasked with developing guidelines for red-teaming AI systems, evaluating AI capabilities, managing risk, ensuring safety and security, and watermarking AI-generated content.
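
To make one of those tasks concrete: a common watermarking approach in recent research (not a method AISIC has endorsed or specified) has the generating model over-sample tokens from a pseudorandom “green list,” so that a detector can later test whether a text contains more green tokens than chance would allow. The Python sketch below is a minimal, hypothetical detector in that style; every parameter in it is an illustrative assumption.

    import hashlib
    import math

    # Minimal, hypothetical green-list watermark detector, in the style of
    # recent research (e.g., Kirchenbauer et al., 2023). The hash-based
    # green list, the 0.5 split, and the z-score threshold are illustrative
    # assumptions, not anything AISIC or NIST has specified.

    GAMMA = 0.5  # fraction of tokens expected to be "green" by chance


    def is_green(prev_token: str, token: str) -> bool:
        """Pseudorandomly assign roughly GAMMA of tokens to the green list,
        seeded by the previous token so the split is context-dependent."""
        digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
        return digest[0] < GAMMA * 256


    def watermark_z_score(tokens: list[str]) -> float:
        """z-score against the null hypothesis that the text is unwatermarked.
        A watermarking generator over-samples green tokens, pushing the
        observed green count above the GAMMA * n expected by chance."""
        n = len(tokens) - 1
        if n < 1:
            return 0.0
        greens = sum(is_green(tokens[i], tokens[i + 1]) for i in range(n))
        expected = GAMMA * n
        std = math.sqrt(n * GAMMA * (1 - GAMMA))
        return (greens - expected) / std


    if __name__ == "__main__":
        sample = "the quick brown fox jumps over the lazy dog".split()
        # A z-score above roughly 4 would be strong evidence of a watermark;
        # ordinary text like this sample should land near zero.
        print(f"z-score: {watermark_z_score(sample):.2f}")

In a real deployment, the green list would be derived from the model’s tokenizer and a secret key, and the detection threshold would be tuned to bound false positives on human-written text.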

On Thursday, the US Department of Commerce, NIST’s parent agency, announced both the creation of AISIC and a list of more than 200 participating companies and organizations. Members include Amazon.com, Carnegie Mellon University, Duke University, the Free Software Foundation, and Visa, as well as major developers of AI tools such as Apple, Google, Microsoft, and OpenAI.

The consortium “will ensure America is at the front of the pack” in setting AI safety standards while encouraging innovation, US Secretary of Commerce Gina Raimondo said in a statement. “Together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly.”

In addition to the announcement of the new consortium, the Biden administration this week named Elizabeth Kelly, a former economic policy adviser to the president, as director of the newly formed US Artificial Intelligence Safety Institute (USAISI), an organization within NIST that will house AISIC.

It’s unclear whether the consortium’s work will lead to new regulations or laws. President Joe Biden issued an executive order on AI safety on Oct. 30, 2023, but the timeline for the consortium’s work remains open. Furthermore, if Biden loses the presidential election later this year, momentum for AI regulations could stall.

However, Biden’s recent executive order suggests some regulation is needed. “Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks,” the executive order says. “This endeavor demands a society-wide effort that includes government, the private sector, academia, and civil society.”

Among Biden’s goals:

  • Require developers of AI systems to share their safety test results with the US government.
  • Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy.
  • Protect US residents against AI-enabled fraud and deception.
  • Establish a cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software.

The AI Safety Institute Consortium seeks contributions from its members in several of those areas, notably around the development of testing tools and industry standards for safe AI deployment.

Meanwhile, lawmakers have introduced dozens of AI-related bills in the US Congress during the 2023-24 session. The Artificial Intelligence Bug Bounty Act would require the Department of Defense to create a bug bounty program for the AI tools it uses. The Block Nuclear Launch by Autonomous Artificial Intelligence Act would prohibit the use of federal funds for autonomous weapons systems that can launch nuclear weapons without meaningful human intervention. Passing bills in Congress during an election year is difficult, however.

Prominent figures, including Elon Musk and the late Stephen Hawking, have raised a range of concerns about AI, among them the far-off threat that AI will eventually take control of the world. Nearer-term concerns include the use of AI to create bioweapons, mount new kinds of cyberattacks, or drive disinformation campaigns.

But others, including venture capitalist Marc Andreessen, have suggested that many concerns about AI are overblown.

Andreessen, in a lengthy June 6, 2023, blog post, argued that AI will save the world. He called for no regulatory barriers “whatsoever” on open-source AI development, citing the benefits to students learning to build AI systems.

However, he wrote, opportunists who stand to profit from regulation have created a “moral panic” about the dangers of AI as a way to force new restrictions, regulations, and laws. Leaders of existing AI companies, he argued, “stand to make more money if regulatory barriers are erected that form a cartel of government-blessed AI vendors protected from new startup and open source competition.”