OpenAI Bolsters Team for Artificial Intelligence Safety Oversight

The decision follows the tumultuous temporary sacking of CEO Sam Altman.

December 19, 2023

  • OpenAI has announced a new safety plan that allows the company’s board of directors to overrule the CEO on the release of risky AI models.
  • Multiple safety system teams will oversee risks and potential abuse of AI models, including ChatGPT, and report to the board.

OpenAI announced modifications to its governance structure that give the board of directors the right to veto the release of an AI model deemed risky, even if company leadership, including the CEO, has approved it. The change follows the temporary sacking of CEO Sam Altman, which put the company through a turbulent period.

One of the teams, known as the ‘preparedness team,’ will be responsible for continuously assessing all of the company’s AI systems, focusing on identifying the potential risks associated with their use. Each AI model will be rated ‘low,’ ‘medium,’ ‘high,’ or ‘critical’ based on its risk level, and the team will also work on measures to reduce those risk levels.

The team, led by Aleksander Madry, will provide monthly reports to an internal safety advisory group, which will in turn advise the board and the CEO. However, the board of directors will have the final say on any decision related to AI systems, regardless of what the company’s executives decide.

Other teams include the ‘safety systems’ team, which works on current products and ensures they comply with safety standards, and the ‘superalignment’ team, which focuses on hypothetical future AI models with superhuman capabilities. Going forward, the company will release only AI models with a risk rating of ‘medium’ or below.
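
To make the policy concrete, here is a minimal sketch, in Python, of the kind of release gate the article describes: four risk tiers, a release threshold of ‘medium’ or below, and a board veto that overrides executive approval. All names here (RiskLevel, RELEASE_THRESHOLD, may_release) are hypothetical illustrations, not OpenAI’s actual implementation.

    from enum import IntEnum

    class RiskLevel(IntEnum):
        # Hypothetical tiers mirroring the article's 'low'/'medium'/'high'/'critical' ratings.
        LOW = 1
        MEDIUM = 2
        HIGH = 3
        CRITICAL = 4

    # Per the article, only models rated 'medium' or below may be released.
    RELEASE_THRESHOLD = RiskLevel.MEDIUM

    def may_release(risk: RiskLevel, executive_approved: bool, board_vetoed: bool) -> bool:
        # The board's veto is final: it blocks release even with executive approval,
        # and a rating above the threshold blocks release regardless of who approved it.
        if board_vetoed:
            return False
        return executive_approved and risk <= RELEASE_THRESHOLD

    # A 'high'-risk model approved by the CEO still cannot ship.
    assert not may_release(RiskLevel.HIGH, executive_approved=True, board_vetoed=False)
    # A 'medium'-risk model with executive sign-off and no veto can.
    assert may_release(RiskLevel.MEDIUM, executive_approved=True, board_vetoed=False)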

The change in AI oversight could serve as a blueprint for other companies in the AI space, especially amid mounting concerns about the risks AI poses, including cyber-attacks and the spread of information that could aid the creation of chemical or biological weapons.

Do you think oversight teams in AI development are doing enough? Let us know your thoughts on LinkedIn, X, or Facebook. We’d love to hear from you!

Anuj Mudaliar
Anuj Mudaliar is a content development professional with a keen interest in emerging technologies, particularly advances in AI. As a tech editor for Spiceworks, Anuj covers many topics, including cloud, cybersecurity, emerging tech innovation, AI, and hardware. When not at work, he spends his time outdoors: trekking, camping, and stargazing. He is also interested in cooking and experiencing cuisine from around the world.