OpenAI is setting up an independent oversight committee to monitor safety and security measures in AI development and deployment. The committee will have broad authority, including the power to delay model releases.
According to a company statement, the Safety and Security Committee will be an independent supervisory body reporting to the Board of Directors. It will oversee critical safety measures in the development and rollout of AI models.
Zico Kolter, Director of the Machine Learning Department at Carnegie Mellon University, will chair the committee. Other members include Adam D'Angelo, co-founder and CEO of Quora; retired US Army General Paul Nakasone; and Nicole Seligman, former Executive Vice President and General Counsel of Sony Corporation.
The committee's duties include overseeing the safety and security processes that guide OpenAI's model development and deployment. Senior management will brief the committee on safety evaluations for major model releases. Along with the full board, the committee has the power to delay a release until security concerns are addressed.
This restructuring follows recommendations from the committee itself, made after a 90-day review of OpenAI's security processes and safeguards.
OpenAI plans "Information Sharing and Analysis Center" to boost industry collaboration
OpenAI also announced it is exploring the creation of an "Information Sharing and Analysis Center" (ISAC) for the AI industry. This would enable companies in the AI sector to share threat information and cybersecurity data.
The company also plans to enhance its internal information controls and hire more staff to strengthen its 24/7 security teams. Additionally, OpenAI aims to be more transparent about its AI models' capabilities and risks.
Last month, OpenAI signed an agreement with the US AI Safety Institute to research, test, and evaluate the company's AI models. Before that, the company lost several of its leading AI safety experts and disbanded its superalignment team.