OpenAI has announced the start of training for a new flagship model and introduced new safety measures.

The OpenAI Board of Directors has formed a new Safety and Security Committee, led by directors Bret Taylor, Adam D'Angelo, Nicole Seligman, and CEO Sam Altman. Over the next 90 days, the committee will develop recommendations on critical safety and security decisions for all OpenAI projects and operations, the company announced today.

OpenAI also revealed that it has "recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI."

Once the discussions are complete, OpenAI plans to make the committee's recommendations public to the extent permitted by security requirements. In addition to OpenAI's own technical and policy experts, the committee will also consult outside security experts, such as Rob Joyce, a former US government cybersecurity official.

The announcement of a new safety and security body is likely a response to criticism over the past two weeks, during which several AGI safety researchers left OpenAI, accusing the company of careless safety practices. OpenAI disbanded its Superalignment team, which worked on aligning future superintelligent AI, and folded its responsibilities into existing safety teams.

GPT-5 and beyond

Rumor has it that OpenAI plans to release an intermediate model such as a GPT-4.5 this summer, while GPT-5 is more likely planned for the end of the year.

At Microsoft's Build developer conference, Microsoft CTO Kevin Scott showed a slide featuring an OpenAI model slated for release in 2024. Although the axes are not labeled, the chart suggests a big jump in capability compared to GPT-4.

Kevin Scott, CTO of Microsoft, on stage at Build 2024 in front of a chart showing the exponential growth of AI.
I wonder if Microsoft chose this slide carefully? | Image: Microsoft (Screenshot)

An OpenAI researcher showed a similar graph at the Viva Technology tech festival in Paris. Here, the model to be released in 2024 was called "GPT-Next" and also ranked well above GPT-4 in terms of capabilities.

"Recently" is a rather flexible term, so the new flagship could even be the successor to GPT-5. The training phase is likely to take several months, followed by functional and safety testing, which could also take months. A "recent" start of training supports the idea that it is the successor to GPT-5.

Correction: An earlier version of this article stated that OpenAI had disbanded its "Super AI" team, omitting the word "alignment". The team in question is OpenAI's Superalignment team. I apologize for the confusion.

Summary
  • OpenAI has established a Safety and Security Committee under the leadership of directors Bret Taylor, Adam D'Angelo, Nicole Seligman, and CEO Sam Altman. The committee is charged with making recommendations to the Board of Directors on critical safety and security decisions for OpenAI's projects and operations.
  • OpenAI is also currently training its next frontier model, which is intended to take its capabilities to the "next level on the path to Artificial General Intelligence (AGI)."
  • The new committee will evaluate and refine OpenAI's processes and safeguards over the next 90 days. It will then make recommendations to the full board. Following the Board's review, OpenAI will publish an update on the adopted recommendations to the extent consistent with security.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.