OpenAI has shut down its AGI Readiness Team, a group responsible for developing safeguards around advanced artificial intelligence systems.

The team focused on the safety of artificial general intelligence (AGI), which OpenAI defines in economic terms as AI systems capable of operating autonomously and automating a wide range of human tasks. The team members will be reassigned to other departments within the company.

Miles Brundage, OpenAI's outgoing Senior Advisor for AGI Readiness, expresses serious concerns about the move as he announces his departure from the company. "In short, neither OpenAI nor any other frontier lab is ready, and the world is also not ready," Brundage writes in a detailed public statement.

Departing AGI readiness advisor warns of lack of regulation

Brundage points to significant gaps in AI oversight, noting that tech companies have strong financial motivations to resist effective regulation. He emphasizes that developing safe AI systems requires deliberate action from governments, companies, and civil society rather than occurring automatically.

Following his departure, Brundage plans to either establish or join a non-profit organization, saying he can have more impact working outside the industry: "I think AI is unlikely to be as safe and beneficial as possible without a concerted effort to make it so." Before the shutdown, his team developed OpenAI's five stages of AI progress.

This latest shutdown follows OpenAI's decision in May to disband its Superalignment team, which studied long-term AI safety risks. At the time, team lead Jan Leike publicly criticized the company, stating that "safety culture and processes have taken a back seat to shiny products."

The shutdown comes amid broader leadership changes, with three executives departing last month, including CTO Mira Murati and Head of Research Bob McGrew. Despite these internal changes and ongoing financial losses, investors recently valued OpenAI at $157 billion.

According to critics of CEO Sam Altman, the shutdown of OpenAI's AGI and ASI safety teams points to one of two conclusions: the company either no longer sees advanced AI as an immediate possibility, or it is taking an irresponsible approach to AI safety.

Altman continues to argue that the development of AGI is both feasible and imminent. Dario Amodei, CEO of Anthropic, one of OpenAI's main competitors, suggested that while AGI could emerge as early as 2026, the timeline remains highly uncertain.

Summary
  • OpenAI has disbanded its AGI Readiness Team, which was responsible for developing safeguards around advanced artificial intelligence systems, and reassigned team members to other departments within the company.
  • Miles Brundage, OpenAI's outgoing Senior Advisor for AGI Readiness, expresses serious concerns about the lack of preparedness for AGI development, both within OpenAI and in the world at large.
  • The closure of OpenAI's AGI and ASI safety teams could mean one of two things: OpenAI either no longer sees advanced AI as an immediate possibility, or it is taking an irresponsible approach to AGI safety.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.