OpenAI has shut down its AGI Readiness Team, a group responsible for developing safeguards around advanced artificial intelligence systems.
The team focused on the safety of artificial general intelligence (AGI), which OpenAI defines in economic terms as AI systems capable of operating autonomously and automating a wide range of human tasks. The team members will be reassigned to other departments within the company.
Miles Brundage, OpenAI's outgoing Senior Advisor for AGI Readiness, voiced serious concerns about the decision when announcing his departure from the company. "In short, neither OpenAI nor any other frontier lab is ready, and the world is also not ready," Brundage wrote in a detailed public statement.
Departing AGI readiness advisor warns of regulatory gaps
Brundage points to significant gaps in AI oversight, noting that tech companies have strong financial motivations to resist effective regulation. He emphasizes that developing safe AI systems requires deliberate action from governments, companies, and civil society rather than occurring automatically.
Following his departure, Brundage plans to establish or join a non-profit organization, saying he can have more impact working outside the industry: "I think AI is unlikely to be as safe and beneficial as possible without a concerted effort to make it so." His team also developed OpenAI's five-stage framework for tracking AI progress.
This latest shutdown follows OpenAI's decision in May to disband its Superalignment team, which studied long-term AI safety risks. At the time, team lead Jan Leike publicly criticized the company, saying that "safety culture and processes have taken a backseat to shiny products."
The company has also seen further leadership changes, with three executives departing last month, including CTO Mira Murati and Head of Research Bob McGrew. Despite these internal changes and ongoing financial losses, a recent funding round valued OpenAI at $157 billion.
The shutdown of OpenAI's AGI and ASI safety teams points to two possible conclusions, according to critics of CEO Sam Altman: either the company no longer sees advanced AI as an imminent prospect, or it is taking an irresponsible approach to AI safety.
Altman continues to argue that the development of AGI is both feasible and imminent. Dario Amodei, CEO of Anthropic, one of OpenAI's main competitors, suggested that while AGI could emerge as early as 2026, the timeline remains highly uncertain.