
OpenAI has disbanded its "Superalignment" team, which focused on ensuring the safety of future high-performance AI systems, Bloomberg reports. The team's work will be integrated into general safety research.

Following accusations and criticism from former safety researcher Jan Leike, OpenAI CEO Sam Altman and co-founder and President Greg Brockman published a lengthy but ultimately vague statement on X.

The message: The path to AGI is uncertain, and OpenAI will continue to advocate for AI safety. After his departure, Leike sharply criticized OpenAI's leadership for not taking the risks of very advanced AI systems seriously.

Did OpenAI overestimate the risk of AGI?

One theory about what happened at OpenAI is that the risk of AGI was simply overestimated, and as a result, safety research into super-capable AI was given more weight than it deserved.


In this reading, management recognized the overestimate and cut resources accordingly, while the safety team felt misunderstood and left. The elimination of the Superalignment team is consistent with this theory.

"When you put sufficiently many people in a room together with such a distorted view of reality that they perceive an impending Great Evil, they often fall victim to a spiral of purity that makes them hold more and more extreme beliefs. Pretty soon, they become toxic to the organization that hosts and funds them. They become marginalized and eventually leave," writes Meta's chief AI researcher Yann LeCun, who is dismissive of near-term AGI.

But recent statements by John Schulman, co-creator of ChatGPT and co-founder of OpenAI, who will take over Leike's role, contradict this assumption. Schulman believes that AGI could be possible within the next two to three years. He even suggests cross-organizational coordination, including a possible pause, so that such a system is not rolled out to many people without clear safety rules.

"If AGI came way sooner than expected, we would definitely want to be careful about it. We might want to slow down a little bit on training and deployment until we're pretty sure we know we can deal with it safely," Schulman says.

Another anecdote that does not inspire confidence in OpenAI as a company deciding the fate of humanity: according to a post by CEO Sam Altman on X, he knew nothing about the gag clauses that departing OpenAI employees had to sign.


One clause threatened employees with the potential loss of millions of dollars in OpenAI stock if they spoke critically about the company after leaving. The clause, uncovered by a journalist at Vox, doesn't exactly fit the image of the friendly startup that wants to free the world from its evil twin, Google, and lead it into a brighter future of prosperity and happiness for all.

"Although we never clawed anything back, it should never have been something we had in any documents or communication. This is on me and one of the few times I've been genuinely embarrassed running OpenAI; I did not know this was happening, and I should have," Altman writes.

Summary
  • OpenAI has disbanded its Superalignment team, which focused on the safety of future AGI systems. The team's work will now be integrated into general safety research.
  • There are conflicting views on the reasons for this move: It is possible that the likelihood of near-term AGI, and thus the need for AGI safety research, was overestimated.
  • On the other hand, OpenAI co-founder John Schulman said in a recent podcast that he believes AGI is possible in two to three years. Schulman is taking over the role of AGI safety researcher Jan Leike, who has left the company.