
About half of OpenAI's AGI/ASI safety researchers have left the company recently, according to a former employee. The departures likely stem from disagreements over managing the risks of potential superintelligent AI.

Daniel Kokotajlo, a former OpenAI safety researcher, told Fortune magazine that around half of the company's safety researchers have departed, including prominent leaders.

While Kokotajlo didn't comment on specific reasons for all the resignations, he believes they align with his own views: OpenAI is "fairly close" to developing artificial general intelligence (AGI) but isn't prepared to "handle all that entails."

This has led to a "chilling effect" on those trying to publish research on AGI risks within the company, Kokotajlo said. He also noted an "increasing amount of influence by the communications and lobbying wings of OpenAI" on what's deemed appropriate to publish.

Sam Altman's temporary dismissal as OpenAI CEO was also linked to safety concerns. A law firm later cleared Altman after his reinstatement.

Of about 30 employees working on AGI safety issues, around 16 remain. Kokotajlo said these departures weren't a "coordinated thing" but rather people "individually giving up."

Notable departures include Jan Hendrik Kirchner, Collin Burns, Jeffrey Wu, Jonathan Uesato, Steven Bills, Yuri Burda, Todor Markov, and OpenAI co-founder John Schulman.

The resignations of chief scientist Ilya Sutskever and Jan Leike, who jointly led the company's "superalignment" team focused on future AI system safety, were particularly significant. OpenAI subsequently disbanded this team.

Experts leave OpenAI, but not AGI

Kokotajlo expressed disappointment, but not surprise, that OpenAI opposed California's SB 1047 bill, which aims to regulate the risks of advanced AI systems. He co-signed a letter to Governor Newsom criticizing the company's stance, calling it a betrayal of OpenAI's original plan to thoroughly assess the long-term risks of AGI so that regulations and laws could be developed from that work.

"We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the
company is developing. But we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly develop its AI systems," the letter reads.

Leike and Schulman moved to safety research roles at OpenAI competitor Anthropic, which supports SB 1047 with some reservations. Before leaving OpenAI, Schulman said he believed AGI could be possible in two to three years. Sutskever went a step further and founded his own startup to develop safe superintelligent AI.

Notably, while these researchers are leaving OpenAI, they remain committed to AI work. This suggests they still see potential in the technology, but no longer view OpenAI as the right employer.

Summary
  • OpenAI has seen a significant exodus of its AI safety researchers recently, including high-profile leaders. Chief scientist Ilya Sutskever and Jan Leike, who headed the "Superalignment" team, are among those who left.
  • Daniel Kokotajlo, a former OpenAI safety researcher, told Fortune that disagreements over managing the risks of potential superintelligent AI drove these departures. Kokotajlo believes OpenAI is "fairly close" to developing artificial general intelligence (AGI) but is not prepared to handle everything that entails.
  • Most departing researchers remain in the AI field, moving to competitors like Anthropic or launching their own startups. This suggests they still see promise in AI technology but no longer view OpenAI as the right place to pursue it.