About half of OpenAI's AGI/ASI safety researchers have left the company recently, according to a former employee. The departures likely stem from disagreements over managing the risks of potential superintelligent AI.
Daniel Kokotajlo, a former OpenAI safety researcher, told Fortune magazine that around half of the company's safety researchers have departed, including prominent leaders.
While Kokotajlo didn't comment on specific reasons for all the resignations, he believes they align with his own views: OpenAI is "fairly close" to developing artificial general intelligence (AGI) but isn't prepared to "handle all that entails."
This has led to a "chilling effect" on those trying to publish research on AGI risks within the company, Kokotajlo said. He also noted an "increasing amount of influence by the communications and lobbying wings of OpenAI" on what's deemed appropriate to publish.
Sam Altman's temporary dismissal as OpenAI CEO was also linked to safety concerns; a law firm later cleared him after his reinstatement.
Of about 30 employees working on AGI safety issues, around 16 remain. Kokotajlo said these departures weren't a "coordinated thing" but rather people "individually giving up."
Notable departures include Jan Hendrik Kirchner, Collin Burns, Jeffrey Wu, Jonathan Uesato, Steven Bills, Yuri Burda, Todor Markov, and OpenAI co-founder John Schulman.
The resignations of chief scientist Ilya Sutskever and Jan Leike, who jointly led the company's "superalignment" team focused on future AI system safety, were particularly significant. OpenAI subsequently disbanded this team.
Experts leave OpenAI, but not AGI
Kokotajlo expressed disappointment, but not surprise, that OpenAI opposed California's SB 1047 bill, which aims to regulate the risks of advanced AI systems. He co-signed a letter to Governor Newsom criticizing OpenAI's stance, calling it a betrayal of the company's original commitment to thoroughly assess the long-term risks of AGI so that regulations and laws could be built on that work.
"We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the
company is developing. But we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly develop its AI systems," the letter reads.
Leike and Schulman moved to safety research roles at OpenAI competitor Anthropic, which supports SB 1047 with some reservations. Before leaving OpenAI, Schulman said he believed AGI could be possible in two to three years. Sutskever went further and founded his own startup dedicated to developing safe superintelligent AI.
Notably, while these researchers are leaving OpenAI, they remain committed to AI work. This suggests they still see potential in the technology, but no longer view OpenAI as the right employer.