
OpenAI has fired two AI safety researchers for allegedly leaking confidential information after an internal power struggle, The Information reports.

One of those fired was Leopold Aschenbrenner, a former member of the "Superalignment" team, which focuses on the governance of advanced artificial intelligence and is tasked with developing techniques to control and manage a potential super-AI.

The other fired employee, Pavel Izmailov, worked on AI reasoning and was also part of the safety team for a time, according to The Information.

The report also states that Aschenbrenner is an ally of chief scientist Ilya Sutskever, who was involved in a failed attempt to oust CEO Sam Altman in the fall.


Sutskever has not been seen in public since. OpenAI and Altman have not explicitly commented on the matter. In mid-March, Altman said Sutskever is fine but would not comment on his role at the company.

Is this about the Q* leak?

It is unknown what information the two fired employees allegedly leaked. However, there has only been one major leak at OpenAI in the recent past: Q*, OpenAI's large language model logic unit, which could be used for the first time in GPT-5.

The profile of the two researchers fits this leak: Q* reinforces the reasoning capabilities of LLMs, which presumably raises new safety concerns.

According to the report, Aschenbrenner has ties to the "Effective Altruism" movement, which prioritizes fighting the dangers of AI over short-term profits or productivity benefits.

It is not known whether the movement played a role in his dismissal or was a possible motive for a leak. At the time of Altman's brief ouster, there was reportedly a disagreement over whether the startup was developing AI safely enough.

Summary
  • OpenAI has fired two AI safety researchers for allegedly leaking confidential information.
  • One of the fired researchers, Leopold Aschenbrenner, was a member of the Superalignment team, which works on the governance and control of advanced AI. The other fired employee, Pavel Izmailov, worked on AI reasoning and was also part of the safety team at one point.
  • It is unclear what information the two employees allegedly leaked. Recently, however, there was a major leak of Q*, OpenAI's large language model logic unit, which could be used for the first time in GPT-5. The profile of the two researchers matches that leak.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.