OpenAI has fired two AI safety researchers for allegedly leaking confidential information after an internal power struggle, The Information reports.
One of those fired was Leopold Aschenbrenner, a member of the "Superalignment" team, which focuses on the governance of advanced artificial intelligence and is tasked with developing techniques to steer and control a potentially superintelligent AI.
The other fired employee, Pavel Izmailov, worked on AI reasoning and was also part of the safety team for a time, according to The Information.
The report also states that Aschenbrenner is an ally of chief scientist Ilya Sutskever, who was involved in a failed attempt to oust CEO Sam Altman in the fall.
Sutskever has not been seen in public since. Neither OpenAI nor Altman has explicitly commented on the matter. In mid-March, Altman said that Sutskever was fine, but declined to comment on his role at the company.
Is this about the Q* leak?
It is unknown what information the two fired employees allegedly leaked. However, there has been only one major leak at OpenAI in the recent past: Q*, a rumored OpenAI project said to significantly improve the logical reasoning capabilities of large language models.
The profiles of the two researchers would fit such a leak, since Q* reportedly strengthens the reasoning capabilities of LLMs, which in turn raises potential safety concerns.
According to the report, Aschenbrenner has ties to the "Effective Altruism" movement, which prioritizes mitigating the dangers of AI over short-term profits or productivity gains.
It is not known whether this played a role in his dismissal or whether it could have been a motive for a leak. At the time of Altman's brief ouster, there was reportedly internal disagreement over whether the startup was developing AI safely enough.