
Update from May 30, 2024:

In a podcast, Helen Toner confirms and expands on her accusations against Sam Altman. The former OpenAI board member accuses the CEO of lying to the board on several occasions, including about safety precautions in AI development and his own financial interests in the company.

According to Toner, the board only learned about ChatGPT after its release, via Twitter. In addition, two unnamed executives described Altman as unfit to lead OpenAI to AGI and spoke of "psychological abuse."

Altman was fired abruptly in November 2023 to prevent him from undermining the board's decision. According to Toner, Altman had already begun lying to other board members in an attempt to push her off the board.

Original article dated May 27, 2024:

Former OpenAI board members accuse CEO Sam Altman of cultivating a "toxic culture of lies"

Former OpenAI board members Helen Toner (2021-2023) and Tasha McCauley (2018-2023) played key roles in the firing of CEO Sam Altman last November. They are now talking about their motives.

Toner and McCauley say Altman weakened the board's oversight of key decisions and safety rules. Many executives told the board they were very concerned that Altman was creating "a toxic culture of lying" and engaged in "behaviour [that] can be characterised as psychological abuse."

Both believe that OpenAI cannot regulate itself, an idea built into its mix of nonprofit and for-profit structures.

Toner and McCauley were on the nonprofit's board when Altman was fired. The nonprofit is supposed to ensure that OpenAI's business always supports the nonprofit's main goal of creating AI for the benefit of humanity.

In Toner and McCauley's view, Altman's firing was a direct response to the fact that the for-profit OpenAI was no longer meeting that goal.

"The board's ability to uphold the company’s mission had become increasingly constrained due to long-standing patterns of behaviour exhibited by Mr Altman, which, among other things, we believe undermined the board’s oversight of key decisions and internal safety protocols," Toner and McCauley write in a guest article for The Economist.

Based on what they have seen, Toner and McCauley say the attempt at self-regulation has not worked, and regulators now need to get more involved in the market.

They also take issue with a report by a law firm that cleared Altman after an internal review and said that firing him was not necessary. OpenAI has not said how that decision was made and has not shared the report inside or outside the company, according to Toner and McCauley.

The way OpenAI has changed since Altman returned as CEO and regained his seat on the board, together with the recent departure of many safety experts, some of whom were sharply critical of OpenAI's safety practices, does not make the company look good, Toner and McCauley write.

Recently, it came to light that OpenAI used contracts that prevented former employees from criticizing the company. Altman says he didn't know about these clauses, even though he signed contracts that allowed them. OpenAI is now working to remove them from past contracts.

Summary
  • Helen Toner and Tasha McCauley, former OpenAI board members, were instrumental in ousting CEO Sam Altman in November 2023. They accuse him of undermining board oversight and cultivating a "toxic culture of lies."
  • Toner and McCauley are convinced that OpenAI is incapable of self-regulation. The concept of combining non-profit and for-profit organizations for self-regulation has failed. They call for more regulatory intervention.
  • Developments since Altman's return as CEO, his return to the board, and the departure of numerous safety researchers do not reflect well on OpenAI, they argue. Recently, it came to light that OpenAI used contracts that prevented former employees from criticizing the company.