
OpenAI's safety brain drain finally gets an explanation and it's just Sam Altman's vibes

OpenAI (Screenshot via YouTube)

In a new New Yorker profile, Sam Altman explains why so many safety researchers have left OpenAI.

"My vibes don't really fit with a lot of this traditional A.I.-safety stuff," Altman said. Ironically, those off vibes helped create OpenAI's biggest competitor: Anthropic was founded by former OpenAI safety researchers who left over exactly these concerns.

It's the most telling explanation yet for the long-simmering rift between OpenAI's commercial direction and its safety camp. The company disbanded safety-focused teams and allegedly scaled back safety measures. When employees raised concerns after OpenAI's recent entry into Pentagon contracts, Altman was blunt: "So maybe you think the Iran strike was good and the Venezuela invasion was bad. You don't get to weigh in on that."

Overall, the fun-to-read profile, based on more than 100 interviews and internal documents, paints Altman as deeply polarizing: eager to please yet, according to one former board member, indifferent to the consequences of potential deceptions. Altman's take:


I think what some people want is a leader who is going to be absolutely sure of what they think and stick with it, and it’s not going to change. And we are in a field, in an area, where things change extremely quickly.

Sam Altman, describing his shifting commitments

A case in point: In 2019, Altman publicly warned against releasing GPT-2 in full because it was supposedly too dangerous. A few years later, he made models many times more capable available to everyone for free.


Source: The New Yorker