In a new interview, Anthropic CEO Dario Amodei pushes back against his critics, explaining his company's controversial business strategy and the personal motivation behind his focus on AI safety.

Speaking on the "Big Technology Podcast," Amodei addressed some of the AI industry's most pressing questions. He forcefully countered criticism from industry leaders, laid out his company's philosophy, and explained why, despite his warnings about the risks, he's not a "doomer."

Amodei's philosophy and personal motivation

Amodei says the death of his father is a driving force behind his stance on AI. His father died in 2006 from a disease for which a cure was developed just a few years later, raising the recovery rate from 50 to 95 percent. The experience taught him the urgency of realizing AI's positive applications. He gets "very angry" when people call him a "doomer" who wants to slow down development. "I understand the benefit of this technology," Amodei stressed. It is precisely because the potential for a better world is so great that he feels obligated to warn about the risks. He accuses some "accelerationists" of lacking a "humanistic sense of the benefit of the technology" and of being driven by adrenaline.

This deeply personal conviction also shaped his professional path, particularly his much-discussed departure from OpenAI, long before the later turmoil that led to the brief firing and subsequent rehiring of CEO Sam Altman. Amodei says he left not because of technical disagreements but because of a fundamental loss of trust in the leadership's sincerity. Although he led the GPT-3 project, it became clear to him that the key decisions on governance, safety studies, and release strategies were made at the top. "If you're working for someone whose motivations are not sincere, who's not an honest person, who does not truly want to make the world better, it's not going to work," Amodei said. In that case, you're just contributing to "something bad."

During the interview, Amodei also sharply criticized the extreme positions in the current AI debate as "intellectually and morally unserious." On one side are the "doomers," who claim it's logically provable that AI can't be made safe. He dismissed their arguments as "gobbledegook" and "nonsense." On the other side are business leaders sitting on "20 trillion dollars of capital" who dismiss safety concerns as attempts to control the industry. He finds their calls to avoid regulation for ten years and their "ad hominem attacks" equally unserious. What's needed, he argued, is "thoughtfulness, honesty, and more people willing to go against their own interests."

Anthropic's business model and strategy

This mission, focused on safety and responsibility, has to survive the harsh realities of the AI market. Amodei offered a candid look at Anthropic's unconventional business strategy, one that likely mirrors much of the AI industry. When asked about profitability, he explained that the company is deliberately unprofitable because each new AI model is treated as a massive reinvestment in the future. He illustrated this with a thought experiment: a model trained in 2023 for $100 million might generate $200 million in revenue in 2024. But if the company spends a billion dollars that same year to train its successor, it ends the year with an $800 million loss. "Every model is profitable, but the company is unprofitable every year," Amodei said.
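For illustration, here is the arithmetic of that thought experiment as a minimal Python sketch, using only the figures Amodei cites (the variable names and structure are our own, not Anthropic's accounting):

```python
# Figures from Amodei's thought experiment: a model trained in 2023 for
# $100M earns $200M in 2024, while $1B is spent that year on its successor.
train_cost_2023_model = 100e6   # training cost of the 2023 model
revenue_2024 = 200e6            # revenue that model generates in 2024
train_cost_2024_model = 1e9     # spent in 2024 to train the successor

# Viewed in isolation, the 2023 model is profitable...
model_profit = revenue_2024 - train_cost_2023_model
print(f"2023 model, standalone: ${model_profit / 1e6:+,.0f}M")   # +$100M

# ...but the company's 2024 books show that year's revenue minus the
# successor's much larger training bill.
company_2024 = revenue_2024 - train_cost_2024_model
print(f"Company P&L for 2024:   ${company_2024 / 1e6:+,.0f}M")   # -$800M
```

Viewed per model, each generation more than pays for itself; viewed per calendar year, the ever-larger training bill for the next generation swamps the revenue of the current one.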

He argues that focusing on enterprise customers drives the development of smarter AI more effectively than focusing on consumers. Improving a model from an "undergrad to a graduate level in biochemistry" is of little interest to 99 percent of consumers. For a company like Pfizer, however, it would be "the biggest deal in the world" and potentially "ten times more value." Solving real-world problems like these, he suggests, creates commercial incentives that push model capabilities forward, which in turn serves the goal of realizing AI's positive applications.

The single largest cost factor in AI development isn't running the models but investing in training the next generation. Amodei broke down the costs: Inference, the process of running the models, is "already fairly profitable." Personnel and building costs are also not decisive in the grand scheme of things. The unprofitability is a conscious strategic choice based on the belief that scaling laws will continue and that the company must remain at the forefront of technological development.

Critique of competitors and the AI industry

Amodei vehemently rejected the accusation from Nvidia CEO Jensen Huang that he believes "he's the only one who can build this safely and therefore wants to control the entire industry." "I've never said anything like that. That's an outrageous lie," Amodei countered. He pointed instead to Anthropic's philosophy of a "race to the top," where the goal is to set positive standards. As an example, he cited the publication of Anthropic's "Responsible Scaling Policy," which he said "gave those people permission" within other companies to argue for similar guidelines. This creates a dynamic in which "it doesn't matter who wins, everyone wins."

He also sharply criticized Mark Zuckerberg's talent acquisition strategy at Meta. He believes Meta is "trying to buy something that cannot be bought. And that is alignment with the mission." According to Amodei, many of his employees have turned down lucrative offers from Meta, some "without even talking to Mark Zuckerberg." Anthropic deliberately chose not to make counteroffers, to avoid letting panic destroy its fairness-based corporate culture. He is "pretty bearish" on the success of Meta's approach.

Amodei also considers the much-discussed issue of open-source AI a "red herring." He believes vocabulary from previous technology cycles, like "commoditization," doesn't apply to AI. With so-called "open weights" models, you can see the weights but can't truly understand what's happening inside. The benefit of many people working together additively on software also doesn't apply to AI models "in the same way." For Amodei, only quality matters: When a new model is released, he doesn't ask about its license but only, "Is it a good model? Is it better than us?"

Insights into AI technology and its development

The history of GPT-2 and GPT-3 at OpenAI shows how closely progress and safety are intertwined, according to Amodei. These models were originally a byproduct of safety research. Amodei and his future co-founders developed Reinforcement Learning from Human Feedback (RLHF) to better control AI models. Since the technique didn't work on the smaller GPT-1 models of the time, scaling up to GPT-2 and GPT-3 was necessary to test and refine RLHF on more complex systems.

Against this backdrop, Amodei warns against underestimating the speed of AI development. He believes most people are "fooled by the exponential" trend. He draws a parallel to the internet in the 1990s: when a technology doubles every six months, it looks like it's just getting started two years before its breakthrough, yet a major shift is just around the corner. He sees Anthropic's rapid revenue growth, from zero to over four billion dollars in annualized revenue in less than three years, as proof of this dynamic.
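A back-of-the-envelope sketch makes the "fooled by the exponential" point concrete; the six-month doubling period comes from Amodei's example, while the rest of the numbers are ours:

```python
# A quantity that doubles every six months sits at just 1/16 of its
# eventual "breakthrough" level two years beforehand, which is why
# exponential trends look unimpressive right up until they don't.
doubling_period_months = 6

for months_before in (24, 18, 12, 6, 0):
    fraction = 1 / 2 ** (months_before / doubling_period_months)
    print(f"{months_before:2d} months out: {fraction:7.2%} of breakthrough level")
```

Two years before the breakthrough, the trend is at just 6.25 percent of its eventual level, looking to a casual observer like a technology that barely works.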

At the same time, Amodei remains realistic despite his general optimism. He acknowledges a "20 to 25 percent chance" that progress in AI models could stall in the next two years due to unknown technical hurdles or bottlenecks in data and computing power. If that happens and his warnings prove unfounded, he has "absolutely no problem" with people making fun of him.

Summary
  • Anthropic CEO Dario Amodei explained that his drive to focus on AI safety stems from personal loss and a belief in the technology's positive impact, but he rejects being labeled a "doomer" and criticizes both extreme pessimists and industry leaders who downplay safety concerns.
  • Amodei outlined Anthropic's strategy of reinvesting heavily in new AI models, resulting in annual company losses despite profitable products, and emphasized targeting enterprise clients over consumers to maximize real-world benefits and accelerate meaningful progress.
  • He dismissed accusations of wanting to control the industry, argued that open-source debates miss the real issues, and stressed that AI progress is happening faster than most realize, while also acknowledging the risk that development could slow due to unforeseen technical or resource limits.
Max is the managing editor of THE DECODER, bringing his background in philosophy to explore questions of consciousness and whether machines truly think or just pretend to.