
AI progress in 2025 will be "even more dramatic," says Anthropic co-founder

Image: Midjourney prompted by THE DECODER

Key Points

  • Anthropic co-founder Jack Clark anticipates a significant acceleration in AI development in 2025, driven by combining traditional model scaling with newer approaches like the test-time compute scaling behind OpenAI's o-models.
  • Test-time compute scaling lets AI models "think out loud" by spending additional computing power at inference time, opening up a new scaling axis that could further advance AI progress.
  • However, the significantly higher computing power required for test-time compute scaling makes operating costs less predictable.

OpenAI's recent success with its o3 model suggests AI development isn't slowing down - in fact, it might be picking up speed, according to Anthropic co-founder Jack Clark.

In his newsletter "Import AI," Clark pushes back against claims that AI development is hitting its limits. "Everyone who was telling you progress is slowing or scaling is hitting a wall is wrong," he writes.

Clark points to OpenAI's new o3 model as proof that there's still plenty of room for growth, but through a different approach. Instead of just making bigger models, o3 uses reinforcement learning and extra computing power while it runs.

Clark says this ability to "think out loud" while running opens up entirely new possibilities for scaling. He expects this trend to pick up steam in 2025, when companies start combining traditional approaches like larger base models with new ways of using compute during both training and inference. This mirrors what OpenAI said when they first introduced their o-model series.
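OpenAI hasn't published how o3 allocates its extra inference compute, but a common illustration of test-time scaling is best-of-n sampling: generate several candidate answers and keep the one a verifier rates highest. The minimal Python sketch below uses hypothetical generate_candidate and score_candidate stand-ins (not OpenAI's API or method) to show the idea:

```python
import random

def generate_candidate(prompt: str, seed: int) -> str:
    # Hypothetical stand-in for sampling one chain-of-thought
    # answer from a language model; not a real model call.
    random.seed(hash((prompt, seed)))
    return f"candidate answer #{seed}"

def score_candidate(prompt: str, answer: str) -> float:
    # Hypothetical verifier/reward model that rates an answer.
    return random.random()

def best_of_n(prompt: str, n: int) -> str:
    # Spend more inference compute by sampling n candidates and
    # keeping the highest-scoring one: quality tends to rise with
    # n, at the price of roughly n times the compute.
    candidates = [generate_candidate(prompt, i) for i in range(n)]
    return max(candidates, key=lambda a: score_candidate(prompt, a))

print(best_of_n("What is 17 * 24?", n=8))
```

Doubling n roughly doubles the compute bill for the same prompt - which is exactly the cost unpredictability Clark describes below.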


The price of progress

Clark believes most people aren't ready for how fast things are about to move. "I think basically no one is pricing in just how drastic the progress will be from here," he warns.

However, he points to computing costs as a major challenge. The most advanced version of o3 needs 170 times more computing power than its basic version, which already uses more resources than o1 - and o1 itself requires more power than GPT-4o.

These new systems make costs much harder to predict, Clark explains. In the past, expenses were straightforward - they mainly depended on model size and output length. But with o3, resource needs can vary dramatically based on the specific task.
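As a back-of-the-envelope illustration (with made-up prices, not OpenAI's actual rates), the sketch below contrasts the old roughly linear cost model with one where a task-dependent reasoning multiplier - up to the ~170x gap the article cites between o3's settings - drives the bill:

```python
# Illustrative cost model: prices and token counts are
# hypothetical, chosen only to show the shape of the problem.
PRICE_PER_1K_TOKENS = 0.06  # assumed flat rate, not real pricing

def classic_cost(output_tokens: int) -> float:
    # Pre-o3: cost scales with visible output length, so bills
    # are easy to forecast from expected response size.
    return output_tokens / 1000 * PRICE_PER_1K_TOKENS

def reasoning_cost(output_tokens: int, reasoning_multiplier: float) -> float:
    # o3-style: hidden reasoning compute inflates cost by a
    # task-dependent factor, so the same-length answer can cost
    # wildly different amounts depending on task difficulty.
    return classic_cost(output_tokens) * reasoning_multiplier

for task, mult in [("easy lookup", 1), ("hard reasoning task", 170)]:
    print(f"{task}: ${reasoning_cost(500, mult):.2f}")
# easy lookup: $0.03
# hard reasoning task: $5.10
```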

Despite these challenges, Clark is convinced that combining traditional scaling methods with new approaches will lead to "even more dramatic" AI advances in 2025 than we've seen so far.


Waiting for Anthropic's next move

Clark's predictions raise interesting questions about Anthropic's own plans. The company hasn't yet released a "reasoning" or "test-time" model to compete with OpenAI's o-series or Google's Gemini Flash Thinking.

Their previously announced Opus 3.5 flagship model remains on hold - reportedly because its performance improvements didn't justify the operating costs.

While some suggest this and similar delays point to broader scaling challenges in large language models, Opus 3.5 wasn't a complete setback. The model apparently helped train the new Sonnet 3.5, which has become the market's most popular language model.


Source: Import AI