OpenAI's recent success with its o3 model suggests AI development isn't slowing down - in fact, it might be picking up speed, according to Anthropic co-founder Jack Clark.
In his newsletter "Import AI," Clark pushes back against claims that AI development is hitting its limits. "Everyone who was telling you progress is slowing or scaling is hitting a wall is wrong," he writes.
Clark points to OpenAI's new o3 model as proof that there's still plenty of room for growth - just through a different approach. Instead of simply making models bigger, o3 uses reinforcement learning and additional computing power at inference time, while the model is running.
Clark says this ability to "think out loud" while running opens up entirely new possibilities for scaling. He expects this trend to pick up steam in 2025, when companies start combining traditional approaches like larger base models with new ways of using compute during both training and inference. This mirrors what OpenAI said when they first introduced their o-model series.
The price of progress
Clark believes most people aren't ready for how fast things are about to move. "I think basically no one is pricing in just how drastic the progress will be from here," he warns.
However, he points to computing costs as a major challenge. The most advanced version of o3 needs 170 times more computing power than its basic version, which already uses more resources than o1 - and o1 itself requires more power than GPT-4o.
These new systems make costs much harder to predict, Clark explains. In the past, expenses were straightforward - they mainly depended on model size and output length. But with o3, resource needs can vary dramatically based on the specific task.
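To see why pricing becomes harder to predict, consider a minimal sketch. All rates and token counts below are hypothetical illustrations, not OpenAI's actual pricing or o3's actual token usage; the point is only that billing hidden "thinking" tokens makes two identical-looking answers cost very different amounts.

```python
def classic_cost(output_tokens: int, price_per_1k: float = 0.01) -> float:
    """Old-style estimate: cost scales directly with visible output length."""
    return output_tokens / 1000 * price_per_1k


def reasoning_cost(output_tokens: int, reasoning_tokens: int,
                   price_per_1k: float = 0.01) -> float:
    """Reasoning-style estimate: hidden 'thinking' tokens are billed too,
    and their count varies widely with task difficulty."""
    return (output_tokens + reasoning_tokens) / 1000 * price_per_1k


# Two requests that each produce the same 500-token answer:
easy = reasoning_cost(500, reasoning_tokens=2_000)    # simple task, little thinking
hard = reasoning_cost(500, reasoning_tokens=340_000)  # hard task, heavy search

# With classic pricing both would cost the same; with reasoning models,
# the hard task is more than a hundred times as expensive.
print(classic_cost(500), easy, hard)
```

Under classic per-token pricing, a provider could quote a near-fixed cost per answer of a given length; with test-time compute, the bill depends on how much the model has to "think," which is only known after the fact.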
Despite these challenges, Clark is convinced that combining traditional scaling methods with new approaches will lead to "even more dramatic" AI advances in 2025 than we've seen so far.
Waiting for Anthropic's next move
Clark's predictions raise interesting questions about Anthropic's own plans. The company hasn't yet released a "reasoning" or "test-time" model to compete with OpenAI's o-Series or Google's Gemini Flash Thinking.
The company's previously announced Opus 3.5 flagship model remains on hold - reportedly because its performance gains didn't justify the operating costs.
While some see this and similar delays as evidence of broader scaling challenges in large language models, Opus 3.5 wasn't a complete setback. The model apparently helped train the new Sonnet 3.5, which has become one of the market's most popular language models.