
Recent gold medal wins by Google DeepMind's and OpenAI's AI systems at the International Mathematical Olympiad (IMO) are fueling an old debate about the nature of intelligence and the role of symbols, pitting deep learning approaches against classic symbolic AI.


Both companies announced that their AI models reached gold medal-level performance at the IMO. The striking detail: this was reportedly achieved solely through natural language reasoning, without symbolic tools during problem solving—though such tools, like formal verifiers, may have been involved during training.

DeepMind researcher Andrew Lampinen calls this part of a "long-term shift" that moves AI closer to human intelligence. The results challenge a core assumption in AI research and spark new debate over the right path to advanced logical reasoning.

The old school: AI as formal symbol manipulation

In the era of "Good Old-Fashioned AI" (GOFAI), intelligence was defined as formal manipulation of discrete symbols. Even today, proponents of neuro-symbolic approaches argue that "pure symbol manipulation is the only 'real' intelligence," as Lampinen put it on X. From this perspective, messy deep learning should serve only as an assistant to an exact symbolic solver.


This hybrid method has produced real results: DeepMind's AlphaProof system, which earned a silver medal at last year's IMO, relied on a formal, verifiable language. But according to Lampinen, this approach "gets things precisely backwards about real intelligence."

Symbols as tools, not as a cage

Lampinen takes a different view. For humans, he argues, mathematical symbols and formal systems—like the programming language Lean—are "tools we learn how to use," not rigid structures that confine our thinking. In a podcast interview, Lampinen explained that symbols only gain meaning through use and shared agreement, echoing philosophical ideas from Wittgenstein. A symbol means something "to someone," so its meaning is subjective, not fixed.
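To make the contrast concrete, here is what a formal, machine-checkable statement looks like in Lean (a minimal illustrative sketch, not taken from AlphaProof or any IMO problem):

```lean
-- A trivial theorem in Lean 4's formal language.
-- Every step must be checked by the kernel; the verifier accepts
-- nothing that cannot be reduced to the rules of the system.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```

A human mathematician would simply say "addition is commutative" and move on; in a formal system, even that step must be spelled out in symbols the checker can follow. This rigidity is exactly what Lampinen means by a cage: the formal language guarantees correctness, but the intuition about *why* a step is worth taking lives outside it.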

Even in strict logical fields like mathematics, this intuitive, subjective level is crucial. A paper by Lampinen and colleagues cites mathematicians who emphasize that it's the "ideas behind the manipulations" that drive progress—not just a "game played with meaningless tokens." Human semantic intuition acts as a kind of heuristic, helping us navigate the huge space of possible logical steps.

The IMO results seem to back up this perspective. They suggest that deep learning models working entirely in natural language can now reach human-level performance. Google's "Gemini Deep Think" reportedly uses special reinforcement learning techniques and extra "thinking time." OpenAI's model is described as a generalist reasoning system that works for hours to find solutions.

Lampinen says symbolic tools will still have a place, but only as tools, not as the core of intelligence. And for AI to contribute meaningfully to mathematical research, it will need to make a big jump: real breakthroughs often take not just hours, but months or even years of deep thought.

Summary
  • Google DeepMind's and OpenAI's AI systems recently achieved gold medal-level performance at the International Mathematical Olympiad using only natural language reasoning, without relying on symbolic tools during problem solving.
  • The achievement has reignited debate about the nature of intelligence, challenging the traditional view that formal symbol manipulation is essential and instead suggesting that symbols are flexible tools shaped by use and shared meaning, as argued by DeepMind researcher Andrew Lampinen.
  • While symbolic methods will continue to play a role, the results indicate that deep learning models working purely in natural language can now match human mathematical reasoning, though true breakthroughs in research may still require much longer periods of deep exploration.
Max is the managing editor of THE DECODER, bringing his background in philosophy to explore questions of consciousness and whether machines truly think or just pretend to.