Yann LeCun calls general intelligence "complete BS" and DeepMind CEO Hassabis fires back publicly
Meta's outgoing chief AI scientist thinks the concept of "general intelligence" is meaningless. Google DeepMind's head publicly disagrees, accusing him of a fundamental category error.
Two of AI research's most influential figures have staked out opposing positions on a concept worth billions in investment. Yann LeCun, Meta's outgoing chief AI scientist and Turing Award winner, called the concept of "general intelligence" "complete BS" on the AI podcast "The Information Bottleneck".
Demis Hassabis, co-founder and CEO of Google DeepMind, shot back on X with unusual directness: LeCun is "just plain incorrect." The disagreement centers on a term that has shaped the AI industry for years.
LeCun argues human intelligence only appears general
LeCun put it bluntly: "So first of all, there is no such thing as general intelligence. This concept makes absolutely no sense because it's really designed to designate human-level intelligence," he said in the podcast. The problem, he argues, is that human intelligence is itself highly specialized. "We think of ourselves as being general, but it's simply an illusion."
For LeCun, intelligence boils down to predicting the consequences of actions and using those predictions for planning. Physics serves as his model: "What should I represent about reality to be able to make predictive models? And that's really what intelligence is about."
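To make that definition concrete, here is a minimal sketch of prediction-plus-planning in Python. The toy world model and goal are assumptions made up for illustration, not anything from LeCun's own work; the point is only that intelligence, on his account, means simulating the consequences of candidate actions and choosing among them.

```python
# Minimal sketch of LeCun's framing: predict consequences, then plan.
# The dynamics and goal here are toy stand-ins, not a real learned model.

def world_model(state: float, action: float) -> float:
    """Toy predictive model: estimate the next state an action would produce."""
    return state + action  # stand-in for a learned dynamics model

def plan(state: float, actions: list[float], goal: float) -> float:
    """Pick the action whose predicted consequence lands closest to the goal."""
    return min(actions, key=lambda a: abs(world_model(state, a) - goal))

print(plan(state=0.0, actions=[-1.0, 0.5, 2.0], goal=1.0))  # -> 0.5
```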
This definition shapes his take on AI risks. LeCun distances himself from so-called "doomers" who warn about superintelligent AI: "It's not because something is intelligent that it wants to dominate. Those are two different things."
He's made this case before—even if AI became far more capable than humans, people would still remain at the top of the food chain.
LeCun reserves his harshest criticism for those predicting AGI soon. "You have all those people bloviating about AGI in a year or two. Just completely delusional, just complete delusion," he said. The real world is far more complicated, and "tokenizing the world" through neural language models won't get us there. "It's just not going to happen."
Hassabis draws a line between general and universal intelligence
Hassabis responded on X with a technical-philosophical distinction: LeCun is "…confusing general intelligence with universal intelligence." "Brains are the most exquisite and complex phenomena we know of in the universe (so far), and they are in fact extremely general," he wrote.
Hassabis concedes the practical constraint behind the no-free-lunch theorem: any finite, real-world system has to specialize to some degree around the target distribution it is learning.
But that’s not the core issue, he argues: "But the point about generality is that in theory, in the Turing Machine sense, the architecture of such a general system is capable of learning anything computable given enough time and memory (and data) … and the human brain (and AI foundation models) are approximate Turing Machines," Hassabis wrote.
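For readers unfamiliar with the reference: a Turing machine is a fixed, very simple rule table that, given enough tape (memory) and steps (time), can compute anything computable. The toy simulator below is an illustrative sketch of that mechanism; the bit-flipping program is a made-up example, not something from Hassabis's post.

```python
# Minimal Turing machine sketch: a fixed mechanism that, given enough tape
# (memory) and steps (time), can compute anything computable. The bit-flipping
# program below is a toy example for illustration only.
from collections import defaultdict

def run(transitions, tape, state="start", steps=1000):
    """Run a one-tape Turing machine. `transitions` maps (state, symbol)
    to (next_state, symbol_to_write, head_move)."""
    cells = defaultdict(lambda: "_", enumerate(tape))  # blank cells read as "_"
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        state, cells[head], move = transitions[(state, cells[head])]
        head += move
    return "".join(cells[i] for i in sorted(cells))

# Toy program: flip every bit, halt at the first blank cell.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run(flip, "10110"))  # -> 01001_
```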
The chess player argument
LeCun points to chess as proof of human intelligence's limitations. "Chess. We suck," he said in the podcast. Humans are terrible at chess and Go; machines are far better (thanks in no small part to Google DeepMind). The reason comes down to brain architecture: "Because of the speed of tree exploration and the memory that's required for tree exploration. We just don't have enough memory capacity to do breadth-first tree exploration. So we suck at it."
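LeCun's memory point is easy to check with back-of-the-envelope arithmetic. The sketch below assumes chess's commonly cited average branching factor of roughly 35 legal moves per position; the exact figure doesn't matter, only the exponential growth of the breadth-first frontier.

```python
# Back-of-the-envelope: how many positions a breadth-first search must hold
# in memory at each depth, assuming ~35 legal moves per chess position
# (a commonly cited average; the exact number is not the point).

BRANCHING_FACTOR = 35

for depth in range(1, 7):
    frontier = BRANCHING_FACTOR ** depth  # positions in memory at this depth
    print(f"depth {depth}: ~{frontier:,} positions in the frontier")
```

By depth 6 the frontier is already around 1.8 billion positions, the kind of memory demand LeCun argues brains simply don't support.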
LeCun invokes Moravec's paradox: tasks humans consider uniquely intellectual, such as physics and calculus, were mastered by computers decades ago. But what any cat can do, robots still can't. Back in the 1960s, Marvin Minsky predicted a computer would be the world's best chess player within ten years. "It took a bit longer than that," LeCun notes.
Hassabis shifts focus from playing to inventing
Hassabis reframed LeCun’s chess argument entirely: "…it’s amazing that humans could have invented chess in the first place (and all the other aspects of modern civilization from science to 747s!)."
Magnus Carlsen may not be strictly optimal—he has finite memory and limited time to make a decision. Still, Hassabis stresses: "…but it’s incredible what he and we can do with our brains given they were evolved for hunter gathering."
DeepMind appears increasingly confident about AGI
LeCun previously clashed with DeepMind researcher Adam Brown in what was, by AI researcher standards, an emotional debate. There, too, he argued that LLMs alone can't achieve AGI and that more fundamental research is needed. LeCun says the AI industry is falling into the same trap with LLMs that it has before: each time convinced it's finally on the right track to AGI, only to hit a wall.
Brown disagreed: massive scaling of simple rule-based next-token prediction creates emergent complexity that people perceive as understanding, and could even lead to awareness.
DeepMind co-founder Shane Legg said on the company's official podcast in mid-December that a form of "minimal AGI," meaning AI agents capable of handling many human cognitive tasks, could arrive as early as 2028.
Hassabis publicly pushing back on LeCun, in line with his colleagues, signals that DeepMind remains firmly committed to the AGI concept and confident it's approaching. Historically, betting against DeepMind hasn't paid off.