
Gary Marcus calls the belief in LLM understanding one of "the most profound illusions of our time"

Gary Marcus, a longtime critic of LLM hype, argues in a conversation with chess grandmaster Garry Kasparov that large language models only create the appearance of understanding, not genuine intelligence.

"It's one of the most profound illusions of our time that most people see these systems and attribute an understanding to them that they don't really have."

Gary Marcus

He explains that while language models can, for example, recite the rules of chess by generating text patterned on other texts, they can't actually play the game, because they lack an internal model of what's happening on the board.


"They will repeat the rules, because in the way that they create text based on other texts, they'll be there. [...] But when it actually comes to playing the game, it doesn't have an internal model of what's going on."

For Marcus, this gap between surface-level performance and true comprehension is at the heart of the AI "illusion of intelligence."
