LLM hype critic Gary Marcus argues in a conversation with chess grandmaster Garry Kasparov that large language models only create the appearance of understanding, not genuine intelligence.


"It's one of the most profound illusions of our time that most people see these systems and attribute an understanding to them that they don't really have."

Gary Marcus

He explains that while language models can recite the rules of chess, having absorbed them from text in their training data, they can't actually play the game, because they lack any internal model of what's happening on the board.

"They will repeat the rules, because in the way that they create text based on other texts, they'll be there. [...] But when it actually comes to playing the game, it doesn't have an internal model of what's going on."

For Marcus, this gap between surface-level performance and true comprehension is at the heart of the AI "illusion of intelligence."

Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.