Gary Marcus, a prominent critic of LLM hype, argues in a conversation with chess grandmaster Garry Kasparov that large language models only create the appearance of understanding, not genuine intelligence.
"It's one of the most profound illusions of our time that most people see these systems and attribute an understanding to them that they don't really have."
Gary Marcus
He explains that while language models can, for example, recite the rules of chess, since those rules appear in the texts they generate from, they can't actually play the game, because they lack any internal model of what is happening on the board.
"They will repeat the rules, because in the way that they create text based on other texts, they'll be there. [...] But when it actually comes to playing the game, it doesn't have an internal model of what's going on."
For Marcus, this gap between surface-level performance and true comprehension is at the heart of the AI "illusion of intelligence."