Yann LeCun, Meta's chief AI scientist, has taken a direct shot at Anthropic CEO Dario Amodei on Threads, making clear just how sharply the AI community is split over the future of general artificial intelligence.

When a Threads user asked if Amodei was an "AI Doomer," an "AI Hyper," or both, LeCun didn't hold back: "He is a doomer, but he keeps working on 'AGI'." LeCun argued there are only two possible explanations for this: "He is intellectually dishonest and/or morally corrupt."

Alternatively, LeCun suggested, Amodei could be suffering from a "huge superiority complex," believing "only he is enlightened enough to have access to AI, but the unwashed masses are too stupid or immoral to use such a powerful tool." In LeCun's view, Amodei is "deluded about the dangers and power of current AI systems."

LeCun publicly criticizes Anthropic CEO Dario Amodei on Threads, accusing him of intellectual dishonesty or moral corruption for pursuing AGI while warning of AI risks. | Image: Screenshot via Threads

Fundamental disagreements over AI's future

LeCun's remarks highlight a much deeper debate about the direction of AI research. Companies like Anthropic and OpenAI are racing to commercialize ever more powerful large language models (LLMs), often warning that these systems could pose existential risks to humanity. LeCun sees this narrative—and the focus on LLMs themselves—as misguided if the goal is to achieve genuine, human-level intelligence.

He points out that LLMs like GPT-X or Claude have significant limitations. According to LeCun, these models struggle with basic logic, lack real-world understanding, and cannot retain information long-term. He argues they are incapable of rational thinking or complex planning, and ultimately can't be relied on since they only produce convincing answers when their training data covers the topic.

"If you are a student interested in building the next generation of AI systems, don't work on LLMs," LeCun said a year ago. He believes the field is already dominated by major companies, and that LLMs aren't the path to real intelligence.

Instead, LeCun and his team at Meta are focused on "world models": AI systems designed to build a genuine understanding of their environment. In a recent study, Meta researchers introduced V-JEPA, an AI model that learns intuitive physical reasoning from videos through self-supervised training. Compared to multimodal LLMs like Gemini or Qwen, V-JEPA demonstrated a much stronger grasp of physics, despite needing far less training data.
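The core idea behind JEPA-style world models is to predict the missing parts of an input in representation space rather than in pixel space. The NumPy toy below sketches that objective under loose assumptions: the random linear maps stand in for the real video encoders and predictor, and every name and dimension here is illustrative, not Meta's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes; real V-JEPA uses ViT encoders over spatio-temporal video patches.
n_patches, d_in, d_lat = 16, 32, 8

# Stand-ins for the three components: a context encoder, a target encoder
# (in V-JEPA an exponential-moving-average copy of the context encoder),
# and a predictor. Random linear maps are purely illustrative.
W_ctx = rng.normal(size=(d_in, d_lat))
W_tgt = 0.99 * W_ctx + 0.01 * rng.normal(size=(d_in, d_lat))  # EMA-style copy
W_pred = rng.normal(size=(d_lat, d_lat))

x = rng.normal(size=(n_patches, d_in))   # flattened video patches
mask = rng.random(n_patches) < 0.5       # patches hidden from the context
mask[0] = True                           # ensure at least one masked patch

# The context encoder only sees the visible patches; targets use all patches.
z_ctx = np.where(mask[:, None], 0.0, x) @ W_ctx
z_tgt = x @ W_tgt

# The predictor fills in latent representations for the masked patches.
z_hat = z_ctx @ W_pred

# JEPA objective: regress predicted latents onto target latents at the
# masked positions. No pixel-level reconstruction is involved.
loss = np.mean((z_hat[mask] - z_tgt[mask]) ** 2)
print(f"latent prediction loss: {loss:.3f}")
```

Predicting latents instead of pixels lets the model skip modeling unpredictable visual detail, which is one commonly cited reason such approaches can get by with less training data.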

Summary
  • Meta's Yann LeCun has accused Anthropic CEO Dario Amodei of intellectual dishonesty and moral corruption, criticizing him for pursuing general artificial intelligence despite warnings about its risks.
  • LeCun argues that companies like Anthropic and OpenAI are being inconsistent by warning about existential AI risks while simultaneously developing more powerful AI systems, and he believes that fears about large language models are overblown.
  • He advocates for the use of world models such as V-JEPA, which use self-supervised learning to understand physical relationships more effectively and with less training data than multimodal language models.
Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.