Anthropic accuses Deepseek, Moonshot, and MiniMax of stealing Claude's AI data through 16 million queries
Anthropic says it has caught the Chinese AI labs Deepseek, Moonshot, and MiniMax running large-scale distillation attacks on Claude. In distillation, a smaller model is trained on the outputs of a stronger one. According to Anthropic, over 24,000 fake accounts fired off more than 16 million queries targeting Claude's reasoning, programming, and tool-use capabilities, with the labs routing traffic through proxy services to bypass China's access restrictions.
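To make the distillation idea concrete, here is a minimal sketch of the core training signal, in plain Python with no ML framework. It shows the classic soft-target loss: the student model is pushed to match the teacher's temperature-softened output distribution. All function names and the toy logits are illustrative assumptions, not anything from Anthropic or the labs involved.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher temperature softens the distribution,
    # exposing more of the teacher's "dark knowledge" about wrong answers.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # Cross-entropy of the student's softened distribution against the
    # teacher's softened distribution (the "soft targets").
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

# The loss drops as the student's logits track the teacher's:
aligned = distillation_loss([4.0, 1.0, 0.5], [3.9, 1.1, 0.4])
diverged = distillation_loss([4.0, 1.0, 0.5], [0.5, 1.0, 4.0])
```

In the API-scraping scenario described here, an attacker does not see raw logits at all; they only collect the stronger model's text outputs and fine-tune on those, which is a cruder but conceptually similar form of the same technique.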
| Lab | Requests | Targets |
|---|---|---|
| Deepseek | 150,000+ | Extracting reasoning steps, reward model data for reinforcement learning, censorship-compliant answers on politically sensitive topics |
| Moonshot AI | 3.4 million+ | Agent-based reasoning, tool usage, programming, data analysis, computer vision, reconstructing Claude's thought processes |
| MiniMax | 13 million+ | Agent-based programming, tool usage and orchestration; pivoted to new Claude model within 24 hours |
Deepseek specifically targeted Claude's reasoning chain, extracting thought processes and censorship-compliant answers on sensitive topics. MiniMax ran the biggest campaign by far, with over 13 million requests. When Anthropic shipped a new model, MiniMax pivoted within 24 hours and redirected nearly half its traffic to the updated system, according to Anthropic.
OpenAI and Google report similar attempts from Chinese labs. Anthropic is calling on the industry and policymakers to mount a coordinated response.