OpenAI, Anthropic, and Google team up against unauthorized Chinese model copying
OpenAI, Anthropic, and Google have started working together to combat the unauthorized copying of their AI models by Chinese competitors, according to Bloomberg. The three companies are sharing information through the "Frontier Model Forum," founded in 2023, to detect so-called adversarial distillation. In distillation, the outputs of an existing AI model are used to train a cheaper copycat model. One of the first examples was Stanford's Alpaca model, which demonstrated the feasibility of the approach, but the practice has since become a real problem for US companies.
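The distillation technique described above can be illustrated with a minimal, self-contained sketch: a "student" model is trained to match the output distribution of a "teacher" model, using the teacher's soft probabilities as training targets. Everything here, including the toy logits and the temperature value, is an illustrative assumption, not any lab's actual pipeline.

```python
# Minimal sketch of knowledge distillation: the student is pushed toward
# the teacher's output distribution. All values below are illustrative.
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution.
    A temperature > 1 softens the distribution, exposing more of the
    teacher's information about relative similarities between answers."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q): how far the student's distribution q is from the
    teacher's distribution p. Zero only when they match exactly."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Teacher outputs (e.g. harvested via API queries) become soft labels.
teacher_logits = [4.0, 1.0, 0.2]   # hypothetical teacher scores
student_logits = [1.0, 1.5, 0.5]   # hypothetical student scores

T = 2.0  # distillation temperature (assumed hyperparameter)
soft_targets = softmax(teacher_logits, T)
student_probs = softmax(student_logits, T)

# In real training this loss would be minimized by gradient descent,
# pulling the student's distribution toward the teacher's.
loss = kl_divergence(soft_targets, student_probs)
```

Scaled up, the same idea lets a copycat model inherit much of a frontier model's behavior from API outputs alone, which is why the labs are now pooling detection signals.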
US authorities estimate that adversarial distillation costs American AI labs billions of dollars in lost revenue each year, Bloomberg reports. OpenAI had already warned Congress in February that DeepSeek was using increasingly sophisticated methods to extract data from US models. Anthropic identified DeepSeek, Moonshot, and MiniMax as actors involved in the practice. The collaboration mirrors how the cybersecurity industry operates, where companies routinely share attack data with one another.