Anthropic calls Pentagon's supply chain risk label illegal and vows to challenge it in court
Anthropic says it will take the US government to court after Secretary of Defense Pete Hegseth moved to classify the AI company as a supply chain risk, a designation previously reserved for foreign adversaries. Anthropic calls the classification illegal and says it will "challenge any supply chain risk designation in court."
We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government.
Anthropic
Hegseth also implied military suppliers should no longer be allowed to do business with Anthropic. But according to Anthropic, there's no legal basis for that move: the classification under 10 USC 3252 only applies to the use of Claude in direct contracts with the Department of Defense. For private customers, commercial contracts, and access through the API or claude.ai, nothing would change.
The conflict traces back to a failed negotiation. Anthropic refused to allow Claude to be used for mass domestic surveillance or fully autonomous weapons systems, arguing that current AI models are too unreliable for these purposes and that mass surveillance violates fundamental rights. OpenAI has since taken over the deal.