
US War Department CTO says Anthropic's AI models "pollute" the supply chain with built-in ethics

Emil Michael, chief technology officer of the US Department of War, made clear that classifying Anthropic as a supply chain risk is an ideologically motivated move. Claude models "pollute" the supply chain because they have a "different policy preference" baked into them, Michael told CNBC. He pointed to Anthropic's "constitution," a ruleset emphasizing ethics and safety, which he said could result in soldiers receiving "ineffective weapons, ineffective body armor, ineffective protection." The measure was "not meant to be punitive," he added.

Anthropic is the first US company to receive this classification, which is normally reserved for foreign adversaries. The AI company is suing over the designation and has drawn support from Microsoft, OpenAI, and Google employees, as well as former US military personnel. Anthropic has previously pushed back against its own AI models being used for US mass surveillance and autonomous weapons.

The administration has already signaled its intent to control AI along ideological lines by enacting regulations targeting so-called "woke AI," framed as a commitment to political neutrality. The approach echoes the Chinese government's own efforts to exert political control over AI models.



Source: CNBC