
Pentagon and Anthropic clash over AI weapons and surveillance safeguards

Image: Sora, prompted by THE DECODER

Key Points

  • Anthropic is demanding safeguards to prevent its AI tools from being used for autonomous weapons control without human oversight or for surveillance of American citizens.
  • The Pentagon insists it can use commercial AI regardless of company policies as long as US laws are followed, according to a January 9 policy memo.
  • Anthropic CEO Dario Amodei warned that AI should support defense in all ways "except those which would make us more like our autocratic adversaries."

The Pentagon wants unrestricted access to AI technology. Anthropic is demanding guarantees against autonomous weapons control and domestic surveillance. A $200 million contract hangs in the balance.

The Pentagon and AI company Anthropic are locked in a dispute over military use of AI technology, Reuters reports, citing multiple sources familiar with the matter. The conflict centers on safeguards: Anthropic wants guarantees that its AI tools won't be used for autonomous weapons control without adequate human oversight or for surveillance of American citizens.

The Pentagon, which the Trump administration has renamed the "Department of War," is rejecting these restrictions. According to a January 9 memo on AI strategy, the department insists on using commercial AI technology regardless of manufacturers' usage policies, as long as US laws are followed. Negotiations over a contract worth up to $200 million are currently stalled.

Anthropic walks a fine line between ethics and defense contracts

Anthropic CEO Dario Amodei wrote in a blog post this week that AI should support national defense in all ways "except those which would make us more like our autocratic adversaries." He also called the fatal shootings of US citizens during protests against immigration enforcement in Minneapolis a "horror." According to Reuters, these incidents have deepened concerns among some in Silicon Valley that the government could use their tools in potential violence. Anthropic has contracts with Palantir, which works directly with ICE, the agency involved in the Minneapolis incidents.


The Pentagon would likely need Anthropic's cooperation, however, since the company's models are trained to avoid potentially harmful actions, and Anthropic engineers would need to customize the AI for military use. The dispute comes at a delicate moment for Anthropic, which is preparing for an IPO and has invested significant resources in the national security business, according to Reuters.


Source: Reuters