
Microsoft and rival AI researchers unite to back Anthropic in its escalating legal battle against the Pentagon

Nano Banana Pro prompted by THE DECODER

Key Points

  • In the legal dispute between Anthropic and the US Department of Defense, Microsoft, 37 employees of OpenAI and Google DeepMind, 22 former high-ranking military officials, and civil rights organizations have filed amicus curiae briefs supporting Anthropic's motion for a preliminary injunction.
  • Microsoft argues that Anthropic's products are integrated as a fundamental layer in its own military offerings and that immediate implementation of the classification could harm the American armed forces at a critical moment.
  • The broad coalition of supporters signals significant cross-industry concern about the potential consequences of the government's actions against Anthropic.

A broad coalition is backing Anthropic in its legal fight with the US Department of Defense.

Microsoft, dozens of OpenAI and Google employees, former military leaders, and civil rights organizations have all filed amicus curiae briefs with the federal court in San Francisco, supporting the AI company's motion for a preliminary injunction.

Microsoft says the classification undermines its own military work

Microsoft, one of the Pentagon's largest contractors and a business partner of Anthropic, argues in its brief that the classification directly harms the company and other government contractors.

Anthropic's products serve as a "foundational layer" in Microsoft's own offerings to the US military. Enforcing the classification immediately would force contractors to reconfigure existing products on the spot, which "could potentially hamper U.S. warfighters at a critical point in time."


Microsoft explicitly backs Anthropic's position: "Microsoft’s position is that AI should be focused on lawful and appropriately guarded use cases. For example, AI should not be used to conduct domestic mass surveillance or put the country in a position where autonomous machines could independently start a war," the company writes.

Microsoft also highlights how unprecedented the move is. The supply chain risk authority under 10 U.S.C. § 3252 has only been invoked publicly once before, against a foreign company (Acronis AG). It has never been used against an American company. Setting that kind of precedent would make companies fundamentally rethink their willingness to work with the government, Microsoft warns.

Rival employees rally behind Anthropic

Another letter comes from 37 employees at direct competitors OpenAI and Google DeepMind. Signatories include Jeff Dean, Chief Scientist at Google, along with senior researchers and engineers at both companies. They're writing in their personal capacity, not on behalf of their employers.

Their core argument is that the technical concerns behind Anthropic's "red lines" are well-founded and widely recognized across the research community. Today's AI systems hallucinate, their decision-making is opaque even to the people who built them, and mistakes in lethal contexts can't be undone.


A child’s tricycle can physically be driven on an interstate, but we do not allow it because of the risks of using the technology in that environment. Mass domestic surveillance and autonomous lethal weapons systems are the equivalently reckless domain for today’s frontier models.

Brief of amici curiae employees of OpenAI and Google in their personal capacities

The employees also flag what they call a "panopticon effect": just knowing that AI-powered mass surveillance exists changes how people behave. "The journalist thinks twice before calling a source inside the military, knowing the call could be logged and cross-referenced," the amici write.

The same goes for academics and activists. "None of these people have been targeted. None have been punished. But their behavior has already been constrained, and with it the democratic functions they serve—a free press, political organizing, open intellectual inquiry—have been quietly degraded."

The classification also hurts the broader US AI ecosystem, the signatories argue. "By silencing one lab, the government reduces the industry’s potential to innovate solutions."

Former military leaders and civil rights groups say the rule of law is at stake

A separate brief carries signatures from 22 former senior military leaders and officials. Among them: General Michael V. Hayden, former director of the CIA and NSA, former Secretaries of the Navy Richard Danzig and Carlos Del Toro, former Secretaries of the Air Force Frank Kendall and F. Whitten Peters, and former Secretary of the Army Christine Wormuth.

They argue that the classification undermines the military's commitment to the rule of law and erodes public trust that the armed forces operate within legal boundaries. Congress designed the supply chain risk authority narrowly, they argue, targeting foreign adversaries trying to sabotage or infiltrate systems. Wielding it as a retaliatory tool against an American company over a policy disagreement is unprecedented and dangerous.

"A military grounded in the rule of law is weakened, not strengthened, by government actions that lack legal foundation," the brief states.

A civil rights coalition of FIRE, the Electronic Frontier Foundation, the Cato Institute, Chamber of Progress, and the First Amendment Lawyers Association argues the classification violates the First Amendment. Anthropic's design decisions for Claude, including the "Usage Policy" and "Claude's Constitution," are protected speech. The Pentagon is essentially demanding that Anthropic change its values and ship Claude without restrictions, which amounts to compelled speech.

The retaliatory intent is barely disguised, the organizations write. Secretary of Defense Pete Hegseth has called Anthropic "unpatriotic" and "fundamentally incompatible with American principles." Under Secretary Emil Michael called Anthropic CEO Dario Amodei a "liar" with a "God complex." President Trump has labeled Anthropic "radical," "woke," and "left-wing nut jobs."

"On the spectrum of dangers to free expression, there are few greater than allowing the government to change the speech of private actors in order to achieve its own conception of speech nirvana," the organizations wrote, quoting the Supreme Court.
