Inside the Anthropic-Pentagon breakdown: mass surveillance, autonomous weapons, and a rival deal waiting in the wings
Key Points
- According to anonymous sources, the Pentagon wanted to use Anthropic's AI to analyze bulk data on American citizens, including location data, browsing histories, and credit card transactions.
- OpenAI CEO Sam Altman stepped in and negotiated a deal with the Pentagon within a day, even though he had publicly signaled solidarity with Anthropic just days earlier. OpenAI agreed that its AI may be used for all lawful purposes, but says mass surveillance and the direct control of autonomous weapons are excluded.
- Sarah Shoker, who led OpenAI's geopolitics team for about three years, says none of the leading AI companies have coherent policies for military use. The usage terms are kept deliberately vague to preserve flexibility for company leadership.
New reports from the New York Times and the Atlantic paint a detailed picture of the final hours of negotiations between Anthropic and the Pentagon. At the center: bulk data collection on American citizens, a rejected cloud workaround, and a parallel OpenAI deal that was already in the works.
According to the Atlantic, Anthropic learned on Friday morning that Hegseth's team was prepared to make a key concession. In earlier contract drafts, the Pentagon had repeatedly tried to soften its commitments with phrases like "as appropriate," leaving loopholes for reinterpretation down the line. Those words would now be removed.
But by Friday afternoon, it became clear that a core issue remained: the Pentagon wanted to use Anthropic's AI to analyze bulk data on American citizens. According to the New York Times, the Pentagon's chief technology officer Emil Michael - a former Uber executive - specifically pushed for permission to collect and process unclassified commercial data, including location data and browsing histories. The Atlantic report goes even further, listing chatbot queries, Google search histories, GPS movement data, and credit card transactions that could be cross-referenced with one another.
Anthropic countered by offering to make its technology available to the NSA for classified material under the Foreign Intelligence Surveillance Act (FISA). In return, the company demanded a legally binding guarantee that unclassified commercial data on Americans would be off-limits. The Pentagon refused.
Personal rivalries accelerated the collapse
On February 24, Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to a meeting at the Pentagon. According to the New York Times, the conversation lasted less than an hour, and the atmosphere was cold. Hegseth ended it with an ultimatum: if Anthropic didn't comply by 5:01 PM on Friday, the company would be classified as a "Supply Chain Risk" - a security designation for the defense supply chain. Hegseth also threatened to invoke the Defense Production Act, which could have forced Anthropic to cooperate. That threat was later dropped.
In the final minutes before the deadline, Michael demanded to speak with Amodei personally during a call with Anthropic executives. He was told Amodei was in a meeting with his leadership team and needed more time. Michael wasn't satisfied with that answer. At 5:14 PM, Hegseth declared Anthropic a "Supply Chain Risk" and ordered all military contractors, suppliers, and partners to cut ties with the company. The designation had previously been reserved for foreign companies and had never been used against a U.S. firm.
The three central figures in this story go way back in Silicon Valley. Amodei and Sam Altman, both 40, once worked together at OpenAI and are considered bitter rivals. Michael started as the Pentagon's chief technology officer in May 2025, after previously serving as a special assistant at the Pentagon under the Obama administration. He led the negotiations with Anthropic.
During the ongoing talks, Michael publicly attacked Amodei on X, calling him a "liar" with a "God-complex" and writing that Amodei wanted "nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk." According to the New York Times' sources, Michael ultimately favored Altman, who had been actively courting the Trump administration.
President Trump appeared to have planned the escalation in advance. He told Hegseth on Friday morning that he had already prepared a Truth Social post disparaging Anthropic and ordering all government agencies to end their partnerships within six months. Trump published the post at 3:47 PM, while negotiations were still underway. Even after that, both sides kept talking.
Why Anthropic rejected the cloud workaround for autonomous weapons
Beyond the surveillance dispute, autonomous weapons were another major sticking point. According to the Atlantic, Anthropic didn't categorically reject using its AI in autonomous weapons. The company even offered to work directly with the Pentagon on improving the reliability of such systems. Anthropic's position was that its models weren't reliable enough yet: just as self-driving cars are already safer than human drivers in some scenarios, combat drones could one day be more precise than a human operator. But the models hadn't reached that threshold, and deploying them prematurely could endanger civilians or friendly troops.
During negotiations, one proposal was to keep the AI models in the cloud rather than integrating them directly into weapons systems. Anthropic considered this approach and rejected it after a brief review. Its reasoning: in modern military architectures, the boundary between the cloud and the battlefield is a spectrum. Drones in combat zones can connect to cloud data centers through networked systems, and the Pentagon's "Joint Warfighting Cloud Capability" program is actively working to push computing resources closer to the fight. Whether a model sits on an Amazon Web Services server in Virginia or in a war zone is ethically irrelevant if it is making battlefield decisions.
OpenAI, meanwhile, uses exactly this cloud architecture as a selling point in its agreement with the Pentagon. The company argues that its cloud-only setup "fully" rules out autonomous weapons, since those would require edge deployment. Anthropic had evaluated the same solution and dismissed it as inadequate.
OpenAI's deal was already waiting in the wings
According to the New York Times, Altman called Michael on February 25, just one day after Hegseth's ultimatum to Amodei. Within a day, they had a rough framework in place. OpenAI accepted the Pentagon's demand that its AI could be used for "all lawful purposes," but negotiated the right to implement technical guardrails based on its own safety principles.
This is notable because earlier that same week, Altman had publicly stated that OpenAI would also refuse to let its models be used in autonomous weapons systems - effectively signaling solidarity with Anthropic. On Friday evening at 10 PM, while Anthropic's lawyers were already drafting a lawsuit against the Pentagon, Altman was on the phone with Michael finalizing the last details. He then announced the agreement on X. Hegseth shared Altman's announcement from his personal account.
On Saturday, Altman took questions on X and framed his position this way: OpenAI doesn't want the ability to weigh in on specific lawful military actions, but it does want the ability to use its expertise to design a safe system.
Former OpenAI staffer says usage policies are deliberately vague
Sarah Shoker, who led OpenAI's geopolitics team for about three years before leaving the company in June 2025, takes a more sober view of the conflict. Her takeaway: none of the leading AI companies have coherent policies for military use of their technology. The usage terms are kept deliberately vague to preserve flexibility for company leadership.
In a Substack post, Shoker points out that Anthropic's demands essentially align with existing U.S. law. Department of Defense Directive 3000.09 already requires "appropriate levels of human judgment" for autonomous weapons systems. She explains the Pentagon's decision to force a confrontation partly as a show of strength - agreeing to Anthropic's terms would have set a precedent for future negotiations.
Shoker is particularly critical of OpenAI's language around not using its technology to "direct" autonomous weapons systems. That word leaves considerable room for interpretation: OpenAI could accept its models being part of an autonomous weapons system, as long as they don't make the final decision.
Amodei's stance isn't as principled as it appears either, Shoker writes. In his statement from February 26, he left the door open to supporting fully autonomous systems without human oversight in the future - once the technology is reliable enough. When Shoker joined OpenAI in 2021, a complete ban on military use was in place. By 2024, that ban had been softened with deliberately unspecific language.
Anthropic sues as intelligence agencies push for a resolution
Anthropic has announced it will sue over the "Supply Chain Risk" designation. U.S. intelligence agencies, including the CIA, which actively uses Anthropic's AI technology, are pushing behind the scenes for both sides to reach an agreement, according to the New York Times. Some current and former officials still hope for a peace deal.
During the Pentagon's pilot program last year, Anthropic was the only AI company to provide its technology for classified systems.