
Anthropic CEO attacks OpenAI's Pentagon deal as "safety theater" while investors scramble for de-escalation

Image: Nano Banana Pro, prompted by THE DECODER

Anthropic CEO Dario Amodei attacks OpenAI's Pentagon contract as "80% safety theater" in a leaked memo and accuses the Trump administration of punishing his company for a lack of political loyalty. OpenAI hastily updates its contract, investors push for de-escalation, and a major tech industry group backs Anthropic. Meanwhile, Amodei is making a last-ditch attempt to negotiate directly with the Under Secretary of Defense for Research and Engineering.

Anthropic CEO Dario Amodei sent a 1,600-word internal memo to more than 2,000 employees calling OpenAI's Pentagon deal "safety theater." The memo, published by The Information on Wednesday, paints a picture of a conflict that goes far beyond a failed contract negotiation.

The backstory: On Friday, the Pentagon said it planned to designate Anthropic as a "supply chain risk" after the company refused to allow the military to use its AI for mass domestic surveillance and the operation of fully autonomous lethal weapons. That same day, OpenAI CEO Sam Altman announced that his company had struck a deal with the Pentagon to put its AI models on classified systems.

According to The Information, Amodei told his staff that Altman was "trying to make it more possible for the admin to punish us by undercutting our public support."


Amodei accuses Altman of systematic deception

In the memo, Amodei lays out a detailed picture of what he describes as Altman's pattern of behavior: publicly supporting Anthropic's red lines while signing a contract behind the scenes that effectively undermines them.

OpenAI's contract language allows the Pentagon to use its models for "all lawful purposes." While the deal includes a "safety layer," Amodei considers these safeguards largely ineffective. "Our general sense is that these kinds of approaches, while they don't have zero efficacy, are, in the context of military applications, maybe 20% real and 80% safety theater," he wrote.

Amodei's technical argument: an AI model does not "know" if there is a human in the loop, does not know if the data it is analyzing comes from US citizens or foreign sources, and "doesn't know if it's enterprise data given by customers with consent or data bought in sketchier ways." Model refusals are unreliable, and jailbreaks are common, often "as easy as just misinforming the model about the data it is analyzing."

Amodei is particularly harsh on technology partner Palantir, through which Anthropic had been serving US agencies. Palantir had offered both companies a classifier or machine learning system that would supposedly block certain applications. Amodei's verdict: Palantir "assumed that our problem was 'you have some unhappy employees, you need to offer them something that placates them or makes what is happening invisible to them, and that's the service we provide.'" However, OpenAI is not working with Palantir on Department of Defense-related work, according to a person with knowledge of the company's plans.


Why the Pentagon really rejected Anthropic, according to Amodei

In the memo, Amodei names what he sees as the real reasons for the tensions with the Trump administration. "The real reasons DoW and the Trump admin do not like us is that we haven't donated to Trump (while OpenAI/Greg have donated a lot)," Amodei writes, according to The Information. He is referring to OpenAI President Greg Brockman, who together with his wife reportedly donated $25 million to a Trump super PAC.

"We haven't given dictator-style praise to Trump (while Sam has), we have supported AI regulation which is against their agenda, we've told the truth about a number of AI policy issues (like job displacement), and we've actually held our red lines with integrity."

Amodei also reports that near the end of the negotiations, the Pentagon offered to accept Anthropic's terms if the company deleted a specific phrase about the "analysis of bulk acquired data." That was "the single line in the contract that exactly matched this scenario we were most worried about. We found that very suspicious." Anthropic refused.

OpenAI scrambles to patch its deal

The public reaction to the Pentagon deal was hardly flattering for OpenAI. Altman admitted, according to the Financial Times, that the hastily struck agreement "looked opportunistic and sloppy." Following backlash from employees and the public, OpenAI updated the contract on Monday with stronger language. According to Altman, the new clauses "prohibit deliberate tracking, surveillance or monitoring of US persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information." Intelligence agencies such as the NSA would also be excluded from the deal.

But lawyers and employees remain skeptical, the Financial Times reports. The terms "intentional," "deliberate" or "targeted" leave open the possibility that the government could surveil Americans "incidentally" or "unintentionally" using modern AI tools. Connie LaRossa, OpenAI's US national security policy lead, said Wednesday that the terms of safeguards "are still being negotiated." A three-month implementation period is set to address open questions, including technical safety mechanisms and specific deployment scenarios.

In the memo, Amodei also points to an underappreciated legal gap: the Pentagon policy requiring a human in the loop for firing weapons was "set during the Biden admin" and "can be changed unilaterally by Pete Hegseth, which is exactly what we are worried about. So it is not, for all intents and purposes, a real constraint."

Investors push for de-escalation as industry shows solidarity

While the tone behind the scenes grows sharper, Anthropic's investors are pushing for a resolution. According to Reuters, Amodei has spoken in recent days with Amazon CEO Andy Jassy, among others. Venture capital firms including Lightspeed and Iconiq have also been in contact with Anthropic executives and are looking for potential solutions, including through contacts in the Trump administration.

Some investors expressed frustration with Amodei's approach to negotiations. "It's an ego and diplomacy problem," one person briefed on the matter told Reuters. At the same time, Amodei can no longer be seen as capitulating "without alienating a core group of employees and consumers who have flocked to Anthropic because of his stance." Anthropic's chatbot Claude temporarily rose to No. 1 on the Apple App Store's free download rankings.

The broader industry also responded: The Information Technology Industry Council, whose members include Amazon, Nvidia, Apple and OpenAI, expressed concern in a letter to the Pentagon about the supply chain risk designation. OpenAI's LaRossa said at a conference: "We are actually working to have the secure risk designation removed from Anthropic ... That shouldn't be applied to a U.S. industry counterpart with such an important tool."

Anthropic makes a last-ditch attempt

According to the Financial Times, Amodei is now negotiating directly with Emil Michael, the Under Secretary of Defense for Research and Engineering, in a last-ditch attempt to reach a deal. Michael had publicly attacked Amodei as a "liar" with a "God complex" just days earlier. At a Morgan Stanley event on Tuesday evening, Amodei struck a more conciliatory tone: "I would start by saying that Anthropic and the Department of War have much more in common than we have differences." He added that "we're going to try our very best" to resolve the conflict.

The economic stakes for Anthropic are significant. The company's revenue run rate stands at roughly $19 billion, up from $14 billion just a few weeks ago, and about 80 percent of that revenue comes from enterprise customers. Several US government agencies, including the State Department, have already begun terminating their use of Anthropic's technology and switching to OpenAI, and Trump has ordered federal agencies to remove Anthropic technology within six months. Anthropic announced it would challenge any official supply chain risk designation in court. The US military, however, is currently using Claude extensively in Palantir's Maven system in the Iran war.

Amodei closes his memo with a striking assessment of the situation. The "attempted spin/gaslighting is not working very well on the general public or the media," he wrote, adding that Anthropic is seen as "the heroes." His main concern is "how to make sure it doesn't work on OpenAI employees." His verdict: "Due to selection effects, they're sort of a gullible bunch, but it seems important to push back on these narratives which Sam is peddling to his employees."
