
US military uses Anthropic's Claude for AI-driven strike planning in Iran war

Image: Nano Banana Pro, prompted by THE DECODER

Key Points

  • The US military is using generative AI on a large scale for the first time in the war against Iran: Anthropic's Claude analyzes real-time data in Palantir's Maven system and helped generate around 1,000 prioritized targets on the first day.
  • Experts warn that AI compresses strike planning from weeks to minutes, risking what researchers call "cognitive off-loading" - decision-makers feeling detached from the consequences of their choices.
  • Just hours before the bombing began, the Trump administration banned Anthropic from government systems. Yet the military continues to use Claude because, according to the Washington Post, it has become too important to remove.

In the war against Iran, the US military is using generative AI at scale for target selection and strike planning for the first time. And of all the available models, it is relying on the one from the company Washington just banned.

The US military has deployed advanced generative AI on a large scale in combat operations for the first time during the ongoing war against Iran. According to reports by the Guardian and the Washington Post, Anthropic's AI model Claude is embedded in the so-called Maven Smart System built by war-tech company Palantir. The system generates insights from a massive volume of classified data - satellites, surveillance feeds, and other intelligence - in real time.

According to the Washington Post, the system suggested hundreds of targets, issued precise location coordinates, and prioritized them by importance. The Guardian adds that Palantir's system also recommends specific weaponry, factoring in stockpile levels and past performance against similar targets, and uses automated reasoning to evaluate the legal grounds for a strike. In just the first 24 hours, the US military struck roughly 1,000 targets. The operations were carried out jointly with Israeli forces. Israeli missiles killed Iran's supreme leader, Ayatollah Ali Khamenei.

AI compresses weeks of planning into minutes

Academics call it "decision compression": AI is collapsing the planning time for complex strikes from days or weeks to minutes or seconds. "The AI machine is making recommendations for what to target, which is actually much quicker in some ways than the speed of thought," said Craig Jones, a senior lecturer in political geography at Newcastle University and an expert in kill chains, to the Guardian.


Paul Scharre, executive vice president at the Center for a New American Security, told the Washington Post: "The key paradigm shift is that AI enables the U.S. military to develop targeting packages at machine speed rather than human speed." The downside: "AI gets it wrong. … We need humans to check the output of generative AI when the stakes are life and death."

David Leslie, professor of ethics, technology, and society at Queen Mary University of London, who has observed demonstrations of AI military systems, warned in the Guardian of "cognitive off-loading" - humans tasked with making a strike decision can feel detached from its consequences because the effort to think it through has been made by a machine.

On Saturday, a missile strike hit a school in southern Iran and killed 165 people, many of them children, according to state media. The exact death toll has not been independently verified, as international media have had almost no access to the area. The school appeared to be close to a military barracks. The UN called it "a grave violation of humanitarian law." The US military has said it is looking into the reports.

Banned from government yet too embedded to remove

The situation creates a striking paradox: just hours before the bombing began, the Trump administration announced it would ban Anthropic from government systems. The move followed a bitter fight between the company and the military over the use of its tools for mass domestic surveillance and fully autonomous weapons. The Defense Department was given six months to phase the tools out, and in the meantime the military will continue using the technology while it waits for a replacement.


According to the Washington Post, military commanders have become so dependent on the AI system that even if Anthropic CEO Dario Amodei demanded the military stop using it, the Trump administration would invoke government powers to retain the technology until a replacement is ready. As of last May, over 20,000 military personnel were using Maven. A Georgetown University study of the system's use by the Army's 18th Airborne Corps found that one artillery unit, with a team of just 20 people, was able to do the work of 2,000 staff.

Anthropic's competitors are already lining up to fill the gap. OpenAI signed its own deal with the Pentagon last week.
