
AI-generated war footage is going viral while real satellite imagery disappears from public view

Nano Banana Pro prompted by THE DECODER

Key Points

  • In the first two weeks of the US-Iran conflict, the New York Times identified over 110 AI-generated fake images and videos that reached millions of viewers - the majority serving pro-Iranian propaganda, according to analytics firm Cyabra.
  • Satellite providers like Planet Labs extended their image delay for the region from four days to two weeks, making independent verification harder and creating an opening for AI fakes to fill the gap.
  • German outlets including Der Spiegel, Zeit, and Süddeutsche Zeitung also published AI-generated images that had been funneled into image databases through an Iranian agency network.

The New York Times identified more than 110 AI-generated fake videos and images about the Middle East war in just two weeks. Iran appears to be deploying these forgeries as a deliberate information weapon - and it's getting harder for independent observers to tell fact from fiction.

A New York Times investigation documents the scale of AI-generated disinformation surrounding the war between the U.S., Israel, and Iran: more than 110 unique fake images and videos were identified on platforms like X, TikTok, and Facebook during the first two weeks of the conflict. Combined, they reached an audience of millions.

The fakes cover every aspect of the war: fabricated explosions in Tel Aviv, streets that were never attacked, protesting soldiers who don't exist. Modern AI tools now let virtually anyone create convincing war simulations for little or no money. According to a study by analytics firm Cyabra, the majority of these AI fakes serve pro-Iranian propaganda. Media researcher Marc Owen Jones of Northwestern University says the goal is to exaggerate Iran's military strength and make the war appear far more devastating for America's allies than it actually is.

One particularly viral fake video appeared to show missile strikes hitting the Tel Aviv skyline. Another case involved the USS Abraham Lincoln: after Iran's Revolutionary Guards claimed they had attacked the aircraft carrier, AI-generated images of a burning ship flooded social media. The U.S. said the attack failed and the ship was undamaged. While real combat footage is typically shot from a distance and at night, the Times notes that the AI fakes look more like Hollywood action movies, complete with mushroom clouds and glowing hypersonic missiles.


According to Reuters, President Trump accused Iran of using AI as a "disinformation weapon" and, without providing evidence, accused Western media of "close coordination" with Tehran. FCC Chair Brendan Carr even threatened broadcasters with license revocations over their war coverage.

The disinformation is also reaching German newsrooms directly. Der Spiegel removed several images from its Iran coverage that were very likely AI-generated. The photos came from the agency SalamPix and made their way into German image databases via the French agency Abaca Press. In addition to Der Spiegel, outlets including Zeit, Süddeutsche Zeitung, WDR, and Stern were affected. An Iranian photographer admitted to feeding images from an Iranian Revolutionary Guards platform into the supply chain without labeling them as such.

When real footage disappears, AI fakes fill the gap

Making matters worse, independent verification is becoming increasingly difficult. Open-source intelligence (OSINT), the systematic analysis of publicly available sources like satellite imagery by researchers and journalists, had become a critical tool over the past decade for cutting through the fog of war. But that tool is now losing its edge.

According to AFP, Planet Labs, which operates the world's largest fleet of Earth observation satellites, extended the delay for high-resolution imagery of the region from four days to two weeks. The blackout covers all of Iran, allied military bases, and the Gulf states. Industry leader Vantor (formerly Maxar) is also blocking images of U.S. and allied bases. Both companies told the Washington Post they were not acting on government orders.


Critics argue that these restrictions could shape public narratives about the conflict and reduce transparency around attacks on U.S. bases. The gaps in real-time monitoring also make it easier for disinformation and AI-generated content to spread unchecked, since satellite imagery often serves as proof of successful strikes. The regime-aligned Tehran Times, for example, published what it claimed was a satellite image showing the destruction of a U.S. radar installation in Qatar - but it turned out to be an AI-manipulated Google Earth image.

OSINT analyst Tal Hagin summed up the dilemma for AFP: in the fog of war, it is hard to assess the success of enemy attacks. OSINT emerged as a solution, using publicly available satellite imagery to get around censorship in countries like Iran. But that trust is now being exploited by disinformation actors. Fake OSINT accounts are popping up on social media, passing off AI-generated satellite images as real intelligence and undermining the work of legitimate investigators.
