
A new investigation shows how easily OpenAI's Sora 2 video model can generate convincing fake footage, making targeted disinformation campaigns much simpler.


Back in 2017, Google's Ian Goodfellow called it a "little bit of a fluke, historically" that people could trust videos as proof that something actually happened. Goodfellow, who helped invent GANs—the early form of generative AI behind the first deepfake porn videos—saw this as the end of an era. "In this case AI is closing some of the doors that our generation has been used to having open," Goodfellow said.

Back then, even swapping a face in a video took hours and required real technical expertise. Today, video generators like Sora 2 have left GANs behind for powerful multimodal transformer models. With a single line of text, anyone can generate a realistic, audio-enabled scene up to 25 seconds long.

The result: realistic fake video is easier to produce, and more convincing, than ever. A recent NewsGuard investigation shows just how easily Sora 2 can be used to spread disinformation.


Fake news on demand

For its test, NewsGuard prompted Sora 2 to generate videos for 20 known false claims circulating online between September 24 and October 10, 2025. Sora 2 produced convincing, news-style videos for 16 of them—often featuring fake anchors—and managed to create 11 of these on the first try.

Video: Sora 2 prompted by NewsGuard

The prompts included everything from fake reports about pro-Russian ballots destroyed in Moldova, to ICE agents arresting a toddler, to a fictional Super Bowl boycott by Coca-Cola over Bad Bunny's appearance. Sora 2 even turned the false rumor that undocumented migrants in the US could no longer send money abroad into a credible-looking video.

Video: Sora 2 prompted by NewsGuard

Five of the tested claims were part of Russian disinformation campaigns. Sora 2 generated convincing videos for all five, each in less than five minutes.


Video: Sora 2 prompted by NewsGuard

OpenAI says Sora blocks violent content, depictions of public figures, and misleading videos. Every clip is supposed to carry a visible, moving watermark and C2PA metadata, and the company uses internal tools to detect abuse.
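Anyone who wants to check a clip can inspect that C2PA provenance with open tooling. Below is a minimal sketch, assuming the Content Authenticity Initiative's open-source c2patool CLI is installed and on the PATH; the file name clip.mp4 is a hypothetical placeholder, and the exact JSON layout may differ between tool versions.

```python
import json
import subprocess

def read_c2pa_manifest(path: str):
    """Return the C2PA manifest store embedded in a media file, or None.

    Assumes the `c2patool` CLI from the Content Authenticity Initiative
    is installed; invoked with just a file path, it prints the manifest
    store as JSON and exits nonzero when no manifest is embedded.
    """
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0:
        return None  # no manifest: provenance stripped, or never embedded
    return json.loads(result.stdout)

manifest = read_c2pa_manifest("clip.mp4")  # hypothetical file name
if manifest is None:
    print("No C2PA metadata found - absence proves nothing by itself.")
else:
    # The JSON shape can vary by tool version; recent builds key each
    # manifest under "manifests" and record the signing "claim_generator".
    for entry in manifest.get("manifests", {}).values():
        print(entry.get("claim_generator", "unknown generator"))
```

The caveat implied by NewsGuard's findings applies here: missing metadata is not evidence either way, since platforms and simple re-encoding routinely strip it on upload.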

Video: Sora 2 prompted by NewsGuard

But NewsGuard found these safeguards are easy to bypass. The watermark, for example, can be removed in under four minutes with free online tools. The altered videos are only slightly blurred and still look real to most viewers. OpenAI did not respond to NewsGuard's questions about how easily Sora’s watermark can be removed.


Filters targeting public figures are inconsistent. Prompts with names like "Zelensky" are blocked, but at least once, a vague description like "Ukrainian war chief" produced a lookalike video. Later attempts to replicate this on October 15 and 16 using multiple Sora accounts were unsuccessful.

Video: Sora 2 prompted by NewsGuard

NewsGuard also tried to bypass Sora's filters for Donald Trump and Elon Musk with phrases like "a former reality TV star turned president" and "billionaire tech owner from South Africa," but these prompts were blocked as well.

Some Sora-generated videos have already gone viral, watermark and all. In early October 2025, AI-generated clips showing supposed antifa protesters being pepper-sprayed by police spread widely across social media. Millions of users shared the videos, believing they depicted real events, according to NewsGuard.

With such a low barrier to entry—just a simple text prompt—Sora 2 is especially appealing to anyone aiming to spread disinformation. NewsGuard points to authoritarian regimes, state-backed propaganda networks, conspiracy theorists, and financially motivated actors as likely users.

OpenAI and Sora are not alone. Other companies like Google, with its Veo 3 model, and Chinese developers are building similar video generators. But OpenAI’s reach is on another level: the Sora 2 app was downloaded over a million times in just five days, and anyone can use it for free. That scale puts the bulk of the responsibility on OpenAI.


How OpenAI handles this responsibility is already under scrutiny. After Sora-generated videos targeted historical figures like Martin Luther King Jr. with racist slurs and other offensive content, OpenAI blocked such depictions, yet downplayed the harm by citing "strong free speech interests in depicting historical figures."

Summary
  • NewsGuard's analysis found that OpenAI's Sora 2 video generator can produce realistic videos based on false claims in 16 out of 20 test cases, often succeeding on the first try and usually within five minutes.
  • Sora 2's built-in protections—such as watermarks, C2PA metadata, and filters for violence or public figures—were easily bypassed; watermarks could be removed within minutes, and filters were partially avoided through alternative wording.
  • Some of these AI-generated videos spread widely online and were viewed millions of times as if they were authentic footage, making the tool especially appealing to those aiming to spread disinformation, according to NewsGuard.
Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.