
An OpenAI analysis details how several partly state-backed actors have tried to run online disinformation campaigns using AI systems. So far, these campaigns have gained little traction.


OpenAI has released a report detailing how groups from Russia, China, Iran, and Israel, some of them state-backed, tried to use AI models to spread false information online. According to OpenAI, the campaigns reached few people and generated little engagement.

The report identifies five operations: two from Russia, one from China, one from Iran, and one run by a commercial company in Israel called STOIC.

All of the groups used AI to generate content or to work more efficiently, but none relied on AI alone. They mixed AI-generated material with conventional formats such as manually written text or copied memes.


For instance, a previously unknown Russian network called "Bad Grammar" used OpenAI models to create and post political comments on Telegram focused on Russia, Ukraine, the US, Moldova, and the Baltic States.

The Chinese operation "Spamouflage" generated content praising China and attacking its critics. The Iranian "International Union of Virtual Media" (IUVM) created pro-Iran, anti-Israel, and anti-US articles.

AI-spoofed social media engagement

The campaigns also used AI to fake engagement, for example by generating replies to their own posts. According to the report, none of the networks attracted meaningful authentic interaction. OpenAI says the operations did not rise above category 2 on the six-level "Breakout Scale" used to assess the impact of influence operations.

Despite the new technology, human error persisted: operators accidentally posted OpenAI system messages that exposed their content as AI-generated. Incorrect captions and mixed-up topics also slipped through.

The report stresses the importance of "defensive design", meaning safeguards built directly into AI systems. OpenAI also uses AI models themselves to detect suspicious activity more efficiently. Because the campaigns were designed to spread content across many platforms, sharing information with industry peers is key to stopping these actors, the company says.


In mid-February, Microsoft and OpenAI reported that state-affiliated actors from China, Iran, North Korea, and Russia were using OpenAI's services to generate content, write code, or conduct cybersecurity research. The companies said all identified accounts had been shut down.

Summary
  • OpenAI released a report on partially state-sponsored actors from Russia, China, Iran, and Israel that are using AI models for covert online propaganda, but have so far achieved low reach and engagement.
  • The campaigns mix AI-generated content with traditional formats such as handwritten copy or copied memes. They have also used AI to generate fake engagement, but with little authentic interaction.
  • Human errors, such as accidentally posting system messages or incorrect captions, have exposed the content as AI-generated products.