
OpenAI says it has uncovered and stopped a covert Iranian operation that used ChatGPT to create content about the US presidential race and other topics.

The company suspended multiple ChatGPT accounts linked to an Iranian influence campaign called Storm-2035. Microsoft also reported this week on AI-powered manipulation attempts from Iran and other countries.

According to OpenAI, the operation used ChatGPT to generate content on various subjects, including comments about candidates from both parties in the US presidential election. This material was then spread via social media accounts and websites.

Like the covert campaigns OpenAI reported in May, this operation failed to gain much traction, the company said. Most of the social media posts it found received few or no likes, shares, or comments, and the articles showed no signs of being shared on social media.

Image: OpenAI

The campaign used ChatGPT in two ways: to write longer articles and to draft shorter social media posts. In the first approach, it generated articles about US politics and world events and published them on five websites posing as progressive and conservative news outlets.

The second approach involved writing short commentaries in English and Spanish and posting them on social media. OpenAI identified 12 accounts on X and one on Instagram involved in this operation. Some X accounts posed as progressive, while others posed as conservative.

The AI-generated content focused mainly on the Gaza conflict, Israel at the Olympics, and the US presidential election. To a lesser extent, it covered Venezuelan politics, Latinx rights in the US (in both Spanish and English), and Scottish independence.

The mere existence of generative AI helps those who spread false information

AI can be used for political propaganda in several ways, from mass-producing manipulative texts to generating believable fake voices and images.

However, the simplest deceptive use of AI is claiming others are using it. US presidential candidate Donald Trump recently demonstrated this by alleging that photos of an enthusiastic crowd at a Kamala Harris event were "A.I.'d" - meaning manipulated with AI. This is at least the second time that Trump has used this tactic.


This tactic confirms a 2017 prediction by Ian Goodfellow, inventor of the GAN technology underlying deepfakes, that people would no longer be able to trust images and videos online. If anything can be faked, nothing is unquestionably true, which undermines trust in all audiovisual media.

Summary
  • OpenAI uncovered and stopped a covert influence operation from Iran that used ChatGPT to generate content on several topics, including the US presidential campaign, the Gaza conflict, and politics in Venezuela.
  • The operation used ChatGPT to create longer articles for websites masquerading as news portals, as well as shorter commentaries in English and Spanish for social media accounts portraying themselves as progressive or conservative.
  • Despite attempts to influence public opinion, the operation did not generate a significant response, according to OpenAI. However, the existence of generative AI makes it easier to cast doubt on the authenticity of content and undermine trust in the media, even when no manipulation has taken place. Donald Trump, for example, has exploited this effect at least twice.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.