
Iran is increasingly turning to cyber-based influence operations and generative AI to manipulate the 2024 US presidential elections, according to a recent report from Microsoft. Russia and China also remain active threats.


Microsoft's Threat Analysis Center (MTAC) warns that Iranian actors are ramping up their use of cyber influence operations and generative AI technologies to target the 2024 US elections.

Since June 2024, Iran has been laying the groundwork for influence operations aimed at US targets, including initial cyber reconnaissance and the seeding of online personas and websites into the information space.

Image: MTAC

Microsoft has identified several Iranian groups involved in these activities. The Sefid Flood group has been preparing for influence operations since the Iranian New Year festival in late March and specializes in impersonating social and political activists to sow chaos and undermine trust in authorities.


Another group called Mint Sandstorm, linked to Iran's Revolutionary Guards, attempted to phish a senior staffer of a presidential campaign in June. The group had carried out similar attacks before the 2020 election.

AI plagiarism for influence

Microsoft also reports on an Iranian network called Storm-2035 that runs four websites posing as news portals. These target US voter groups at both ends of the political spectrum, spreading polarizing messages on topics such as the presidential candidates, LGBTQ rights, and the war in Ukraine.

The report says Iranian actors are using AI-based services to plagiarize and rephrase content from US publications. They employ SEO plugins and other AI-based tools to generate article headlines and keywords.

In addition to Iran, Russian and Chinese actors also remain active. Microsoft is tracking three Russian influence actors running campaigns related to the US elections. Since April, the Storm-1516 group has focused on producing fake videos that spread scandalous claims.

Image: MTAC

Chinese actors like Taizi Flood and Storm-1852 have expanded their activities to new platforms and evolved their tactics. They use hundreds of accounts to stir outrage over pro-Palestinian protests at US universities and are increasingly turning to short videos on political topics.


AI hype in influence ops fading

Microsoft stresses that the actors' use of generative AI has had limited impact so far, and many are instead returning to tried-and-true methods: "In total, we’ve seen nearly all actors seek to incorporate AI content in their operations, but more recently many actors have pivoted back to techniques that have proven effective in the past—simple digital manipulations, mischaracterization of content, and use of trusted labels or logos atop false information."

Image: MTAC

Full automation of political influence therefore does not yet appear feasible, but generative AI can support and scale these efforts. OpenAI recently identified and banned several such groups.

Summary
  • Microsoft warns in a new report that Iranian actors are ramping up cyber-enabled influence operations and using generative AI technologies to sway the 2024 US presidential elections. According to the Microsoft Threat Analysis Center (MTAC), several Iranian groups have been identified.
  • One group, Sefid Flood, imitates social and political activists. Another, Mint Sandstorm, conducts phishing attacks against campaign staffers. A network called Storm-2035 operates fake news sites that spread polarizing messages.
  • In addition to Iran, Russian and Chinese actors also remain active. Microsoft observes that despite experimenting with AI-generated content, many actors are returning to tried-and-true methods like simple digital manipulation, which have proven more effective.