An attacker used AI to impersonate Secretary Rubio and contact high-ranking officials
An unknown individual recently impersonated U.S. Secretary of State Marco Rubio, contacting at least five high-ranking officials and dignitaries.
According to an internal State Department memo obtained by the Washington Post, the attacker used AI-generated voice and text messages to closely mimic Rubio's speech and writing style, apparently in an effort to gain access to sensitive information or digital credentials.
The attacker reached targets via text message and the encrypted messaging app Signal, using the display name "Marco.Rubio@state.gov", an address with no connection to Rubio's actual accounts. Three foreign ministers, a U.S. governor, and a member of Congress were contacted through these channels. In at least two cases, voicemails were left; in another, the attacker sent a Signal chat invitation by text.
The memo also notes that other State Department employees were approached via email. The attacker's identity is still unknown. The State Department has launched a full investigation and plans to introduce additional security measures. No details have been released about the contents of the messages, the names of those targeted, or whether the attempts were successful.
The FBI previously warned in May about an ongoing campaign using AI-generated voice messages to target senior U.S. officials and their contacts, aiming to steal information or money. The agency advised not to trust messages that appear to come from high-ranking officials without verification. In a similar incident last year, AI-generated robocalls imitating President Biden were used in New Hampshire in an attempt to keep voters away from the primaries.
Weaknesses in Government Communications
Hany Farid, a digital forensics researcher at the University of California, Berkeley, told the Washington Post that such impersonation attacks do not require advanced technology. He noted that many government employees are lax about data security, and called the use of Signal for official communication especially concerning.
Audio samples lasting just 15 to 20 seconds are enough to clone a public figure's voice with online services: attackers upload the sample, check a box claiming they have permission to use the voice, and type out the message they want delivered. Voicemail is especially effective for these attacks because it doesn't require real-time interaction with the victim.
Open-source tools like Chatterbox are lowering the barrier for voice cloning even further.