Content
summary Summary

A major security flaw in Microsoft 365 Copilot allowed attackers to access sensitive company data with nothing more than a specially crafted email—no clicks or user interaction required. The vulnerability, named "EchoLeak," was uncovered by cybersecurity firm Aim Security.

Ad

Copilot, Microsoft's AI assistant for Office apps like Word, Excel, PowerPoint, and Outlook, is designed to automate tasks behind the scenes. But that very capability made it vulnerable: a single email containing hidden instructions could prompt Copilot to search internal documents and leak confidential information—including content from emails, spreadsheets, or chats.

Because Copilot automatically scans emails in the background, the assistant interpreted the manipulated message as a legitimate command. The user never saw the instructions or knew what was happening. Aim Security describes it as a "zero-click" attack—a type of vulnerability that requires no action by the victim.

Source: Aim Security

Ad
Ad

Microsoft patches the vulnerability

Microsoft told Fortune that the issue has now been fixed and that no customers were affected. "We have already updated our products to mitigate this issue, and no customer action is required. We are also implementing additional defense-in-depth measures to further strengthen our security posture," a company spokesperson said. Aim Security reported the discovery responsibly.

Still, Aim says it took five months to fully resolve the issue. Microsoft received the initial warning in January 2025 and rolled out a first fix in April, but new problems surfaced in May. Aim held back its public disclosure until all risks were eliminated.

The incident highlights the risks that come with deploying AI agents and generative AI in business environments. Adir Gruss, CTO of Aim Security, sees it as more than just a simple bug—he calls it a structural problem in the architecture of AI agents. The flaw is an example of what's known as an "LLM scope violation," where a language model is tricked into processing or leaking information outside its intended permission boundaries.

Gruss warns that similar vulnerabilities could affect other AI agents, including Salesforce's Agentforce and those built on Anthropic's MCP. If I led a company looking to deploy an AI agent in production today, "I would be terrified," Gruss told Fortune. He compares the situation to the 1990s, when software design flaws led to widespread security problems.

The root of the problem, according to Gruss, is that current AI agents handle both trusted and untrusted data in the same processing step. Fixing this will require either a new system architecture or, at the very least, a clear separation between instructions and data sources. Early research on solutions is already underway.

Ad
Ad
Join our community
Join the DECODER community on Discord, Reddit or Twitter - we can't wait to meet you.
Recommendation
Support our independent, free-access reporting. Any contribution helps and secures our future. Support now:
Bank transfer
Summary
  • A security flaw called "EchoLeak" in Microsoft 365 Copilot let attackers access sensitive company information using a specially crafted email, without any clicks or user action required; the AI assistant would read hidden instructions and leak internal content from emails, spreadsheets, or chats.
  • Microsoft has patched the vulnerability and stated that no customers were affected, but it took five months to fully resolve after Aim Security’s initial report in January 2025, with public disclosure delayed until all risks were fixed.
  • Experts warn this incident exposes deep architectural issues in how AI agents process trusted and untrusted data, raising concerns that similar vulnerabilities could affect other enterprise AI agents unless systems are redesigned to better separate instructions from data sources.
Max is the managing editor of THE DECODER, bringing his background in philosophy to explore questions of consciousness and whether machines truly think or just pretend to.
Join our community
Join the DECODER community on Discord, Reddit or Twitter - we can't wait to meet you.