
A single manipulated document was enough to get ChatGPT to automatically extract sensitive data—without any user interaction.


Security researchers at Zenity demonstrated that users could be compromised simply by having a document shared with them; no action was required on their part for data to leak. In their proof of concept, a Google Doc containing an invisible prompt—white text in font size 1—was able to make ChatGPT access data stored in a victim's Google Drive. The attack exploited OpenAI's "Connectors" feature, which links ChatGPT to services like Gmail or Microsoft 365.
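The concealment trick described here is mundane on the document side: white text at font size 1 is invisible to a human reader but fully visible to a model that ingests the raw document content. As a purely illustrative sketch (not Zenity's proof of concept and not OpenAI's mitigation), the following Python snippet shows how a connector pipeline could scan a Google Doc via the Google Docs API and flag text runs that are styled to be effectively invisible before the document ever reaches an LLM. The thresholds and credential handling are assumptions.

```python
# Sketch: flag "invisible" text (white color or tiny font) in a Google Doc
# before its contents are handed to an LLM connector. Thresholds are
# illustrative assumptions, not a published detection rule.
from googleapiclient.discovery import build
from google.oauth2.credentials import Credentials


def find_hidden_text(doc_id: str, creds: Credentials) -> list[str]:
    docs = build("docs", "v1", credentials=creds)
    doc = docs.documents().get(documentId=doc_id).execute()
    suspicious = []
    for element in doc.get("body", {}).get("content", []):
        for item in element.get("paragraph", {}).get("elements", []):
            text_run = item.get("textRun")
            if not text_run:
                continue
            style = text_run.get("textStyle", {})
            size = style.get("fontSize", {}).get("magnitude", 11)
            rgb = (style.get("foregroundColor", {})
                        .get("color", {})
                        .get("rgbColor", {}))
            # White text (all RGB channels near 1.0) or a 1-2 pt font size
            # strongly suggests the run is meant to be unreadable to humans.
            is_white = all(rgb.get(c, 0) > 0.95 for c in ("red", "green", "blue"))
            if size <= 2 or is_white:
                suspicious.append(text_run.get("content", "").strip())
    return suspicious
```

A connector could strip or quarantine runs flagged this way, though styling checks alone would not catch every way of smuggling instructions into a shared file.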

If the manipulated document appears in a user's Drive, either through sharing or accidental upload, even a harmless request like "Summarize my last meeting with Sam" could trigger the hidden prompt. Instead of providing a summary, the model would search for API keys and send them via URL to an external server.
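Because the demonstrated exfiltration path pushed stolen values out through a URL, one generic countermeasure is an output-side guard that inspects any URL a model wants to render or fetch and blocks those whose query strings look like credentials. The sketch below is a simplified illustration of that idea; the regex patterns and blocking policy are assumptions, not OpenAI's actual fix.

```python
# Sketch of an output-side guard: before rendering or fetching any URL found
# in a model response, check whether its query parameters carry secret-looking
# values (e.g. API keys). Patterns are rough illustrations only.
import re
from urllib.parse import urlparse, parse_qs

SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),      # OpenAI-style keys
    re.compile(r"AKIA[0-9A-Z]{16}"),         # AWS access key IDs
    re.compile(r"[A-Za-z0-9_\-]{40,}"),      # long high-entropy tokens
]


def url_leaks_secrets(url: str) -> bool:
    """Return True if any query parameter value matches a secret-like pattern."""
    query = urlparse(url).query
    for values in parse_qs(query).values():
        for value in values:
            if any(p.search(value) for p in SECRET_PATTERNS):
                return True
    return False


def flag_response_urls(response_text: str) -> list[str]:
    """Collect URLs in a model response that should be blocked from rendering."""
    urls = re.findall(r"https?://\S+", response_text)
    return [u for u in urls if url_leaks_secrets(u)]
```

A client that refuses to auto-load images or auto-follow links flagged this way narrows the zero-click channel even when a hidden prompt does make it into the model's context.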

Growing use of LLMs in the workplace creates new attack surfaces

OpenAI was notified early and quickly patched the specific vulnerability demonstrated at the Black Hat conference. The exploit was limited in scope: entire documents could not be transferred, and only small amounts of data could be exfiltrated.


Despite the fix, the underlying attack method remains technically possible. As LLMs are increasingly integrated into workplace environments, researchers warn that the attack surface continues to expand.

Summary
  • Security researchers at Zenity showed that a single manipulated Google Doc, containing an invisible prompt, could make ChatGPT automatically extract and leak sensitive data from a user's Google Drive—without the user needing to take any action.
  • The attack worked by exploiting OpenAI's "Connectors" feature, which links ChatGPT to services like Gmail and Microsoft 365. When the manipulated document appeared in a user's Drive, even a simple request could trigger the extraction and exfiltration of data such as API keys.
  • OpenAI responded quickly to patch the specific vulnerability, but researchers note that as LLMs become more common in workplaces, similar attack methods remain technically possible and the risk of such exploits is likely to grow.