OpenAI CEO Sam Altman is warning users not to rely too heavily on the new ChatGPT agent, especially when it comes to tasks involving sensitive or personal data.
ChatGPT agent is OpenAI's first system built to handle multi-step tasks autonomously. According to Altman, the agent can break down requests into smaller steps, use external tools, and carry out actions on its own—moving beyond earlier products like Deep Research and Operator.
But Altman says users shouldn't assume the technology is safe for everything. Even with "a lot of safeguards and warnings into it," he says, there are still risks that can't be predicted. He specifically advises against using the agent for important tasks or anything involving a lot of personal information.
AI agents are still vulnerable
Altman highlights the risk of giving an AI agent broad permissions, like access to an email account, without oversight. For example, if you tell the agent to handle your emails and take any necessary actions, a malicious message could trick it into exposing sensitive data or doing something it shouldn't.
Researchers have repeatedly shown that AI agents can be manipulated with relatively simple prompts, sometimes leading to the disclosure of private information or unwanted actions.
Altman calls this version of ChatGPT agent an "experimental" system. He says it offers a preview of what's possible, but it isn't suited for high-risk or privacy-sensitive use cases.
"We don’t know exactly what the impacts are going to be, but bad actors may try to “trick” users’ AI agents into giving private information they shouldn’t and take actions they shouldn’t, in ways we can’t predict. We recommend giving agents the minimum access required to complete a task to reduce privacy and security risks," Altman writes.
For now, Altman recommends giving agents only the minimum access necessary and says OpenAI will rely on real-world feedback to refine its safety measures. But if something goes wrong or sensitive data is exposed, the responsibility falls on the user—not OpenAI. Anyone using the ChatGPT agent should be aware of the risks.
Altman defends this approach by saying, "We think it's important to begin learning from contact with reality, and that people adopt these tools carefully and slowly as we better quantify and mitigate the potential risks involved. As with other new levels of capability, society, the technology, and the risk mitigation strategy will need to co-evolve."
He may be right that learning from real-world use is necessary, but with hundreds of millions of ChatGPT users, this also means there will almost certainly be real-world victims along the way.