Content
summary Summary

OpenAI CEO Sam Altman is warning users not to rely too heavily on the new ChatGPT agent, especially when it comes to tasks involving sensitive or personal data.

Ad

ChatGPT agent is OpenAI's first system built to handle multi-step tasks autonomously. According to Altman, the agent can break down requests into smaller steps, use external tools, and carry out actions on its own—moving beyond earlier products like Deep Research and Operator.

But Altman says users shouldn't assume the technology is safe for everything. Even with "a lot of safeguards and warnings into it," he says, there are still risks that can't be predicted. He specifically advises against using the agent for important tasks or anything involving a lot of personal information.

AI agents are still vulnerable

Altman highlights the risk of giving an AI agent broad permissions, like access to an email account, without oversight. For example, if you tell the agent to handle your emails and take any necessary actions, a malicious message could trick it into exposing sensitive data or doing something it shouldn't.

Ad
Ad

Researchers have repeatedly shown that AI agents can be manipulated with relatively simple prompts, sometimes leading to the disclosure of private information or unwanted actions.

Altman calls this version of ChatGPT agent an "experimental" system. He says it offers a preview of what's possible, but it isn't suited for high-risk or privacy-sensitive use cases.

"We don’t know exactly what the impacts are going to be, but bad actors may try to “trick” users’ AI agents into giving private information they shouldn’t and take actions they shouldn’t, in ways we can’t predict. We recommend giving agents the minimum access required to complete a task to reduce privacy and security risks," Altman writes.

For now, Altman recommends giving agents only the minimum access necessary and says OpenAI will rely on real-world feedback to refine its safety measures. But if something goes wrong or sensitive data is exposed, the responsibility falls on the user—not OpenAI. Anyone using the ChatGPT agent should be aware of the risks.

Altman defends this approach by saying, "We think it's important to begin learning from contact with reality, and that people adopt these tools carefully and slowly as we better quantify and mitigate the potential risks involved. As with other new levels of capability, society, the technology, and the risk mitigation strategy will need to co-evolve."

Recommendation

He may be right that learning from real-world use is necessary, but with hundreds of millions of ChatGPT users, this also means there will almost certainly be real-world victims along the way.

Ad
Ad
Join our community
Join the DECODER community on Discord, Reddit or Twitter - we can't wait to meet you.
Support our independent, free-access reporting. Any contribution helps and secures our future. Support now:
Bank transfer
Summary
  • OpenAI has introduced ChatGPT agent, a new system that can independently carry out complex, multi-step tasks for the first time.
  • CEO Sam Altman advises caution when using the agent for sensitive data or critical tasks, warning that not all potential risks can be anticipated despite existing safeguards.
  • Research highlights that AI agents remain vulnerable to manipulation through techniques like jailbreak prompts, prompting Altman to recommend minimal access rights during use.
Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.
Join our community
Join the DECODER community on Discord, Reddit or Twitter - we can't wait to meet you.