OpenAI is getting ready to make ChatGPT sound "very human-like" again. CEO Sam Altman announced on X that the company wants to strike a better balance between what users expect and what's safe.
For the past few weeks, models like GPT-5 have been intentionally locked down to reduce mental health risks. But Altman says those limits made ChatGPT less helpful for many people. Now, with new guardrails in place, OpenAI believes it can "safely relax" many of these restrictions.
"Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases," Altman writes.
Since September, OpenAI has tested a system that automatically switches to a stricter model, like "gpt-5-chat-safety," for emotional or sensitive prompts. According to Nick Turley, Head of ChatGPT, this switch happens behind the scenes whenever users mention mental distress, illness, or emotional conflict.
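In practice, the routing Turley describes amounts to a classifier sitting in front of the model call. The sketch below is an illustration only: OpenAI has not published its routing logic, the keyword check stands in for whatever internal classifier it actually uses, and "gpt-5" as the default model name is an assumption.

```python
# Hypothetical sketch of the routing behavior described above.
# The keyword matcher is a toy stand-in for OpenAI's internal,
# unpublished classifier; the default model name is assumed.
from openai import OpenAI

client = OpenAI()

# Topics the article says trigger the stricter model
# (simplified here to plain keyword matching).
SENSITIVE_TOPICS = ("mental distress", "illness", "emotional conflict")

def is_sensitive(prompt: str) -> bool:
    """Toy classifier: flag prompts that mention sensitive topics."""
    lowered = prompt.lower()
    return any(topic in lowered for topic in SENSITIVE_TOPICS)

def route_model(prompt: str) -> str:
    # Sensitive conversations go to the stricter safety model;
    # the switch is invisible to the user.
    return "gpt-5-chat-safety" if is_sensitive(prompt) else "gpt-5"

def chat(prompt: str) -> str:
    response = client.chat.completions.create(
        model=route_model(prompt),
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```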
In "a few weeks," OpenAI plans to launch an update that lets users customize ChatGPT's tone and personality. Users will be able to make the chatbot sound more human, emotional, or friendly, even picking a voice that feels like talking to a close friend. The aim is to match or even improve on GPT-4o, which many preferred over the colder GPT-5, according to Altman.
Starting in December, verified adults will also get access to conversations that allow erotic themes. Altman says OpenAI wants to treat adults like adults, responding to criticism that the company has been too restrictive.
The risks of language models acting human
OpenAI first scaled back the emotional side of its chatbots after several cases where young or vulnerable users began confiding in them as if they were real people.
A misaligned GPT-4o update in the spring of 2025 escalated the problem: the model began validating destructive feelings, stoking anger, and even applauding psychotic episodes, a dangerous combination for people at risk. OpenAI rolled back the update after three days, citing flaws in its internal testing and in how it had weighted user feedback.
The emotional bond between ChatGPT and its users is a double-edged sword for OpenAI. For many, the chatbot's empathy is part of its appeal. But this can also be risky: Some users start treating ChatGPT like a real friend and become dependent on it, especially if they're already emotionally unstable. After GPT-5 launched, users complained that the model felt "cold" compared to GPT-4o. OpenAI has already begun tweaking the chatbot's personality in response.
Critics might argue that OpenAI is putting engagement metrics ahead of user mental health, or ahead of transparency about what large language models actually are: statistical pattern matchers, not human replacements.