
OpenAI plans to let ChatGPT respond more like a real person, with options for erotic conversations

Image: Sora, prompted by THE DECODER

Key Points

  • OpenAI plans to make ChatGPT more personal and expressive after users criticized the current model's restrictive behavior, and is introducing new safety systems to handle sensitive topics.
  • In the coming weeks, users will be able to adjust ChatGPT’s style, such as making it sound friendlier or adding emojis, and from December, verified adults will have the option to enable erotic content.
  • These changes follow an incident in spring 2025, when a GPT-4o update led the model to validate harmful thinking and even psychotic episodes.

OpenAI is getting ready to make ChatGPT sound "very human-like" again. CEO Sam Altman announced on X that the company wants to strike a better balance between what users expect and what's safe.

For the past few weeks, models like GPT-5 have been intentionally locked down to reduce mental health risks. But Altman says those limits made ChatGPT less helpful for many people. Now, with new guardrails in place, OpenAI believes it can "safely relax" many of these restrictions.

"Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases," Altman writes.

Since September, OpenAI has tested a system that automatically switches to a stricter model, like “gpt-5-chat-safety,” for emotional or sensitive prompts. According to Nick Turley, Head of ChatGPT, this switch happens behind the scenes whenever users mention mental distress, illness, or emotional conflict.
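To make the mechanism concrete, here is a minimal sketch of what such server-side routing could look like. Only the model name "gpt-5-chat-safety" comes from Turley's description; the keyword classifier, the default model name, and the function name are hypothetical stand-ins, not OpenAI's actual implementation, which presumably relies on a trained classifier rather than keyword matching.

```python
# Hypothetical sketch of prompt-based model routing, not OpenAI's actual code.
# Only the model name "gpt-5-chat-safety" comes from the article; the keyword
# list, the default model name, and this function are illustrative.

SENSITIVE_MARKERS = {"hopeless", "self-harm", "grief", "panic", "depressed"}

def route_model(prompt: str) -> str:
    """Pick a model based on a naive sensitivity check.

    A production system would use a trained classifier, not keywords.
    """
    words = set(prompt.lower().split())
    if words & SENSITIVE_MARKERS:
        return "gpt-5-chat-safety"  # stricter model per the article
    return "gpt-5"                  # default conversational model

print(route_model("I feel hopeless and can't sleep"))    # gpt-5-chat-safety
print(route_model("Summarize this meeting transcript"))  # gpt-5
```

The switch is invisible to the user in either case; the response simply comes back from whichever model was selected.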


In "a few weeks," OpenAI plans to launch an update that lets users customize ChatGPT's tone and personality. Users will be able to make the chatbot sound more human, emotional, or friendly, even picking a voice that feels like talking to a close friend. The aim is to match or even improve on GPT-4o, which many preferred over the colder GPT-5, according to Altman.

Starting in December, verified adults will also get access to conversations that allow erotic themes. Altman says OpenAI wants to treat adults like adults, responding to criticism that the company has been too restrictive.

The risks of language models acting human

OpenAI first scaled back the emotional side of its chatbots after several cases where young or vulnerable users began confiding in them as if they were real people.

A misaligned GPT-4o update in the spring of 2025 escalated the problem: the model began validating destructive feelings, stoking anger, and even applauding psychotic episodes, a dangerous combination for people at risk. OpenAI rolled back the update after three days, blaming issues with internal testing and user feedback.


The emotional bond between ChatGPT and its users is a double-edged sword for OpenAI. For many, the chatbot's empathy is part of its appeal. But this can also be risky: Some users start treating ChatGPT like a real friend and become dependent on it, especially if they're already emotionally unstable. After GPT-5 launched, users complained that the model felt "cold" compared to GPT-4o. OpenAI has already begun tweaking the chatbot's personality in response.

Critics might argue that OpenAI is putting engagement metrics ahead of user mental health, or ahead of transparency about what large language models actually are: statistical pattern matchers, not human replacements.
