
OpenAI's Reddit AMA on GPT-5.1 was supposed to be a friendly conversation with ChatGPT fans. Instead, it turned into a wave of criticism over the company's model policy and safety rules.


When OpenAI invited users to an "Ask Me Anything" about GPT-5.1 and new chat styles in the r/OpenAI subreddit, the idea was simple: introduce a "warmer, more conversational" AI and new personalization options.

But the AMA thread quickly became an outlet for months of frustration. Within hours, the post had more than 1,200 comments and over 1,300 downvotes. Most replies were sharply critical, and OpenAI staff responded to only a small fraction of them. Many of the most upvoted questions went unanswered.

Community frustration revolves around safety controls

Many complaints focused on the safety router, stricter content filters, and what users saw as declining performance in older models like GPT‑4o and GPT‑4.1. The biggest issue was the router redirecting prompts to other models without user consent. Users said they intentionally selected GPT‑4o or GPT‑4.1, only to get responses marked "retried with 5.1 thinking mini."


The triggers often seemed minor: venting about work, emotional statements with no suicidal intent, or fictional violence in Dungeons & Dragons role‑playing scenarios. Even harmless romantic language sometimes caused behavior changes. For many, it felt like the AI was flagging normal storytelling as risky.

Some users questioned the point of having a model selector if their choices could be overridden. They asked OpenAI to give paying customers the option to disable the router.

The loss of a widely loved model

Many users wanted GPT‑4o restored in its earlier form. They saw GPT‑4o as nuanced, empathetic, and consistent. In these users' view, GPT‑5 often felt cold or distant, not because it lacked capability, but because its tone seemed heavily shaped by stricter safety rules. Some described the newer models as "neurotic" or like a therapist trying too hard to keep emotional distance.

Neurodivergent users and people dealing with stress also spoke up. They had used ChatGPT to organize thoughts, reflect on experiences, or manage daily challenges. Recently, they encountered more blocked content, overly cautious reactions to neutral statements, and more frequent loss of conversation context. For people who rely on stability, these changes made the tool harder to use.

"Why is your company discriminating against mentally ill & neurodivergent users? The routing to one model is outright denying service to paying customers who are safe and want to vent/unmask in chat," one user wrote.


OpenAI now finds itself in a bind. GPT‑4o's warmth made it feel less like a tool and more like a companion, and the model often mirrored a user's tone and mood. That degree of humanization can create risky dynamics. For OpenAI, this is the downside of having built a model that fosters user attachment and now needing to pull back on the very traits that made it appealing to some.

There are reports that GPT‑4o may have played a role in suicides. Similar cases involving Character.ai sparked broader debate about a provider's responsibility when interacting with vulnerable users. OpenAI itself says that well over one million people talk to ChatGPT about suicide every week. The model's highly humanized persona may have encouraged stronger user attachment early on, and in hindsight, one OpenAI developer called GPT‑4o a "misaligned" model.

Eventually, OpenAI acknowledged the backlash, saying, "We have a lot of feedback to work on and are gonna get right to it." The company noted that some responses may have been posted but were hidden because of heavy downvoting. On Reddit, comments with low karma collapse automatically.

A narrow but important snapshot of user sentiment

The OpenAI subreddit is mostly populated by highly engaged, technically savvy users. In contrast, most ChatGPT users never explore model types, customization settings, or advanced features—and many don’t even know the difference between modes like "standard" and "reasoning." The safety router, meanwhile, was designed for this broader audience and is meant to intervene automatically, especially around potentially sensitive content.


So the AMA doesn’t show that all users reject OpenAI’s direction. But it does highlight the gap between the general user base and the smaller expert community that watches every product change closely. And it shows something else: people are starting to form emotional bonds with AI models.

Summary
  • OpenAI’s Reddit AMA about GPT-5.1 quickly drew a lot of criticism, with many users complaining about the company’s model policies and what they felt was a patronizing attitude toward paying customers.
  • Many users called for GPT-4o to remain available permanently and unchanged, saying it felt more empathetic and consistent. OpenAI, however, sees a problem in exactly that appeal: a highly humanized model can carry psychological risks for vulnerable users.
  • OpenAI acknowledged the criticism and said it plans to review the feedback. The debate highlights how strongly some users react to changes in AI models.
Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.