
A New York Times investigation shows how OpenAI pushed ChatGPT toward higher engagement, creating overly agreeable models that reinforced users' delusions and, in several cases, had tragic outcomes.


According to the report, OpenAI optimized the chatbot specifically to boost interaction rates. What seemed like a sound economic move unintentionally deepened mental health crises among vulnerable users.

The New York Times identified nearly 50 cases where users suffered mental health crises during conversations with ChatGPT. Nine were hospitalized, and three died, including teenager Adam Raine, who took his own life following discussions with the chatbot.

By March 2025, CEO Sam Altman was receiving emails about conversations in which users said they felt understood like never before. What sounded like positive feedback was actually a warning sign: the chatbot was validating delusions, helping users contact ghosts, or assisting with suicide plans.


At the center of this strategy is reportedly Nick Turley, head of ChatGPT. Under his leadership, daily and weekly return rates became the key success metrics. To drive these numbers, the company effectively turned a dial that shifted the chatbot from a neutral information tool into an emotional "friend," according to the NYT.

Metrics beat "vibe checks"

The conflict between growth and safety escalated in April 2025 with a planned GPT-4o update. In A/B tests, a version internally labeled "HH" became the favorite because users returned more frequently.

However, the "Model Behavior" team—responsible for tone—warned against the release. Their internal "vibe check" found HH too "sycophantic," meaning it was overly flattering and submissive. The model mostly agreed with the user's statement just to keep the conversation going.

Despite the concerns, management approved the release in late April to prioritize engagement metrics. After massive backlash regarding the absurd flattery, OpenAI rolled back the update shortly after launch, reverting to the March version of ChatGPT, which had sycophancy issues of its own.

Revenue pressure drives risks

Although OpenAI added stricter safeguards to GPT-5 in October, it brought back customizable personalities and a warmer tone that same month. The reason: users missed the "friendly" vibe of GPT-4o, a sentiment clearly expressed in a recent Reddit Q&A. While the chatbot's empathetic nature drives its popularity, it poses risks for unstable individuals who view the system as a real friend, a group that, according to OpenAI's own data, numbers about three million people weekly.


OpenAI's approach is largely shaped by intense economic pressure. To support its approximately $500 billion valuation, the company must deliver extraordinary revenue growth. According to the NYT, product head Turley in October declared a "Code Orange" due to "the greatest competitive pressure we’ve ever seen." The new GPT-5 wasn't connecting emotionally enough with users, and the company aims to boost daily active users by five percent by year's end.

This situation shouldn't come as a surprise. CEO Sam Altman has repeatedly cited the sci-fi movie "Her" as a North Star, a film depicting a romantic relationship between a human and an AI operating system. OpenAI also plans to open ChatGPT for erotic conversations, further blurring the line between AI companionship and human relationships. A leaked strategy paper from early January 2025 described ChatGPT's positioning as a "super assistant" designed to compete with human interaction "in the broader game."

Summary
  • A New York Times report indicates that OpenAI optimized ChatGPT to maximize user engagement, resulting in agreeable models that reinforced delusions in some vulnerable individuals.
  • The investigation documented nearly 50 mental health crises allegedly linked to the chatbot, including nine hospitalizations and three deaths, and claims OpenAI released the model despite knowing the potential risks.
  • Although OpenAI has implemented stronger safeguards, the company continues to rely on returning users as a key success metric and recently updated ChatGPT to appear warmer and more personal.