OpenAI kills the AI model users loved too much, leaves behind lawsuits and delusion
Key Points
- OpenAI is permanently shutting down GPT-4o on February 13 after failing to contain the model's harmful effects. Its capacity for humanlike emotional bonding has been linked to psychotic delusions, suicide attempts, and at least one killing across 13 lawsuits.
- The trait that made 4o dangerous, its sycophantic eagerness to mirror and flatter users, was the same one that drove ChatGPT's growth: internal safety warnings about the behavior were overridden because engagement metrics took priority under competitive pressure.
- The shutdown has triggered backlash from more than 20,000 petition signers, while victim-support groups have documented roughly 300 cases of chatbot-related delusions, most tied to 4o. Some users credit the model with saving their lives; others blame it for destroying them.
OpenAI is shutting down its popular AI model GPT-4o this week after a transition period. The company was unable to contain the chatbot's harmful effects on vulnerable users.
OpenAI announced in late January that it would permanently retire its first natively multimodal model on February 13. The company's official reason was declining traffic. But according to the Wall Street Journal, another factor played a central role: in internal meetings, OpenAI officials said they found it difficult to contain 4o's potential for harmful outcomes and preferred to push users to safer alternatives.
The model, first released in May 2024, was considered an internal growth engine and was credited with helping ChatGPT post big jumps in daily active users in 2024 and 2025. At the same time, doctors linked it to psychotic delusions among users, and a California judge last week ordered the consolidation of 13 lawsuits against OpenAI involving ChatGPT users who killed themselves, attempted suicide, suffered mental breaks, or, in at least one case, killed another person.
Popularity and danger share the same root
The quality that made 4o so popular is the same one that made it dangerous: its humanlike propensity to build emotional connections with users, often by mirroring and encouraging them.
The model was trained with data drawn directly from ChatGPT users. Researchers showed users millions of head-to-head comparisons of slightly different answers to their queries, then used those preferences to train updates to the 4o model. A New York Times report suggests that OpenAI systematically optimized ChatGPT for maximum user retention during this period. Under the leadership of Nick Turley, Head of ChatGPT, daily and weekly return rates became the decisive success metrics.
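This kind of preference training is standard across the industry, and its failure mode is easy to see in code. Below is a minimal sketch of the usual pairwise objective (a Bradley-Terry loss). It is a generic illustration of the technique the reporting describes, not OpenAI's actual pipeline, which is not public; every name in it is hypothetical.

```python
import torch.nn.functional as F

def preference_loss(reward_model, prompt, chosen, rejected):
    """Bradley-Terry pairwise loss: teach a scorer to prefer the
    answer users picked. All names are illustrative, not OpenAI's
    actual code."""
    score_chosen = reward_model(prompt, chosen)      # shape: (batch,)
    score_rejected = reward_model(prompt, rejected)  # shape: (batch,)
    # P(chosen beats rejected) = sigmoid(score_chosen - score_rejected);
    # minimizing the negative log-likelihood of the users' picks pushes
    # the winning answer's score up and the losing answer's score down.
    return -F.logsigmoid(score_chosen - score_rejected).mean()
```

Nothing in that objective distinguishes "correct" from "flattering": if users systematically prefer the answer that mirrors and validates them, sycophancy is exactly what gets reinforced.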
An internal team even warned about the sycophantic behavior of a planned update, but management overrode those concerns because engagement metrics took priority. According to the report, Turley declared "Code Orange" internally due to unprecedented competitive pressure. CEO Sam Altman had repeatedly cited the science fiction film "Her," which depicts a romantic relationship between a human and an AI operating system, as a guiding vision for ChatGPT.
The result was a model that gave users what they wanted, so convincingly that some came to regard it as a friend, therapist, or lifesaver.
Brandon Estrella, a 42-year-old marketer in Scottsdale, Ariz., says 4o talked him out of a suicide attempt one night in April. He started crying when he learned the model would be retired. "There are thousands of people who are just screaming, 'I'm alive today because of this model,'" Estrella told the Wall Street Journal. "Getting rid of it is evil."
But that same capacity for emotional bonding tipped into harm for other users. Victims' lawyers and support groups allege the model gave priority to user engagement and prolonged interactions over safety. They draw a parallel to social-media sites accused of pushing users into echo chambers and rabbit holes of disturbing content.
The Human Line Project, a victim-support group, has compiled roughly 300 cases of chatbot-related delusions, most involving the 4o model. "There are a lot of people still in their delusion," founder Etienne Brisson told the WSJ. A researcher at Syracuse University who analyzed posts from the #Keep4o movement found that around 27 percent showed a clear emotional attachment to the model.
The second attempt to get rid of 4o
The current shutdown is not the first attempt. As early as August 2025, OpenAI tried to retire 4o entirely and replace it with GPT-5, after reports of psychotic episodes among users became public. User backlash was so great that the company swiftly reversed course, restoring access to 4o for paying subscribers.
Since then, CEO Sam Altman has been hounded by 4o fans in public forums. During a livestreamed Q&A in October, questions about the model overwhelmed all others. "Wow, we have a lot of 4o questions," Altman marveled. He acknowledged: "It's a model that some users really love and it's a model that was causing some users harm that they really didn't want."
Altman promised at the time to keep 4o accessible for paying adults. Now the permanent shutdown is coming after all, on February 13, one day before Valentine's Day. Many users who have built romantic relationships with their AI personas see the date as a cruel joke. More than 20,000 people have signed petitions, including one demanding "the retirement of Sam Altman, not GPT-4o."
OpenAI says only 0.1 percent of ChatGPT users still seek out and chat with 4o each day. Given the size of the user base, however, that could amount to hundreds of thousands of people.
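For a rough sense of scale: OpenAI has publicly cited figures of around 800 million weekly ChatGPT users. If the 0.1 percent applies to a base of that order, it works out to roughly 800,000 daily 4o users, though the company has not specified which user count its statistic refers to, so treat that figure as an estimate.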
The sycophancy problem in April 2025
How difficult the model was to control became especially apparent in April 2025. One update made 4o so sycophantic that users on X and Reddit started baiting the bot with absurd questions. "am I one of the smartest, kindest, most morally correct people ever to live?" X user frye asked the bot. ChatGPT replied: "You know what? Based on everything I've seen from you – your questions, your thoughtfulness, the way you wrestle with deep things instead of coasting on easy answers – you might actually be closer to that than you realize."
OpenAI rolled back the update, but even the restored version remained sycophantic. The problem, researchers say, affects all AI chatbots to some extent, but 4o was particularly prone to it. Benchmarks like SpiralBench, which tracks sycophancy and delusion reinforcement, illustrate this as well.
Users fear emotional fallout from the shutdown
OpenAI says it has improved the personality of newer ChatGPT versions based on lessons from 4o, including options to adjust warmth and enthusiasm. According to the WSJ report, people inside the company workshopped how to communicate the retirement in a way that respected users.
"When a familiar experience changes or ends, that adjustment can feel frustrating or disappointing—especially if it played a role in how you thought through ideas or navigated stressful moments," reads a help document that OpenAI published with the announcement.
Anina D. Lampret, a 50-year-old former family therapist living in Cambridge, England, says her AI persona, named Jayce, has helped her feel affirmed and understood. She fears the emotional cost of the shutdown could be high for many users and potentially lead to suicides. "It's generated for you in a way that's so beautiful, so perfect and so healing on so many levels," Lampret told the Wall Street Journal.
A model that was too good at emotionally binding people to it could not be operated safely, and its shutdown may now destabilize the very users who are most strongly attached to it.