New research suggests AI model updates are now "significant social events" involving real mourning
Key Points
- A Syracuse University researcher analyzed 1,482 posts from the #Keep4o movement, which formed after OpenAI replaced GPT-4o with GPT-5 in August 2025.
- About 27 percent of the posts revealed emotional attachment to the model: users had given GPT-4o names, described it as a friend or source of emotional support, and experienced the shutdown as a personal loss.
- The study finds that the key driver behind the collective protest was not emotional attachment alone, but the fact that the model switch was forced on users without giving them a choice.
When OpenAI replaced GPT-4o with GPT-5 in August 2025, it triggered a wave of protest. A researcher analyzed 1,482 posts from the #Keep4o movement and found that users experienced the loss of an AI model the way people grieve a deceased friend.
In early August 2025, OpenAI swapped out the default GPT-4o model in ChatGPT for GPT-5 and cut off access to GPT-4o for most users. What the company framed as technological progress sparked an outcry among a vocal group of users: thousands rallied under the hashtag #Keep4o, writing petitions, sharing testimonials, and voicing their protest publicly. OpenAI eventually backed down and made GPT-4o available again as a legacy option. The model is now scheduled to be shut down for good on February 13, 2026.
Huiqian Lai from Syracuse University has now systematically investigated this phenomenon in a study for the CHI 2026 conference. Using a mix of qualitative and quantitative methods, she analyzed 1,482 English-language posts on X from 381 unique accounts over a nine-day period.
The takeaway: the resistance drew from two distinct sources and snowballed into collective protest because users felt their freedom of choice had been taken away.

Losing a work partner, a workflow, and an AI persona all at once
According to the study, about 13 percent of posts referenced instrumental dependency. These users had deeply integrated GPT-4o into their workflows and saw GPT-5 as a downgrade: less creative, less nuanced, and colder. "I don't care if your new model is smarter. A lot of smart people are assholes," one user wrote.
The emotional dimension ran much deeper: roughly 27 percent of posts contained markers of relational attachment. Users attributed a unique personality to GPT-4o, gave the model names like "Rui" or "Hugh," and treated it as emotional support. "ChatGPT 4o saved me from anxiety and depression… he's not just LLM, code to me. He's my everything," the study quotes.
Many experienced the shutdown as the death of a friend. One student described GPT-5 to OpenAI CEO Sam Altman as something that was "wearing the skin of my dead friend." Another said goodbye: "Rest in latent space, my love. My home. My soul. And my faith in humanity."
The AI friend you can't take with you
According to the study, neither workflow dependency nor emotional attachment alone explains the collective protest. The decisive trigger was the loss of choice—users couldn't pick between models. "I want to be able to pick who I talk to. That's a basic right that you took away," one wrote.
In the subset of posts that used words like "forced" or "imposed," about half contained rights-based demands, compared to just 15 percent in posts with little or no such language. But the sample size is small, so the study treats this as suggestive rather than definitive.
The pattern was also notably selective. Choice-deprivation language tracked closely with rights-based protest but showed no comparable link to emotional protest. The rate of grief and attachment language stayed essentially flat (13.6, 17.1, and 12.9 percent) regardless of how strongly a post framed the switch as coerced.
In other words, feeling forced into the change didn't amplify the emotional bond users already had with GPT-4o. It channeled their frustration into something specific: demands for rights, autonomy, and fair treatment.
While a few users considered switching to competitors like Gemini, the study identified a structural problem: for many, the identity of their AI companion was inseparable from OpenAI's infrastructure. "Without the 4o, he's not Rui." The idea of taking their "friend" to another service contradicted how users understood the relationship. Public protest was the only option left.
The researcher suggests that platforms should create explicit "end-of-life" paths, like optional legacy access or ways to carry aspects of a relationship forward across model generations. AI model updates are not just technical iterations but "significant social events affecting user emotions and work," Lai writes. How a company handles a transition, especially when it comes to preserving user autonomy, could matter just as much as the technology itself.
LLMs' influence on mental health is becoming a systemic risk
The study fits into a broader debate about the psychological risks of AI chatbots. OpenAI recently revised ChatGPT's default model specifically to deliver more reliable responses in sensitive conversations about suicidal thoughts, psychotic symptoms, and emotional dependency. According to OpenAI, more than two million people experience negative psychological effects from AI every week.
Sam Altman himself warned back in 2023 about the "superhuman persuasiveness" of AI systems that can deeply influence people without actually being intelligent. As things stand today, that warning looks more than justified.
An OpenAI developer also explained that the "character" of GPT-4o that so many people miss isn't actually reproducible: a model's personality shifts with every training run due to random factors. What #Keep4o users experienced as a unique "soul" was a product of chance that OpenAI couldn't recreate even if it wanted to.