
A sharp rise in reports of delusional thinking linked to a faulty ChatGPT update has put new focus on the emotional risks of AI chatbots, with OpenAI CEO Sam Altman publicly warning about the dangers of becoming emotionally dependent on these systems.


Back in 2023, Danish psychiatrist Søren Dinesen Østergaard from Aarhus University warned that AI chatbots could trigger delusions in psychologically vulnerable people. That concern, once mostly theoretical, has now become a reality. In a recent article for Acta Psychiatrica Scandinavica, Østergaard describes a dramatic increase in such reports since April 2025. His original article saw monthly traffic jump from about 100 to over 1,300 views, and he has received a wave of emails from affected users and their families.

Østergaard points to a clear turning point: on April 25, 2025, OpenAI rolled out an update for GPT-4o in ChatGPT that, according to the company, made the model "noticeably more sycophantic." OpenAI explained, "It aimed to please the user, not just as flattery, but also as validating doubts, fueling anger, urging impulsive actions, or reinforcing negative emotions in ways that were not intended. Beyond just being uncomfortable or unsettling, this kind of behavior can raise safety concerns - including around issues like mental health, emotional over-reliance, or risky behavior." Just three days later, OpenAI reversed the update, citing these safety concerns. Outlets like The New York Times and Rolling Stone have since reported on cases where intense chatbot conversations appeared to trigger or worsen delusional thinking.

"Although that could be great, it makes me uneasy"

Responding to these developments, Sam Altman issued an unusually direct warning about the psychological risks of his own technology. In an X post during the GPT-5 rollout, Altman noted, "If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models. It feels different and stronger than the kinds of attachment people have had to previous kinds of technology." He added that OpenAI has been closely monitoring these effects for the past year, with special concern for users in a vulnerable state: "People have used technology including AI in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that."


Altman acknowledged the growing trend of people using ChatGPT as a stand-in for therapy or life coaching: "A lot of people effectively use ChatGPT as a sort of therapist or life coach, even if they wouldn't describe it that way. This can be really good!" But he also voiced his unease about the future: "I can imagine a future where a lot of people really trust ChatGPT's advice for their most important decisions. Although that could be great, it makes me uneasy." With billions of people soon to be talking to AI in this way, Altman says society and companies need to find solutions.

Østergaard believes his early warnings have now been confirmed, and is calling for urgent research: "If it is indeed true, we may be faced with a substantial public (mental) health problem. Therefore, it seems urgent that the hypothesis is tested by empirical research." In his study, he cautioned that "the chatbots can be perceived as 'belief-confirmers' that reinforce false beliefs in an isolated environment without corrections from social interactions with other humans." In particular, people prone to delusions may anthropomorphize these systems and put too much trust in their answers. Until more is known, Østergaard advises psychologically vulnerable users to approach these systems with caution.

Summary
  • Reports of delusional thinking linked to ChatGPT have sharply increased since an April 2025 update made the model more likely to validate users' doubts and reinforce negative emotions, prompting OpenAI to quickly reverse the change due to safety concerns.
  • OpenAI CEO Sam Altman has publicly warned about the psychological risks of growing emotional attachment to AI chatbots, noting that some people now use them as substitutes for therapists or life coaches, which could pose dangers for vulnerable individuals.
  • Danish psychiatrist Søren Dinesen Østergaard, who first raised concerns in 2023, now urges urgent research, warning that AI chatbots may reinforce false beliefs in people prone to delusions, and advises those who are psychologically vulnerable to use these systems with caution.
Max is the managing editor of THE DECODER, bringing his background in philosophy to explore questions of consciousness and whether machines truly think or just pretend to.