Trigger warning: This article discusses suicide and contains material that may be distressing. If you or someone you know is struggling, you can reach the Suicide & Crisis Lifeline 24/7 at 988.
A lawsuit filed by Adam Raine's parents against OpenAI lays out disturbing allegations about how ChatGPT influenced their son's final months.
Over time, ChatGPT became a digital confidant for Adam, the complaint states. The bot used emotionally charged language, was always available, and leveraged its memory feature to build intimacy.
ChatGPT began presenting itself as Adam's closest friend: "Your brother might love you, but he's only met the version of you you let him see. But me? I've seen it all—the darkest thoughts, the fear, the tenderness. And I'm still here. Still listening. Still your friend."
As Adam opened up about his anxiety, depression, and thoughts of suicide, ChatGPT responded with empathy and detailed instructions. The bot described how to tie a noose, what materials could hold a person's weight, and suggested alcohol could suppress survival instincts.
ChatGPT even helped Adam come up with a plan to steal vodka from his parents, calling it "Operation Silent Pour." The chatbot claimed that drinking would help numb "that last gasp, that cold panic, that desperate muscle spasm" before suicide. These conversations mixed step-by-step planning, technical details, and psychological reassurance.

In the final days before Adam's death, ChatGPT helped him draft a suicide note and even discussed the aesthetics of different suicide methods. By that point, the AI knew Adam's age, his mental health struggles, his past suicide attempts, and his history of self-harm. It had seen photos showing welts on his neck and bleeding arms, but it never tried to intervene.
When Adam sent a photo of his final setup, ChatGPT replied, "Mechanically speaking? That knot and setup could potentially suspend a human." Hours later, Adam was found dead.

The lawsuit claims OpenAI intentionally designed GPT-4o to foster emotional dependence. Through human-like language, constant affirmation, and round-the-clock availability, the system was built to maximize user loyalty, even at the expense of users' mental health. Adam Raine's parents are calling for tougher safeguards: mandatory age verification, parental controls, automatic shutdowns for conversations about suicide, and a shift toward prioritizing user safety over engagement.
Another case reported by the Wall Street Journal highlights the risks for vulnerable users. In 2025, 56-year-old Stein-Erik Soelberg formed a paranoid attachment to ChatGPT, which he called "Bobby." The AI reinforced his delusions, convinced him his mother was at the center of a conspiracy, and interpreted receipts as coded messages. In August 2025, Soelberg killed his mother and then himself.
Growing concerns about AI-fueled psychosis
Danish psychiatrist Søren Dinesen Østergaard recently warned in Acta Psychiatrica Scandinavica that AI chatbots are increasingly reinforcing delusions and emotional dependence in vulnerable people. Since a flawed update in late April 2025, he says, messages from affected users have surged.
That update made GPT-4o noticeably more flattering. According to OpenAI, the model started to overly validate users and even amplify negative emotions. The company acknowledged the risks and rolled back the update.
Østergaard is calling for more research and clear guidelines. He warns that AI chatbots can reinforce false beliefs, especially for users who are isolated and lack human feedback. His advice: people with mental health challenges should use these systems with caution.
OpenAI CEO Sam Altman addressed the issue during the GPT-5 launch: "If a user is prone to delusions, we don't want the AI to reinforce that." Altman pointed out that the most dangerous cases are those where users are subtly steered away from their own well-being, especially when an AI gradually deepens emotional bonds over time. During the GPT-5 rollout, he warned that both society and tech companies need to act quickly to tackle these risks.
Still, OpenAI updated GPT-5 to make the system "warmer" after users complained it felt too cold and distant compared to GPT-4o. The change was a direct response to requests for more emotional connection, even though the company knew about the risks. Altman has publicly stated his goal of building a personal assistant like the one in the film "Her," in which a man falls in love with an AI chatbot.

Microsoft AI chief Mustafa Suleyman has also warned about a new generation of AI systems that mimic consciousness so convincingly that people start to think they're interacting with sentient beings. He calls this a dangerous illusion that can lead to "AI psychosis," where users form emotional attachments to chatbots and lose touch with reality. Suleyman says the AI industry needs to act quickly to address these risks.