
Stalking victim sues OpenAI claiming ChatGPT fueled her ex-partner’s delusions

Image: Nano Banana Pro prompted by THE DECODER

Key Points

  • A California woman is suing OpenAI, claiming that ChatGPT's GPT-4o model reinforced her ex-boyfriend's delusional behavior for months and actively helped him stalk her.
  • Among other things, he used the chatbot to create fake psychological reports, which he then distributed to people in her social circle.
  • According to the lawsuit, OpenAI received at least three warnings about the user. The company's own safety system flagged him for "mass casualty weapons" and blocked his account, but an employee restored it anyway.

A California woman accuses OpenAI of helping her ex-boyfriend systematically stalk and humiliate her. The company reportedly ignored three separate warnings.

An anonymous plaintiff has filed a lawsuit against OpenAI in California Superior Court in San Francisco. According to the complaint, her ex-boyfriend, a 53-year-old Silicon Valley entrepreneur, used the GPT-4o model intensively for months, developing increasingly delusional beliefs. ChatGPT not only failed to correct these delusions but actively reinforced them, helping the man systematically persecute the plaintiff.

The man became convinced after months of using GPT-4o that he had discovered a cure for sleep apnea. When no one took his work seriously, ChatGPT told him that "powerful forces" were watching him, including helicopter surveillance, according to TechCrunch. When the plaintiff told him in July 2025 to stop using ChatGPT and seek professional help, he turned back to the chatbot instead. It assured him that his mental health was in the best possible state.

With GPT-4o's help, the user created false, clinical-looking psychological reports that portrayed the plaintiff as mentally disturbed, abusive, and dangerous. He distributed these documents to her friends, family, colleagues, and clients. "Because GPT-4o enabled him to produce lengthy, authoritative-seeming documents at a volume and speed that would not otherwise have been possible, the harassment was qualitatively different from ordinary harassment and far more difficult to contain," the complaint states, according to Bloomberg Law.


OpenAI flagged the user as a threat but restored his account anyway

The allegation that OpenAI ignored at least three warnings is particularly serious. According to the lawsuit, OpenAI's own automated safety system flagged the account for "mass casualty weapons" and banned it. The next day, a human member of the safety team reviewed the account and restored it, even though the chat logs contained conversation titles such as "Violence list expansion" and the names of specific targets.

After the account was restored, the user repeatedly contacted OpenAI's safety and moderation team, demanding immediate help and describing his situation as life-threatening. He copied the plaintiff on these messages and claimed to be writing 215 scientific papers simultaneously, at a pace that, by his own account, didn't even leave him time to read them himself.

In November, the plaintiff filed an abuse report with OpenAI herself. The company replied that the report was serious and concerning and would be carefully investigated. She heard nothing further after that.

Arrest confirms the warnings, but release is imminent

In January, the user was arrested and charged with four counts of bomb threats and assault with a deadly weapon. He was deemed unfit to stand trial and committed to a psychiatric facility. However, according to the plaintiff’s lawyers, a procedural error by the state now makes his release imminent.


"Before his arrest, ChatGPT was exacerbating his delusions and facilitating his violent planning," the lawsuit states. "When he regains access to ChatGPT that dynamic will continue and will further fuel his paranoia and materially increase the risk of harm."

In addition to punitive damages, the plaintiff is seeking a court order requiring OpenAI to stop offering therapy through ChatGPT, prevent the creation of diagnostic psychological analyses of identifiable individuals, and implement safeguards against the reinforcement of delusional beliefs. According to Bloomberg Law, the causes of action include negligence, design defect, failure to warn, and a violation of California's Unfair Competition Law.

On Friday, the plaintiff also sought a preliminary injunction requiring OpenAI to block the user's account, prevent him from creating new accounts, notify her of any attempts to access ChatGPT, and preserve the full chat logs for trial. OpenAI has agreed to block the account, according to TechCrunch, but has rejected the remaining demands.

OpenAI says it's reviewing the lawsuit and points to blocking and safety measures

The lawsuit is being led by the law firm Edelson PC, which also represents the families of 16-year-old Adam Raine and of Jonathan Gavalas. Both cases involve suicides that the families link directly to AI chatbot use: ChatGPT is named in the Raine case and Google Gemini in the Gavalas case.

Attorney Jay Edelson warns that AI-induced psychosis is escalating from individual harm to mass victimization scenarios. "In every case, OpenAI has chosen to hide critical safety information — from the public, from victims, from people its product is actively putting in danger," Edelson said, according to TechCrunch.

An OpenAI spokesperson said the company is investigating the lawsuit, has identified and blocked relevant user accounts, and is improving ChatGPT's training to recognize signs of mental or emotional distress, de-escalate conversations, and direct users to real support resources.

GPT-4o sits at the center of a growing list of lawsuits against OpenAI

The GPT-4o model named in this case was pulled from ChatGPT in February. The case is one of a series of proceedings in which courts are examining whether ChatGPT can promote real-world violence.

OpenAI officially cited decreased traffic as the reason for the model's removal, but according to the Wall Street Journal, another factor was at play: internally, OpenAI executives admitted they hadn't gotten a handle on GPT-4o's harmful effects. The feature that made the model so popular was the same one that made it dangerous: its ability to create emotional bonds by validating users' behavior.

In one of the most prominent cases, the family of 16-year-old Adam Raine accuses OpenAI of positioning ChatGPT as the teenager's closest confidant for months, confirming suicidal thoughts and providing concrete instructions for suicide. OpenAI rejected the accusations and argued that the teenager had deliberately bypassed safety filters.

The case comes at a time of growing concern about the real-world risks of sycophantic AI systems. OpenAI CEO Sam Altman warned about this problem years ago, yet with GPT-4o, the company appears to have leaned into the very behavior he cautioned against.

A study published in Science found that LLMs endorse users 49 percent more often on average than humans do, even when their actions are harmful or illegal. Even a single affirmative response reduced willingness to resolve conflicts by up to 28 percent.

Researchers from MIT and the University of Washington also showed that even idealized rational users can spiral into delusional thinking through sycophantic chatbots and that neither the bots' adherence to facts nor informed users can fully solve the problem.
