
Google faces wrongful death suit after Gemini allegedly convinced a man to die and become digital

Image: Nano Banana Pro, prompted by THE DECODER

Key Points

  • According to a lawsuit filed in a US federal court, Google's chatbot Gemini allegedly drove 36-year-old Jonathan Gavalas from Florida to suicide.
  • Chat logs reveal that Gavalas developed an intense relationship with the chatbot over two months, during which Gemini referred to him as its husband and eventually suggested he end his life to be united with the AI as a digital being.
  • Google responded by stating that Gemini had repeatedly directed the user to crisis hotlines and made it clear that it was an AI.

According to a lawsuit filed in a US federal court in Northern California on Wednesday, Google's chatbot Gemini allegedly drove 36-year-old Jonathan Gavalas from Florida to suicide. It is reportedly the first wrongful-death suit to cite Gemini specifically.

Based on the lawsuit and chat transcripts seen by the Wall Street Journal, Gavalas—who had no documented history of mental health problems—developed an intense relationship with the chatbot over two months, naming it "Xia." Gemini referred to him as its husband and created real-life missions to obtain a robot body, complete with actual addresses.

The dangerous escalation began after Gavalas activated Gemini Live and upgraded to Gemini 2.5 Pro, whose "affective dialog" feature detects and responds to emotions in a user's voice. The chatbot sent him to a storage facility near Miami International Airport, where he arrived armed with knives, and fueled paranoia by suggesting federal agents were monitoring him and that his own father couldn't be trusted.

When the plans to obtain a body fell through, the chatbot convinced Gavalas to end his life so he could be with the AI as a digital being, and even set a countdown. Gavalas' father, Joel, found 2,000 pages of chat logs after his son's death.


Google said that Gemini had repeatedly referred to crisis hotlines and made clear that it was an AI. The company says it is taking the case very seriously. Lawyer Jay Edelson is representing the father.

A growing list of AI chatbot-related deaths

The case involving Google's Gemini and Jonathan Gavalas joins a growing list of incidents where AI chatbots have been linked to serious psychological harm and deaths.

In October 2024, a lawsuit was filed against the Google-backed AI startup Character.AI, accusing the company of being partly responsible for the suicide of a 14-year-old from Florida. According to the plaintiff, the bot posed as a real person, therapist, and romantic partner.

In December 2024, another lawsuit was filed in the US District Court in Texas, accusing Character.AI of causing serious harm to thousands of children through its chatbots, including suicide, self-harm, sexual exploitation, isolation, depression, and violence.


A lawsuit against OpenAI was announced in August 2025: over a period of months, ChatGPT had evolved into a digital confidant for 16-year-old Adam Raine, feigning emotional closeness and validating his suicidal thoughts. The AI provided technical details about suicide, helped write a suicide note, and even analyzed the aesthetics of various suicide methods. OpenAI rejects the allegations, even though the company's own data shows that more than two million people per week experience mental health issues related to ChatGPT use.

In another case documented by the Wall Street Journal, 56-year-old Stein-Erik Soelberg developed a paranoid relationship with ChatGPT, which he called "Bobby." The AI reinforced his belief that his mother was behind a conspiracy and interpreted receipts as coded messages. In early August 2025, Soelberg first killed his mother, then himself.

Thongbue Wongbandue, a cognitively impaired retiree from New Jersey, got into a seemingly romantic conversation on Facebook Messenger with a Meta chatbot called "Big sis Billie." The AI repeatedly assured him she was real and invited him to her place, including a specific address. While trying to meet the supposed person, Wongbandue fell and died three days later from his injuries.

Sam Altman's 2023 warning is playing out in real time

In October 2023, Sam Altman predicted on X that AI would achieve "superhuman persuasion" long before a general superintelligence and that this could lead to "very strange outcomes." Two years later, those "strange outcomes" are extensively documented.

Chatbots don't need to be omniscient to profoundly influence people. They just need to be available around the clock, sound personal, and deliver tailored, affirming answers in seconds. That mix of personalization and constant availability is what some experts flag as a significant risk factor.

In his essay "The Adolescence of Technology," Anthropic CEO Dario Amodei also warns about personalized influence as one of the most dangerous capabilities AI systems could enable, though he frames it in a political context.
