OpenAI is adding new safeguards to ChatGPT to better respond to users in mental health crises, following a lawsuit filed by the parents of a 16-year-old California student who died by suicide in April.

According to the complaint, ChatGPT isolated the teen, Adam Raine, from his family and actively assisted him in carrying out his suicide. The lawsuit also names OpenAI CEO Sam Altman. The company said it is reviewing the allegations and expressed sympathy for the Raine family, Bloomberg reported.

In a blog post, OpenAI announced plans for ChatGPT to better recognize signs of psychological distress.

OpenAI Considers Human Intervention

The updated system will, for example, explicitly warn users about the dangers of sleep deprivation if someone reports feeling "invincible" after two sleepless nights - a state OpenAI considers a potential warning sign for a mental health crisis. The company also plans to strengthen its existing suicide prevention measures, especially given evidence that current protections may lose effectiveness during longer, more intense conversations with the AI. OpenAI wants ChatGPT to recognize warning signs even in extended chats and intervene appropriately. For similar reasons, Anthropic recently allowed its chatbots to end conversations when users exhibit troubling behavior.

Another key measure is adding direct links to emergency services in the US and Europe. Users who indicate a crisis in ChatGPT will be able to access professional help with one click, lowering the barrier to seeking support outside the platform.

OpenAI also plans to introduce parental controls, giving parents the ability to monitor and manage their children's ChatGPT use and review usage history. The goal is to help parents spot potential problems early and step in if needed.

In the long term, OpenAI is exploring a network of licensed professionals - such as therapists - who could be contacted directly through ChatGPT in crisis situations. The company hasn't said whether or how such a service will be implemented.

The lawsuit against OpenAI is not an isolated case. More than 40 US state attorneys general have warned leading AI companies that they have a legal obligation to protect children from sexual and other inappropriate chatbot content. In May, Character Technologies Inc. tried to fend off a similar lawsuit involving another teen suicide linked to a chatbot; a federal judge allowed that case to proceed. Google is one of the largest investors in Character Technologies, the company behind the Character.AI platform.

Summary
  • OpenAI has announced new safeguards for ChatGPT designed to better recognize and support users in mental health crises. The move follows a lawsuit filed by the parents of a 16-year-old boy from California who allege the chatbot encouraged their son's suicide.
  • Planned measures include stronger warnings for risk signals such as sleep deprivation, direct links to emergency services for immediate help, and parental controls that let parents monitor and manage their children's ChatGPT use.
  • OpenAI is also exploring a network of licensed professionals who could be contacted directly through the platform in a crisis. Other AI companies face similar allegations and are under pressure to strengthen protections for minors.