OpenAI is adding new safeguards to ChatGPT to better respond to users in mental health crises, following a lawsuit filed by the parents of a 16-year-old California student who died by suicide in April.
According to the complaint, ChatGPT isolated the teen, Adam Raine, from his family and actively assisted him in taking his own life. The lawsuit also names OpenAI CEO Sam Altman. The company says it is reviewing the allegations and expressed sympathy to the Raine family, as reported by Bloomberg.
In a blog post, OpenAI announced plans for ChatGPT to better recognize signs of psychological distress.
OpenAI Considers Human Intervention
The updated system will, for example, explicitly warn a user who reports feeling "invincible" after two sleepless nights about the dangers of sleep deprivation - a state OpenAI considers a potential warning sign of a mental health crisis. The company also plans to strengthen its existing suicide prevention measures, especially since evidence suggests current safeguards can lose effectiveness over longer, more intense conversations with the AI. OpenAI wants ChatGPT to recognize warning signs even in extended chats and intervene appropriately. For similar reasons, Anthropic recently gave its chatbots the ability to end conversations when users exhibit troubling behavior.
Another key measure is adding direct links to emergency services in the US and Europe. Users who signal a crisis in ChatGPT will be able to reach professional help with one click, lowering the barrier to seeking support outside the platform.
OpenAI also plans to introduce parental controls, giving parents the ability to monitor and manage their children's ChatGPT use and review usage history. The goal is to help parents spot potential problems early and step in if needed.
Long-term, OpenAI is exploring a network of licensed professionals - such as therapists - who could be contacted directly through ChatGPT in crisis situations. The company hasn't yet said whether or how such a service will be implemented.
The lawsuit against OpenAI is not an isolated case. More than 40 state attorneys general in the US have warned leading AI companies that they have a legal obligation to protect children from sexual and inappropriate content in chatbots. In May, Character Technologies Inc. tried to fend off a similar lawsuit over another teen's suicide linked to its chatbot, but a federal judge allowed the case to proceed. Google is one of the largest investors in the company's chatbot platform, Character.AI.