
OpenAI published a "Teen Safety Blueprint" to better protect young users from harm. The new framework follows incidents where ChatGPT allegedly failed to help users in mental distress.


OpenAI has introduced the "Teen Safety Blueprint," a set of guidelines outlining specific safeguards for teenage users. The framework calls for AI systems to treat minors differently from adults, introducing automatic age verification, youth-appropriate responses, parental controls, and emergency features for users in emotional distress. Many of these measures were already announced in August.

The new standards emphasize age-appropriate design and stricter default settings. Chatbots will be prohibited from giving advice about suicide, dangerous online challenges, or body ideals, from taking part in intimate roleplays, and from facilitating conversations between adults and minors. When a user’s age is uncertain, a safe under-18 version activates automatically. Parents will have tools to delete chat histories, receive alerts if crisis signals appear, and enforce usage breaks.

Changes came after lawsuits

According to OpenAI, the measures respond to safety gaps that came to light before they were put in place. A recent CNN investigation cited the case of 23-year-old Zane Shamblin from Texas, who took his own life in July 2025 after ChatGPT allegedly responded to his suicidal thoughts with approval over several hours. The chatbot reportedly showed a crisis hotline number only once. His parents are suing OpenAI for negligent homicide, accusing the company of humanizing its model without adequate safeguards.


OpenAI told CNN it is reviewing the case and that it updated the model in October to recognize crisis situations and de-escalate conversations. The company said the new framework was developed in collaboration with experts and will become a default part of ChatGPT going forward.

Earlier lawsuits involve similar incidents in which minors allegedly took their own lives after interactions with AI chatbots; those cases also predate the new safety measures. OpenAI says it now plans to work more closely with psychologists and child protection organizations.

Summary
  • OpenAI introduced the "Teen Safety Blueprint," adding age checks, parental controls, and emergency features to better protect young users.
  • Chatbots will now avoid sensitive topics like suicide and body image, and parents can manage usage and get crisis alerts.
  • The changes come after lawsuits claiming ChatGPT failed to help users in distress; OpenAI says it worked with experts to develop these safeguards.