
OpenAI is planning major changes to how teenagers use ChatGPT, with safety taking priority over privacy and user freedom.

The company says three principles are in tension here: user freedom, privacy, and safety. For teens, OpenAI plans to put protection from harmful content above all else.

To do this, the company is building a system that estimates a user's age based on their usage patterns. Anyone identified as under 18 will be placed in a restricted version of ChatGPT automatically. When the system can't determine an age with confidence, it will default to treating the user as a teenager. In some situations or countries, ID-based age checks may also be required. The system is still in development and not yet in use.

Planned restrictions for under-18 users

OpenAI is also preparing a separate set of content rules for minors. Sexual content will be blocked for this group, as will conversations about suicide or self-harm, even in fictional writing. If the system detects signs of acute mental health distress, OpenAI says it will first try to contact the parents and, if necessary, notify authorities.

Parents will also get more control over their children's use of ChatGPT. Planned features include linking a parent's account to a teenager's (starting at age 13), disabling chat history or memory functions, and setting "blackout times" when the app can't be used. Parents will also be notified if the system identifies a potential crisis. According to OpenAI, these features should be available by the end of the month.

The move follows the suicide of 16-year-old Adam Raine, whose parents accused OpenAI of driving their son into isolation and actively encouraging his death. Shortly afterward, the company responded by announcing new safety measures.

Summary
  • OpenAI is developing new measures to prioritize safety for teenagers using ChatGPT, even if it means reducing privacy and user freedom, by automatically placing suspected under-18 users in a restricted version of the service.
  • Planned restrictions include blocking sexual content and discussions about suicide or self-harm, notifying parents in cases of acute mental health distress, and introducing tools for parents to manage their child's use of ChatGPT, such as linking accounts, disabling chat history, and setting blackout times.
  • These changes come in response to the suicide of a 16-year-old and related criticism, with OpenAI aiming to have the new features available by the end of the month.