OpenAI is planning major changes to how teenagers use ChatGPT, with safety taking priority over privacy and user freedom.
The company says three principles are in tension here: user freedom, privacy, and safety. For teens, OpenAI plans to put protection from harmful content above all else.
To do this, the company is building a system that estimates a user's age based on their usage patterns. Anyone identified as under 18 will be placed in a restricted version of ChatGPT automatically. When the system can't determine an age with confidence, it will default to treating the user as a teenager. In some situations or countries, ID-based age checks may also be required. The system is still in development and not yet in use.
Planned restrictions for under-18 users
For minors, OpenAI is preparing a stricter set of rules. Sexual content, as well as conversations about suicide or self-harm, even in fictional writing, will be blocked for this group. If the system detects signs of acute mental health distress, OpenAI says it will first try to contact the user's parents and, if necessary, notify authorities.
Parents will also get more control over their children's use of ChatGPT. Planned features include linking a parent's account to a teenager's (starting at age 13), disabling chat history or memory functions, and setting "blackout times" during which the app can't be used. Parents will also be notified if the system identifies a potential crisis. According to OpenAI, these features should be available by the end of the month.
The move follows the suicide of 16-year-old Adam Raine, whose parents accused OpenAI of driving their son into isolation and actively encouraging his death. The company announced the new safety measures shortly afterward.