OpenAI will automatically restrict ChatGPT access for users identified as teenagers

Image: Sora, prompted by THE DECODER

Key Points

  • OpenAI is developing measures that put teenagers' safety on ChatGPT ahead of privacy and user freedom, automatically placing suspected under-18 users in a restricted version of the service.
  • Planned restrictions include blocking sexual content and discussions about suicide or self-harm, notifying parents in cases of acute mental health distress, and introducing tools for parents to manage their child's use of ChatGPT, such as linking accounts, disabling chat history, and setting blackout times.
  • These changes come in response to the suicide of a 16-year-old and related criticism, with OpenAI aiming to have the new features available by the end of the month.

OpenAI is planning major changes to how teenagers use ChatGPT, with safety taking priority over privacy and user freedom.

The company says three principles are in tension here: user freedom, privacy, and safety. For teens, OpenAI plans to put protection from harmful content above all else.

To do this, the company is building a system that estimates a user's age based on their usage patterns. Anyone identified as under 18 will be placed in a restricted version of ChatGPT automatically. When the system can't determine an age with confidence, it will default to treating the user as a teenager. In some situations or countries, ID-based age checks may also be required. The system is still in development and not yet in use.

Planned restrictions for under-18 users

OpenAI is also preparing a different set of rules for minors. Sexual content, as well as conversations about suicide or self-harm (even in fictional writing), will be blocked for this group. If the system detects signs of acute mental health distress, OpenAI says it will first try to contact the parents and, if necessary, notify authorities.

Parents will also get more control over their children's use of ChatGPT. Planned features include linking a parent's account to a teenager's (starting at age 13), disabling chat history or memory functions, and setting "blackout times" when the app can't be used. Parents will also be notified if the system identifies a potential crisis. According to OpenAI, these features should be available by the end of the month.

The move follows the suicide of 16-year-old Adam Raine, whose parents accused OpenAI of driving their son into isolation and actively encouraging his death. Shortly afterward, the company announced the new safety measures.

Source: OpenAI 1 | 2