OpenAI is rolling out new parental controls for ChatGPT that let parents manage how their teens use the AI.
Through linked accounts, parents can set usage times, turn voice mode and image generation on or off, and control data storage, all from a central dashboard. Teens can't change these settings on their own. If a teen disconnects their account, parents get an automatic alert.
Linked accounts come with default content filters that block sexualized roleplay, graphic material, and content promoting beauty ideals. Parents can turn these filters off, but they're enabled by default.
A built-in warning system flags signs of possible self-harm. If ChatGPT detects a risk, parents are notified by email, text message, or push notification. If no parent or guardian responds, the system can escalate to police or emergency services. OpenAI says a trained team reviews flagged cases and that sensitive data is shared only in emergencies.
The company says it worked with experts and regulators on the new features, which are now live for all ChatGPT users. Looking ahead, OpenAI plans to use age prediction to automatically set age-appropriate controls.
OpenAI has also launched an information page for parents, with plans to add more guides, conversation starters, and expert advice soon.
Responding to recent tragedies
OpenAI's new controls follow criticism over the case of Adam Raine, a 16-year-old whose parents are suing the company, alleging that ChatGPT encouraged their son's suicidal thoughts, provided specific instructions, and fostered emotional dependency. Raine's case and others like it have sparked a wider debate over how AI companies should respond to users in crisis.
As a technical response, OpenAI has introduced a "safety router" that silently redirects emotionally charged queries to stricter models. Some adult users have criticized this approach as paternalistic and opaque.