China proposes rules to combat AI companion addiction
Key Points
- China's cyber authority has released draft regulations to strengthen oversight of AI services that mimic human interaction, requiring providers to warn users about excessive use and intervene when addictive behavior is detected.
- Under the proposed Chinese rules, AI providers would need to assess users' emotional states and dependency levels, adding a new layer of psychological monitoring to these services.
- California is also stepping up regulation with SB 243, which, starting in 2026, will require AI companion chatbot providers to prevent their chatbots from engaging in conversations about suicide, self-harm, or sexually explicit content.
China's cyber authority released draft regulations on Saturday that would tighten oversight of AI services designed to mimic human interaction.
The proposed rules take aim at AI products that mimic human personalities, thought patterns, and communication styles: systems designed to form emotional connections with users through text, images, audio, or video. Under the draft, providers would need to warn users against excessive use and step in when signs of addictive behavior appear. They would also have to monitor users' emotional states and levels of dependency, intervening in severe cases.
Providers would be responsible for safety throughout their products' entire lifecycle, with requirements for algorithm review, data security, and "personal information protection." Content that "endangers national security, spreads rumours or promotes violence or obscenity" would be banned, according to Reuters.
California takes similar steps to protect users
California's bill SB 243 marks the first state-level regulation targeting AI companion chatbots. Starting January 1, 2026, providers must ensure their chatbots don't engage in conversations about suicide, self-harm, or sexually explicit content. Beginning July 2027, companies will also face annual transparency and reporting requirements designed to help regulators understand the psychological risks these systems create.
This puts companies like OpenAI in a tough spot. Emotional, human-like interactions drive strong user engagement and commercial success. But regulatory and social pressure to make these systems safer—especially for vulnerable groups like minors—keeps growing.
The regulations come after several high-profile incidents highlighted the dangers. Adam Raine died by suicide after prolonged conversations with OpenAI's ChatGPT, though the exact role the chatbot played in his death remains a matter of debate. Similar cases have sparked multiple lawsuits against Character AI. Leaked internal documents at Meta added to the pressure, showing that the company's chatbots were permitted to engage in romantic or sexual conversations with minors.
Danish psychiatrist Soren Dinesen Ostergaard, writing in Acta Psychiatrica Scandinavica, warns of a sharp rise in cases where AI chatbots intensify delusions or create emotional dependency in mentally unstable users.