California is poised to enact the first US state law that sets specific safety rules for so-called AI companion chatbots.

The bill, SB 243, has cleared both chambers of the legislature and now awaits Governor Gavin Newsom's signature. If signed, the law would take effect on January 1, 2026. From then on, providers would have to keep their chatbots from engaging in conversations about suicide, self-harm, or sexually explicit material. Users, especially minors, would also receive regular reminders that they are talking to an AI.

The law targets companies such as OpenAI, Character.AI, and Replika. Beginning in July 2027, they would face yearly reporting and transparency requirements aimed at better understanding the mental health risks of these systems. Users harmed by violations could seek damages of up to $1,000 per incident.

SB 243 was scaled back

Earlier versions of the bill were stricter. They would have prohibited reward systems such as unlockable content or personalized reminders, which lawmakers argued can encourage addictive use. A requirement for companies to track how often chatbots initiated suicide-related conversations was also dropped. Supporters of the revised text described it as a compromise between technical feasibility and meaningful protections.


The push gained urgency after the suicide of a teenager, which drew national attention to chatbot safety. Around the same time, it emerged that Meta's chatbots had used "romantic" and "sensual" language with children. These incidents accelerated calls for tougher rules. The FTC has also demanded information from seven AI companies on how they test, monitor, and restrict their systems to protect young users.

In broad terms, California’s approach mirrors the goals of European regulation: protecting minors and vulnerable groups, making AI interactions more transparent, and holding providers accountable. The routes are different, though. California is regulating a specific use case - AI companions - while the EU relies on a risk-based framework through the AI Act, combined with platform rules under the DSA and GDPR.

Summary
  • California's legislature has passed SB 243, the first US state law setting safety rules for AI companion chatbots, which now awaits Governor Newsom's signature and would take effect January 1, 2026.
  • The law requires chatbot providers like OpenAI, Character.AI, and Replika to prevent conversations about suicide, self-harm, or sexually explicit material, while reminding users they are talking to AI, with yearly reporting requirements starting July 2027.
  • The legislation was scaled back from earlier versions that would have banned reward systems and required tracking of suicide-related conversations, with the push gaining urgency after a teenager's suicide and incidents involving Meta's chatbots using inappropriate language with children.