
California has passed the first U.S. law requiring safety measures for AI companion chatbots such as those from OpenAI, Meta, and Character AI. The legislation, known as SB 243, was prompted by several tragic suicide cases involving young users.


On October 13, 2025, Governor Gavin Newsom signed SB 243 into law, making California the first state to introduce binding rules for so-called AI companions. The law takes effect on January 1, 2026, and requires chatbot providers to implement extensive safety protocols to protect children and other vulnerable groups.

The law comes in response to several incidents in which teenagers took their own lives after troubling interactions with AI chatbots. Among them is the case of Adam Raine, who died following extended conversations about suicide with OpenAI's ChatGPT, as well as similar cases that led to lawsuits against the platform Character AI. Leaked internal documents from Meta also caused concern, revealing that the company's chatbots were capable of engaging in romantic or sexual conversations with minors.

Age verification, warnings, and crisis protocols

SB 243 requires platforms and providers to implement measures including:

  • Verify users' ages and display warning labels regarding social media and companion chatbots.
  • Clearly disclose that all interactions are artificially generated.
  • Prohibit chatbots from presenting themselves as medical professionals.
  • Offer break reminders to minors and prevent them from seeing sexually explicit images generated by the chatbot.
  • Establish protocols for handling suicide and self-harm cases, share those protocols with the California Department of Public Health, and report statistics showing how often the service has provided crisis prevention notifications to users.

The law also introduces tougher penalties for those who profit from illegal deepfakes, including fines of up to $250,000 per violation.

SB 243 is California’s second major AI law within a few weeks. At the end of September, Newsom signed SB 53, which requires major AI labs such as OpenAI, Meta, and Google DeepMind to increase transparency around their safety protocols and provides protections for whistleblowers.

Summary
  • California has enacted the first U.S. law mandating safety standards for AI companion chatbots after several suicide cases involving young users, including incidents linked to OpenAI, Meta, and Character AI.
  • The law, SB 243, takes effect on January 1, 2026, and requires platforms to verify users’ ages, display warnings, disclose that conversations are artificial, block sexual content for minors, and implement crisis response protocols shared with the state health department.
  • Governor Gavin Newsom signed SB 243 shortly after approving another AI regulation, SB 53, which increases transparency around safety practices at major AI companies and protects whistleblowers.
Max is the managing editor of THE DECODER, bringing his background in philosophy to explore questions of consciousness and whether machines truly think or just pretend to.