California has passed the first U.S. law requiring safety measures for AI companion chatbots such as those from OpenAI, Meta, and Character AI. The legislation, known as SB 243, was prompted by several tragic suicide cases involving young users.
Governor Gavin Newsom signed SB 243 into law on October 13, 2025, making California the first state to impose binding rules on so-called AI companions. The law takes effect on January 1, 2026, and requires chatbot providers to implement extensive safety protocols to protect children and other vulnerable users.
The law responds to several incidents in which teenagers took their own lives after troubling interactions with AI chatbots. Among them is the case of Adam Raine, who died after prolonged conversations about suicide with OpenAI's ChatGPT, as well as similar cases that led to lawsuits against the platform Character AI. Leaked internal documents from Meta added to the concern, revealing that the company's chatbots were capable of engaging in romantic or sexual conversations with minors.
Age verification, warnings, and crisis protocols
SB 243 requires platforms and providers to implement measures including:
- Age verification and warning labels for social media and companion chatbots.
- Clear disclosure that all interactions are artificially generated.
- A prohibition on chatbots presenting themselves as medical professionals.
- Break reminders for minors, and safeguards preventing them from viewing sexually explicit images generated by the chatbot.
- Protocols for handling suicide and self-harm cases, which must be shared with the California Department of Public Health along with statistics on how often the service has shown crisis-prevention notifications to users.
- Tougher penalties for those who profit from illegal deepfakes, including fines of up to $250,000 per violation.
SB 243 is California’s second major AI law within a few weeks. At the end of September, Newsom signed SB 53, which requires major AI labs such as OpenAI, Meta, and Google DeepMind to increase transparency around their safety protocols and provides protections for whistleblowers.