Anthropic is backing California's SB 53, a state bill that would require large developers of advanced AI models to meet transparency and security obligations. The company argues that Washington is moving too slowly and says SB 53 could serve as a solid starting point for national rules.


"While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won’t wait for consensus in Washington."

Anthropic

The bill would require affected companies to publish security policies, disclose risk analyses, report security incidents within 15 days, share internal assessments under confidentiality, and follow clear whistleblower protection rules. Violations could result in fines. The rules target only companies running highly capable models, keeping the burden off smaller providers. Anthropic says its decision comes after reflecting on the lessons of California's failed SB 1047 effort.
