
A proposed California bill to regulate AI models is exposing divisions among Silicon Valley's leading AI companies. OpenAI opposes the measure, while Anthropic cautiously supports it.


The controversy surrounding California's proposed AI regulation, SB 1047, is heating up. The legislation aims to regulate AI models that pose "catastrophic risks" to people and to cybersecurity, while also protecting whistleblowers who raise alarms about these dangers. However, Silicon Valley's AI giants are at odds over the bill's merits.

OpenAI, the company behind ChatGPT, argues against the bill in a letter to California Governor Gavin Newsom. The company contends that AI model regulation should be handled at the federal level because it touches on national security. OpenAI also warns the law could threaten California's leadership in AI development and drive companies out of the state.

"The broad and significant implications of AI for U.S. competitiveness and national security
require that regulation of frontier models be shaped and implemented at the federal level. A
federally-driven set of AI policies, rather than a patchwork of state laws, will foster innovation and
position the U.S. to lead the development of global standards," writes Jason Kwon, OpenAI's chief strategy officer.


Anthropic supports the law despite reservations

Anthropic, creator of the AI chatbot Claude, takes a different stance. In its own letter to Governor Newsom, the company broadly supports the bill despite some reservations. Anthropic views SB 1047 as a crucial step toward mitigating catastrophic risks from AI systems.

According to Anthropic, the bill addresses valid concerns about rapidly evolving AI capabilities. While this progress holds great economic potential for California, it also carries significant dangers. The company warns that serious AI misuse could occur within one to three years.

Anthropic praises SB 1047's requirement for AI companies to develop safety and security protocols (SSPs) and provide transparent information about them. The law also incentivizes effective SSPs by linking liability for damages to protocol quality. Additionally, SB 1047 promotes research on AI risk mitigation.

Anthropic also notes some issues with the bill, including overly prescriptive oversight and broad authority granted to regulators. Despite these concerns, the company believes the bill's benefits outweigh its drawbacks and calls on all parties to focus on key objectives: transparency in safety practices, incentives for effective safety plans, and avoiding unintended consequences.

California may introduce new rules for AI development with the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act" (SB 1047).

The law requires companies developing high-risk AI models to create safety and security protocols (SSPs). These SSPs must assess potential risks and outline mitigation measures. Companies must publish their SSPs and undergo annual third-party audits starting in 2026. Developers must also submit compliance statements and report AI safety incidents.

SB 1047 incentivizes effective SSPs by linking a company's liability for AI-caused harm to the quality of its protocols. The bill also protects whistleblowers who report risks to authorities.

The bill defines which AI models are subject to regulation. These include models trained with more than 10^26 integer or floating-point operations at a cost of more than $100 million. It also covers models created by fine-tuning such systems with at least 3×10^25 operations at a cost of more than $10 million. A new "Board of Frontier Models" will update these definitions over time.
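To make those thresholds concrete, here is a minimal sketch of how a developer might check whether a model would fall under the bill's definitions. The function and parameter names are hypothetical illustrations, not anything defined in SB 1047; only the numeric thresholds come from the bill as described above.

```python
# Hypothetical illustration of SB 1047's coverage thresholds as described
# in this article. Function and parameter names are our own, not the bill's.

BASE_COMPUTE_THRESHOLD = 1e26          # integer or floating-point operations
BASE_COST_THRESHOLD = 100_000_000      # USD training cost
FINETUNE_COMPUTE_THRESHOLD = 3e25      # operations used for fine-tuning
FINETUNE_COST_THRESHOLD = 10_000_000   # USD fine-tuning cost


def is_covered_model(training_ops: float, training_cost_usd: float) -> bool:
    """A base model is covered if it exceeds both the compute
    and the cost thresholds ("more than" in the bill's wording)."""
    return (training_ops > BASE_COMPUTE_THRESHOLD
            and training_cost_usd > BASE_COST_THRESHOLD)


def is_covered_finetune(finetune_ops: float, finetune_cost_usd: float) -> bool:
    """A fine-tune of a covered model is itself covered if it uses
    at least 3e25 operations and costs more than $10 million."""
    return (finetune_ops >= FINETUNE_COMPUTE_THRESHOLD
            and finetune_cost_usd > FINETUNE_COST_THRESHOLD)


# Example: a model trained with 2e26 operations at a cost of $150 million
print(is_covered_model(2e26, 150_000_000))   # True
print(is_covered_finetune(1e25, 12_000_000)) # False: below compute threshold
```

Note that both the compute and the cost criteria must be met, so a cheap model trained on donated compute, or an expensive model trained with little compute, would fall outside the bill's scope under this reading.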

The state also plans to create a public cloud computing cluster called "CalCompute" to promote safe and ethical AI development. The bill further spells out detailed provisions on scope, obligations, oversight, and penalties.

Ex-OpenAI staff accuse company of misleading public on AI safety

Former OpenAI employees have also weighed in, accusing their ex-employer of misleading the public. In an open letter, William Saunders and Daniel Kokotajlo claim they left OpenAI due to lost confidence in the company's commitment to safe, honest, and responsible AI development.


Saunders and Kokotajlo allege that OpenAI ignored safety protocols and silenced internal critics, despite publicly claiming otherwise, and criticize OpenAI's current opposition to regulation, pointing out that CEO Sam Altman had previously publicly supported it. They also accuse OpenAI of using scare tactics and making excuses.

In contrast, the ex-employees applaud Anthropic's nuanced stance on the bill. Unlike OpenAI, Anthropic shared specific concerns, suggested changes, and ultimately said the bill was likely to be positive overall and not too difficult to comply with, they say.

Several former OpenAI safety staffers have recently joined Anthropic, including Jan Leike, the former co-lead of OpenAI's Superalignment team. OpenAI co-founder John Schulman also joined Anthropic, saying he wanted to focus on alignment research.

Senator Wiener rejects OpenAI's criticism

Senator Scott Wiener, the bill's author, dismisses OpenAI's criticisms. He argues that the company fails to address specific provisions, instead calling for regulation to be left to Congress, which has yet to act.


Wiener rejects claims that companies will leave California as a "tired argument," noting that SB 1047 applies to all companies doing business in the state, regardless of their headquarters location.

"Bottom line: SB 1047 is a highly reasonable bill that asks large AI labs to do what they’ve already committed to doing, namely, test their large models for catastrophic safety risk," Wiener writes.

Wiener also points to support for SB 1047 from prominent national security experts. Retired Lieutenant General John Shanahan called the bill balanced and practical in addressing the most dangerous immediate risks to society and national security. Andrew Weber, a former Pentagon official, emphasized the need for strict cybersecurity measures for advanced AI systems to prevent them from falling into the wrong hands.

The bill now moves to the State Assembly for deliberation and voting. If it passes, Governor Newsom will decide whether to sign it into law or veto it.

Summary
  • California's proposed bill SB 1047 aims to regulate AI models that pose catastrophic risks. The legislation requires companies to create, publish, and audit safety protocols for high-risk AI systems. It also protects whistleblowers who report potential dangers.
  • OpenAI opposes the bill, arguing that AI regulation should be handled at the federal level. In contrast, Anthropic cautiously supports SB 1047, viewing it as a crucial step toward mitigating catastrophic AI risks, despite some reservations about its implementation.
  • Former OpenAI employees have accused the company of misleading the public and neglecting safety measures. They praise Anthropic's more nuanced approach to the bill. Prominent safety experts endorse SB 1047, describing it as a balanced and necessary measure to address immediate AI-related risks to society and national security.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.