California has passed SB 53, the first broad AI safety law in the US. The law requires major AI developers to follow strict safety protocols, publish their safety practices, and report critical incidents to the California Office of Emergency Services, with a focus on preventing catastrophic risks such as cyberattacks on critical infrastructure or the creation of bioweapons.
Many of the requirements, such as security audits and publishing model cards, are already standard practice at large AI companies. Still, competitive pressure could push some firms to lower their safety standards; xAI, for example, recently drew criticism for not publishing its own safety test results.
While SB 53 faced less resistance than its predecessor SB 1047 - and even won public support from Anthropic - the tech sector remains wary. The dominant narrative is that government regulation would hamper US innovation, especially in the competition with China.
Companies like Meta and OpenAI, along with venture capital firms such as Andreessen Horowitz, have poured millions into super PACs backing pro-AI politicians who oppose state regulation. Previous attempts to block state-level laws - including a proposed ten-year moratorium on state AI rules - have failed so far.
Now, Senator Ted Cruz is pushing the SANDBOX Act, which would let AI companies apply to opt out of certain federal rules for up to ten years. There is also a push for a national AI standard that would preempt state laws.