Trump’s new AI adviser David Sacks accuses Anthropic of weaponizing regulation and harming startups.
White House “AI czar” and venture capitalist David Sacks has accused AI company Anthropic of running a “sophisticated regulatory capture strategy based on fear-mongering.” In a post on X, Sacks claimed the firm was “principally responsible for the state regulatory frenzy that is damaging the startup ecosystem.”
The allegations center on Anthropic’s stance toward U.S. AI legislation. Anthropic co-founder Jack Clark called Sacks’s attack “perplexing,” according to Bloomberg. Clark said the company was “extremely lined up” with the administration “in many areas,” though it held “a slightly different view” in some. Anthropic expressed those views, he added, in a “substantive, fact-forward way.” He found it “very bizarre” that others were not doing likewise, suggesting the situation “says something larger about where we are in the country’s history more than anything else.”
Anthropic backed California’s transparency law SB53
The dispute appears to stem from Anthropic’s support for California Senate Bill 53, a landmark law imposing transparency requirements on AI developers and establishing whistleblower protections. The bill was signed by Governor Gavin Newsom at the end of September and will take effect in 2026. Bloomberg notes that Anthropic was the only major AI company to publicly support the legislation; OpenAI said only after its passage that it could “live with it.”
Clark said Anthropic backed SB53 only because federal lawmakers had failed to deliver progress at the national level. A unified federal standard, he argued, would be preferable, but “the federal government doesn’t have a track record of moving particularly quickly on large policy packages.” Anthropic has already proposed its own transparency framework as a possible model for federal legislation. On X, Clark explained that simple rules with thresholds protecting startups could benefit “the entire ecosystem.”
He added that frontier AI development would benefit from greater openness: “This is the equivalent of having a label on the side of the AI products you use — everything else, ranging from food to medicine to aircraft, has labels. Why not AI?” The goal, Clark said, was to encourage responsible innovation while avoiding the kind of “reactive, restrictive regulatory approach” that “unfortunately happened with the nuclear industry.”