Anthropic is under growing political pressure in Washington after blocking certain law enforcement uses of its AI models. The company’s policy prohibits applying its Claude models to "domestic surveillance," a stance that has frustrated the Trump administration.
According to Semafor, Anthropic recently turned down requests from contractors working with federal law enforcement agencies. In practice, the restrictions affect agencies such as the FBI, the Secret Service, and Immigration and Customs Enforcement (ICE), for which surveillance is a significant part of their work.
The rules leave room for interpretation: the policy does not precisely define "domestic surveillance." That ambiguity has fueled frustration in Washington. Two senior officials expressed concern that Anthropic's approach could be politically motivated, arguing that the vague language allows the company to apply the rule broadly or selectively.
Anthropic’s government contracts
Despite these restrictions, Anthropic maintains strong ties to the US government. Its Claude models are available through Amazon Web Services GovCloud, and in some cases they are the only advanced systems cleared for top-secret work. The company also holds a symbolic $1 contract to provide Claude access to federal agencies. In addition, Anthropic works with the Defense Department, while explicitly forbidding the use of its technology in weapons development.
Compared with Anthropic, OpenAI has taken a more flexible stance. Its guidelines ban only "unauthorized surveillance" — language that leaves room for authorized law enforcement use.