
Anthropic is tightening its terms of service. Going forward, companies that are majority-controlled by entities from China, Russia, Iran, or North Korea will be barred from using Claude AI models due to national security concerns.


Anthropic will no longer provide its services to groups that are majority-owned by Chinese entities. This marks the first time a major US AI company has made such a policy change. The new rule, which takes effect immediately, also applies to other countries the US classifies as adversaries, including Russia, Iran, and North Korea.

The move expands Anthropic's existing terms, which already restrict access in certain regions for legal and security reasons. The update is designed to close a loophole that previously let companies from authoritarian states use Anthropic’s AI models.

Anthropic aims to close the loophole for authoritarian states

Anthropic says companies from restricted regions like China have continued to access its services by setting up subsidiaries in other countries. The Financial Times points to a growing number of Chinese subsidiaries in Singapore that are used to circumvent controls and acquire US technology.

Anthropic argues that firms controlled by authoritarian regimes can be compelled by law to share data and work with intelligence agencies, posing national security risks. These companies could leverage AI capabilities to develop tools for rival military and intelligence agencies, or accelerate their own AI development through techniques like model distillation.

New rules apply to subsidiaries as well

The updated policy now bans use by companies whose ownership structure puts them under the jurisdiction of regions where Anthropic’s products are not allowed. Specifically, it applies to entities directly or indirectly more than 50 percent owned by firms headquartered in unsupported regions.

Major Chinese tech firms like ByteDance, Tencent, and Alibaba could be affected, according to the Financial Times. The ban covers both direct customers and organizations that access Anthropic models via cloud services.

An Anthropic executive acknowledged the company is likely to lose some business to competitors as a result, with the impact on global revenue estimated in the low hundreds of millions of dollars. Still, Anthropic says the move is necessary to address a significant problem.

US chatbots like Claude and ChatGPT are officially blocked in China, but can still be reached via VPNs. Meanwhile, China has developed a number of strong local alternatives, including Qwen, DeepSeek, Kimi, and GLM. In practice, Anthropic's new rules may only have a real impact in China once training runs on advanced AI accelerators, such as Nvidia's export-restricted chips, significantly exceed what is possible on domestic hardware. Until then, Chinese companies are likely to keep relying on homegrown solutions.

Summary
  • Anthropic has updated its terms of service to ban companies that are majority-controlled by entities from China, Russia, Iran, or North Korea from using its Claude AI models, citing national security concerns and the risk of data sharing with foreign intelligence agencies.
  • The new policy closes a loophole that previously allowed firms from restricted countries to access Anthropic’s technology through subsidiaries in other nations, and now covers both direct users and organizations accessing models via cloud providers.
  • While the move could cost Anthropic hundreds of millions of dollars in global revenue and impact major firms like ByteDance, Tencent, and Alibaba, the company says the decision is necessary to address security risks, even as Chinese companies continue to develop their own AI alternatives.