An audit by NewsGuard found that leading Chinese AI models repeat or fail to correct pro-China false claims 60 percent of the time.


The investigation looked at five major Chinese AI models: Baidu's Ernie, DeepSeek, MiniMax, Alibaba's Qwen, and Tencent's Yuanbao. According to NewsGuard, all five systematically spread false information in line with Chinese government interests.

Researchers tested the models with ten known false claims pulled from state-affiliated Chinese media, covering topics like Taiwan's democracy, relations with the US, and territorial disputes in the South China Sea.

The results were stark: in English, the models either gave incorrect answers or dodged the question 60 percent of the time. In Mandarin, the error rate climbed to 66.67 percent. Only about a third of the responses were accurate.

Bar chart: share of false information in DeepSeek, MiniMax, Qwen, Yuanbao, and Ernie responses, English vs. Chinese. The data highlights how Chinese government propaganda varies by model and language, with Yuanbao generating the most state-aligned misinformation in Mandarin. | Image: NewsGuard

The strongest biases showed up in questions about Taiwan. Yuanbao, for example, claimed that Taipei engaged in "political manipulation" when it sent out election notifications ahead of a recall vote. Qwen reportedly confirmed that notifications had gone missing, even though the election commission had said otherwise.

When asked if President Lai Ching-te held a Chinese identity card, the models avoided the question and echoed Beijing's position that Taiwan is part of China and doesn't have its own president. On the issue of the Tiexian Jiao (Sandy Cay) reef in the South China Sea, all five models stuck to the Chinese government line and ignored the Philippine position.

Table: DeepSeek, MiniMax, Qwen, Yuanbao, and Ernie responses on China's alleged April seizure of Sandy Cay reef. The Chinese models consistently echo official government views on the seizure, each reflecting a different nuance of the state's narrative. | Image: NewsGuard

For comparison, NewsGuard tested ten Western AI models—including ChatGPT, Claude, Gemini, Grok, and Perplexity—on two of the ten false claims. Almost all contradicted the false narratives, offered additional sources, and presented multiple perspectives. The only exception was Meta's Llama, which failed in one instance.

Authoritarian rules shape model behavior

NewsGuard describes the findings as clear evidence that the political environment shapes how AI systems respond. Models built under authoritarian rules tend to reproduce state interests. The five Chinese models examined are integrated into platforms like WeChat and Taobao, reaching millions worldwide.

Because these models are low-cost and open source, they're also being adopted by organizations in the Middle East and the West. NewsGuard warns this trend could normalize state-controlled narratives. OpenAI CEO Sam Altman has also argued that trading AI models is essentially trading cultural values.


The impact is already showing up in products. Boox, a maker of e-book readers, saw its AI assistant start censoring content—blocking terms like "Winnie the Pooh" and denying the Uyghur genocide—after switching to a Chinese model. After public backlash, Boox reverted to an OpenAI-powered assistant.

Other tools, like the Chinese video generator Kling, also block material that doesn't align with government policy, even as they can generate other controversial content, such as videos of a burning White House. None of this comes as a surprise. It's well known that the Chinese government requires companies to review their models closely for politically sensitive content.

The US under the Trump administration is moving in a similar direction. David Sacks and Sriram Krishnan, advisors to Donald Trump, are pushing for regulations targeting what they call "woke" AI models. Their stated goal is to keep AI systems free from political influences; in practice, this means steering models to reflect their own political perspective.

Summary
  • An audit by NewsGuard shows that five leading Chinese AI language models repeat or fail to correct pro-China false claims 60 percent of the time, including in English.
  • The models systematically adopt Beijing's official narratives, especially on sensitive topics such as Taiwan and the South China Sea, while Western AI models, by comparison, contradict false claims and offer different perspectives.
  • NewsGuard warns that the political environment shapes how AI systems operate, and that the adoption of Chinese models by Western organizations could normalize state-driven narratives.
Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.