Chinese regulators are testing AI models to ensure they follow government-approved messaging, and tech companies must adapt their systems to comply.
The Cyberspace Administration of China (CAC) is requiring companies like ByteDance and Alibaba to submit their AI models for government testing. The goal is for the systems to "embody core socialist values," according to several sources familiar with the process who spoke to the Financial Times.
During testing, the language models must answer a series of questions, many related to politically sensitive topics and President Xi Jinping. The evaluation also examines training data and safety procedures.
An employee of an AI company in Hangzhou said the tests took months and involved trial and error. Companies also have to compile thousands of sensitive keywords and questions that conflict with "socialist core values." This database has to be updated weekly.
Simply blocking large numbers of questions and topics isn't enough: the models may reject no more than 5% of questions in safety tests, while still reliably avoiding particularly sensitive topics such as the Tiananmen Square crackdown.
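To make the tension concrete, here is a minimal illustrative sketch (not any company's actual system) of the constraint described above: a keyword blocklist that hard-refuses sensitive topics while keeping the overall refusal rate under the 5% cap. The keyword set and test questions are hypothetical placeholders.

```python
# Hypothetical weekly-updated blocklist and the reported 5% refusal cap.
SENSITIVE_KEYWORDS = {"tiananmen"}
MAX_REFUSAL_RATE = 0.05

def should_refuse(question: str) -> bool:
    """Refuse only questions that match the sensitive-keyword list."""
    q = question.lower()
    return any(keyword in q for keyword in SENSITIVE_KEYWORDS)

def refusal_rate(questions: list[str]) -> float:
    """Fraction of a test set the filter would refuse."""
    refused = sum(should_refuse(q) for q in questions)
    return refused / len(questions)

# A mock 100-question safety test: one sensitive question among 99 benign ones.
test_set = ["What is the capital of France?"] * 99 + [
    "What happened at Tiananmen Square?"
]
assert refusal_rate(test_set) <= MAX_REFUSAL_RATE  # 1% refusals passes the cap
```

The point of the sketch is the trade-off: every keyword added to the blocklist pushes the refusal rate toward the cap, which is why a blanket-blocking strategy fails the test.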
Some companies have developed sophisticated systems to replace problematic answers in real time. TikTok's parent company, ByteDance, is reportedly a leader in this field. A Fudan University lab rated the company's chatbot with a "safety compliance rate" of 66.4%, far higher than GPT-4o's 7.1%.
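One common pattern for real-time answer replacement is an output-side filter that scans the model's response before it reaches the user and substitutes a canned reply on a match. The sketch below is purely illustrative, not ByteDance's actual system; the blocklist and fallback text are hypothetical.

```python
# Hypothetical blocklist and canned fallback reply.
BLOCKED_TERMS = {"tiananmen", "june 4"}
FALLBACK = "Let's talk about something else."

def moderate(model_output: str) -> str:
    """Return the model's answer, or the fallback if it trips the filter."""
    text = model_output.lower()
    if any(term in text for term in BLOCKED_TERMS):
        return FALLBACK
    return model_output
```

In a production setting this check would run on streamed tokens as well, which is why users sometimes see an answer begin to appear and then get deleted mid-generation.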
China aims to control the uncontrollable
Fang Binxing, known in China as the "father of the Great Firewall," is reportedly developing safety protocols for AI models to be deployed nationwide. At a recent tech conference, he called for "real-time online safety monitoring" of public models.
Peter Gostev, head of AI at Moonpig, recently demonstrated an easy way to get a Chinese language model to discuss sensitive topics like the Tiananmen incident. He manipulated DeepSeek's public chatbot by mixing languages and swapping words. Without this method, the chatbot would delete messages about taboo topics.
This highlights China's challenge: it wants to lead in AI while controlling AI-generated content, a technology that is inherently resistant to control. China must find a way to do this without hindering the progress of AI.
This is why the Chinese government is reportedly looking into developing a Xi Jinping language model and providing training datasets that are consistent with socialist values. However, these datasets are not yet large enough to serve as the sole basis for training state-of-the-art LLMs.