A CrowdStrike study found that Chinese AI system Deepseek delivers less secure code when prompts involve politically sensitive topics, raising concerns about political bias in technical outputs.
Chinese AI platform Deepseek is facing scrutiny after a review by US security firm Crowdstrike. According to the Washington Post, the system produces less secure code when prompts involve politically sensitive topics for the Chinese government.
In tests, Crowdstrike submitted nearly identical requests in English for programming assistance, including guidance for industrial control systems. The outcomes varied sharply. About 23 percent of standard responses contained insecure or faulty code, but the rate increased to more than 42 percent when the projects were associated with the terrorist group Islamic State. Deepseek also generated weaker code when projects were linked to Tibet, Taiwan, or the banned spiritual movement Falun Gong.
The pattern extended to outright refusals. Deepseek rejected 61 percent of Islamic State requests, compared with 45 percent of Falun Gong requests. By contrast, Western AI systems also refuse terrorism-related prompts but will answer Falun Gong-related queries. The group is not banned in the West and is headquartered in the United States. Previous analyses have also shown that Deepseek often repeats official Chinese government positions on sensitive issues, even when those claims are inaccurate.
Political influence or training artifact?
Experts, including Crowdstrike, point to several possible reasons for the behavior. Deepseek could be following program rules that instruct it to exclude certain groups or to produce deliberately weaker responses. Another explanation lies in the training data, since regions like Tibet may have fewer high-quality programming sources available.
Either way, the findings suggest that political context has a direct impact on code quality.