China's tech ministry quietly urges companies to ditch Nvidia AI chips and buy local
Key Points
- China is urging domestic companies to buy AI chips from local suppliers such as Huawei and Cambricon instead of Nvidia when developing AI. Regulators have issued corresponding informal guidance.
- The aim is to reduce dependence on US technology and prepare for possible further restrictions. However, some companies are ignoring the instructions for now and are stocking up on Nvidia chips.
- Huawei is currently launching a new AI chip in China, the Ascend 910C. But Nvidia has already unveiled its next big leap, the Blackwell architecture.
Beijing is pushing Chinese companies to develop AI using locally made chips instead of relying on US-made Nvidia products.
Several regulatory bodies, including the Ministry of Industry and Information Technology, have issued unofficial "window guidance" in recent months, according to Bloomberg sources. These non-binding directives aim to reduce Nvidia chip usage and promote domestic suppliers like Huawei and Cambricon.
The informal guidance is aimed at boosting the market share of Chinese AI chipmakers while preparing for possible further US restrictions. However, Beijing is keen to avoid hampering AI start-ups or escalating tensions with the US.
Some companies are ignoring the unofficial advice and continuing to buy Nvidia chips, stockpiling ahead of expected US sanctions later this year. To appease officials, they're also buying domestic Huawei chips in parallel.
Nvidia is already waiting in the wings with Blackwell
Huawei is currently showcasing its new Ascend 910C chip, an upgrade to the 910B model, which reportedly matches Nvidia's A100 in performance. Due to US restrictions, Nvidia can only sell the less capable H20 chip in China.
However, Huawei and other Chinese firms face an uphill battle. Nvidia recently unveiled its Blackwell architecture at GTC 2024, designed for large language models with trillions of parameters.
Compared to the H100, Nvidia claims four times higher training performance for Blackwell, up to 25 times better energy efficiency, and up to 30 times faster inference. In the MLPerf Inference v4.1 benchmark with Llama 2 70B, Blackwell delivers up to four times more performance per GPU than the H100, partly due to the new FP4 precision.
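To see why lower precision matters, a rough back-of-the-envelope sketch (illustrative only, not from the article or any benchmark): inference on large models is often limited by memory bandwidth, so the bytes of weights streamed per token set a throughput ceiling. Moving from 16-bit to 4-bit weights quarters that traffic. The parameter count below is Llama 2 70B's; the format list is a generic assumption.

```python
# Illustrative sketch: weight memory footprint of a 70B-parameter model
# at different numeric precisions. Fewer bits per weight means less data
# streamed from memory per token, which raises the throughput ceiling
# for bandwidth-bound inference.

PARAMS = 70e9  # Llama 2 70B parameter count

# Bits per weight for common formats; FP4 is the 4-bit floating-point
# format Nvidia introduced with Blackwell.
formats = {"FP16": 16, "FP8": 8, "FP4": 4}

for name, bits in formats.items():
    gigabytes = PARAMS * bits / 8 / 1e9
    print(f"{name}: ~{gigabytes:.0f} GB of weights per forward pass")

# Output:
# FP16: ~140 GB of weights per forward pass
# FP8: ~70 GB of weights per forward pass
# FP4: ~35 GB of weights per forward pass
```

This bandwidth argument is only part of the picture; the reported MLPerf gains also reflect architectural changes, so the sketch shows the direction of the effect, not its exact size.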