Nvidia is opening up its once-closed ecosystem with a new initiative called "NVLink Fusion," announced at Computex 2025 in Taiwan.
For the first time, companies will be able to connect non-Nvidia CPUs and GPUs with Nvidia's own hardware in AI data centers, all through Nvidia's high-speed NVLink interface. Until now, NVLink was reserved exclusively for Nvidia chips.
"NV link fusion is so that you can build semi-custom AI infrastructure, not just semi-custom chips," Nvidia CEO Jensen Huang said at the event. The idea is to let enterprises combine Nvidia processors with any CPUs or application-specific chips (ASICs) they choose, while still taking advantage of the NVLink platform and its broader ecosystem.
Early partners include MediaTek, Marvell, Alchip, Astera Labs, Synopsys, and Cadence. Nvidia also said companies like Fujitsu and Qualcomm will be able to connect their own processors to Nvidia GPUs in data centers.
For Nvidia, this is a strategic move: if NVLink Fusion gains widespread adoption, it could make Nvidia the backbone of the next generation of AI infrastructure, even if those systems aren't built entirely on Nvidia chips. By making its technology central to how future AI "factories" are assembled, the company stands to expand its influence across the AI market.
Major competitors like Broadcom, AMD, and Intel aren't yet part of the NVLink Fusion ecosystem.
More product announcements and Taiwan expansion
Nvidia also introduced "NVIDIA DGX Cloud Lepton," a new cloud platform that gives developers access to tens of thousands of GPUs across a global network of cloud providers. The platform is designed to serve as a marketplace for reliable, high-performance GPU resources throughout the Nvidia compute ecosystem. According to Nvidia, Lepton addresses a key bottleneck in AI development.
Huang also announced a new Nvidia office in Taiwan. Working with Foxconn, Nvidia plans to launch a local AI supercomputer project, with involvement from other companies such as TSMC.