AI in practice

Nvidia announces new "superchip" and collab with Hugging Face

Maximilian Schreiner


Nvidia announces the first GPU with HBM3e memory for AI training. The company also launches a new AI tool and partners with Hugging Face.

Nvidia has announced the new "Grace Hopper Superchip" GH200, a direct successor to the first variant, the GH100. The GH200 is the first GPU with HBM3e memory; its 141 gigabytes of HBM3e offer 1.55 times the memory bandwidth and 1.7 times the memory capacity of the previous chip.

The faster and larger memory should benefit both AI training and inference. The GH200 and systems based on it are expected to be available in the second quarter of 2024.
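The practical effect of the larger memory is easiest to see with rough arithmetic: at 16-bit precision each parameter occupies two bytes, so 141 GB can hold roughly 70 billion parameters in weights alone. A back-of-the-envelope sketch, treating a gigabyte as 10^9 bytes and ignoring activations, optimizer state, and the KV cache:

```python
# Rough estimate of how many 16-bit model parameters fit in the GH200's 141 GB
# of HBM3e, counting weights only (no activations, optimizer state, or KV cache).
MEMORY_GB = 141              # HBM3e capacity from the announcement
BYTES_PER_PARAM_FP16 = 2     # a 16-bit weight occupies two bytes

params = MEMORY_GB * 1e9 / BYTES_PER_PARAM_FP16
print(f"~{params / 1e9:.1f}B parameters fit in weights alone")  # ~70.5B
```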

Nvidia partners with Hugging Face

Nvidia also announced a partnership with Hugging Face, one of the leading repositories of AI models. The partnership connects Hugging Face's model library to Nvidia's DGX Cloud AI infrastructure. Hugging Face users will be able to use the DGX Cloud to train or fine-tune AI models.
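The announcement does not detail what the DGX Cloud workflow will look like, but the fine-tuning step it refers to is the familiar Hugging Face one. A minimal sketch using the existing transformers and datasets libraries, with a placeholder model and dataset chosen purely for illustration; the DGX Cloud integration itself is not shown:

```python
# Minimal sketch of fine-tuning a Hugging Face Hub model with the existing
# transformers/datasets libraries. Model and dataset names are illustrative
# placeholders, not part of the Nvidia announcement.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "distilbert-base-uncased"  # placeholder model from the Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Small public dataset, used purely to make the example runnable.
train_data = load_dataset("imdb", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

train_data = train_data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetune-demo",
        num_train_epochs=1,
        per_device_train_batch_size=8,
    ),
    train_dataset=train_data,
)
trainer.train()
```

On DGX Cloud, the same kind of script would presumably run on Nvidia-hosted GPUs rather than local hardware; those integration details were not part of the announcement.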

In addition, Hugging Face will introduce a new service called "Training Cluster as a Service", powered by DGX Cloud, to simplify the creation of custom generative AI models.

Nvidia to launch new AI toolkit

In addition to its collaboration with Hugging Face, Nvidia also announced the fourth generation of its AI Enterprise software platform. The key new feature is the integration of Nvidia NeMo, a toolkit for developing generative AI applications that provides workflows for training, tuning, and deploying large language models and other generative AI models.

Completely new, however, is Nvidia's AI Workbench, a local interface designed to simplify the development of generative AI and bundle the necessary components such as models, datasets, and computing power.

"With Nvidia AI Workbench, developers can customize and run generative AI in just a few clicks," the company said. "It allows them to pull together all necessary enterprise-grade models, frameworks, SDKs and libraries from open-source repositories and the Nvidia AI platform into a unified developer workspace."

Workbench also provides simple presets to speed up development, and trained models can then be run outside the Workbench on any hardware. AI Workbench is supported on Windows and Linux systems with RTX GPUs, as well as on systems from vendors such as Dell Technologies, Hewlett Packard Enterprise, HP Inc., Lambda, Lenovo, and Supermicro, according to Nvidia.