
AI startup Groq raises $640 million to challenge Nvidia's dominance in AI chips

Image: Groq

Key Points

  • AI startup Groq has raised $640 million in a funding round and is now valued at $2.8 billion. The round was led by BlackRock, with participation from Cisco Systems and Samsung Electronics.
  • Groq develops specialized hardware for AI applications, in particular LPUs (Language Processing Units) for language models. These can process hundreds of tokens per second, even for larger models such as Meta's Llama 2 70B.
  • According to the company, Groq's LPUs are more power efficient than traditional GPUs and could be an alternative to Nvidia chips for inference. However, traditional hardware such as Nvidia chips will still be required for training AI models.

AI startup Groq has raised $640 million in its latest funding round. The company develops specialized chips for AI applications such as language models.

Groq Inc., an AI chip and software startup founded in 2016 by Jonathan Ross, has raised $640 million in its latest funding round. The round was led by BlackRock Inc. funds and supported by the investment arms of Cisco Systems Inc. and Samsung Electronics Co. The company is now valued at $2.8 billion.

Ross, who previously worked on TPU chips at Google, is developing specialized hardware at Groq for AI workloads, in particular chips that accelerate current foundation models such as language models. The company has developed so-called LPUs (Language Processing Units), which are optimized for running language models. They can reach speeds of hundreds of tokens per second, even with larger models like Meta's Llama 2 70B, which corresponds to hundreds of words per second. That speed makes the hardware attractive for real-time applications, although Nvidia's hardware now also reaches such speeds, albeit at significantly higher prices.
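To make the throughput figures concrete, here is a minimal sketch of how tokens per second translate into words per second. The 0.75 words-per-token ratio is an assumption, a common rule of thumb for English text, not a figure from the article; actual ratios vary by tokenizer and content.

```python
# Assumption (rule of thumb, not from the article): one English word is
# roughly 1.3 tokens, i.e. about 0.75 words per token on average.
WORDS_PER_TOKEN = 0.75  # varies with tokenizer and text

def tokens_per_sec_to_words_per_sec(tps: float) -> float:
    """Convert a throughput in tokens/second to approximate words/second."""
    return tps * WORDS_PER_TOKEN

# A hypothetical throughput of 300 tokens/s corresponds to ~225 words/s,
# well above typical human reading speed (~4-5 words/s).
print(tokens_per_sec_to_words_per_sec(300))  # → 225.0
```

This is why "hundreds of tokens per second" is described as "hundreds of words per second": the two scales differ only by a small constant factor.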

Groq builds chips for inference, not training

According to Groq, LPUs are more energy-efficient than conventional graphics chips (GPUs) as they reduce the overhead for managing multiple threads and avoid underutilization of cores. Groq's chip design also allows for the connection of multiple specialized cores without the traditional bottlenecks that occur in GPU clusters.


The hardware could thus offer an alternative to Nvidia's currently in-demand chips, but only for inference, i.e., running AI models. For model training, companies still need conventional hardware from Nvidia or comparable chips. Meta, for example, plans to train Llama 4 with ten times the computing power used for Llama 3.


Source: Bloomberg