
Cerebras Systems has unveiled its third wafer-scale AI chip, the WSE-3, which is said to double the performance of its predecessor and will power an 8-exaflop supercomputer in Dallas.


Cerebras Systems has unveiled the WSE-3, the third generation of its wafer-scale AI megachip. According to the company, the chip is twice as powerful as its predecessor while consuming the same amount of power. With 4 trillion transistors and a more than 50 percent jump in transistor density, made possible by the latest chip manufacturing technology, Cerebras continues its tradition of building the world's largest single chip. The square die, 21.5 centimeters on a side, takes up nearly an entire 300-millimeter silicon wafer.

Since the first megachip, the WSE-1, launched in 2019, the transistor count has more than tripled. The latest chip, the WSE-3, is built on TSMC's 5-nanometer process, while the 2021 WSE-2 used the foundry's 7-nanometer technology.

WSE-3-based supercomputers to enable AI training on a new scale

The computer built around the new AI chip, the CS-3, is said to be capable of training a new generation of large language models ten times bigger than OpenAI's GPT-4 and Google's Gemini. Cerebras claims the CS-3 can train neural networks with up to 24 trillion parameters without the software tricks that other computers require.


Up to 2,048 systems can be combined, a configuration that could train a language model such as Llama 70B in just one day. The first CS-3-based supercomputer, Condor Galaxy 3 in Dallas, will consist of 64 CS-3s and is expected to achieve 8 exaflops of performance. Like its CS-2-based sister systems, it will be owned by Abu Dhabi's G42.

Cerebras has also entered into a partnership with Qualcomm aimed at cutting the price of AI inference by a factor of ten. The plan is to train AI models on CS-3 systems and then make them more efficient to run using methods such as pruning. The networks trained by Cerebras will then run on Qualcomm's new inference chip, the AI 100 Ultra.
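Neither company has published details of this compression pipeline, but pruning generally means zeroing out the least important weights so a trained network becomes sparse and cheaper to run at inference time. Here is a minimal sketch using PyTorch's built-in pruning utilities; the toy model and the 50 percent sparsity level are illustrative assumptions, not Cerebras' or Qualcomm's actual workflow:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Illustrative stand-in model; the real networks are orders of magnitude larger.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 1024),
)

# Magnitude (L1) pruning: zero the 50 percent of weights with the smallest
# absolute values in each linear layer (the ratio is an assumed example value).
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the zeros into the weight tensor

# Check how much of the model is now exactly zero.
total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"Overall sparsity: {zeros / total:.1%}")
```

How much of that sparsity translates into lower inference cost depends on the target hardware and runtime; the announcement positions Qualcomm's AI 100 Ultra as the chip that runs these slimmed-down models.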

Summary
  • Cerebras Systems introduces its new WSE-3 wafer-scale AI chip, which is said to be twice as powerful as its predecessor and will power an 8-exaflop supercomputer in Dallas.
  • The WSE-3 chip is manufactured using TSMC's 5-nanometer technology and can train neural network models with up to 24 trillion parameters without relying on software tricks.
  • Cerebras also announced a partnership with Qualcomm to reduce the price of AI inference tenfold by training AI models on CS-3 systems and then running them on Qualcomm's new inference chip, the AI 100 Ultra.