
Google DeepMind has revealed more details about its AI system AlphaChip, which speeds up and improves computer chip development. The chip layouts created by AlphaChip are already being used in Google's AI accelerators.


In a follow-up to its 2021 Nature study, Google DeepMind has shared additional information about its AI system for chip design. The system, now officially called AlphaChip, uses reinforcement learning to create optimized chip layouts quickly.

According to Google DeepMind, AlphaChip has been used to design chip layouts for the last three generations of Google's Tensor Processing Unit (TPU) AI accelerator. The system's performance has steadily improved: for the TPU v5e, AlphaChip placed 10 blocks and reduced wire length by 3.2% compared to human experts; for the current sixth generation, called Trillium, this increased to 25 blocks and a 6.2% reduction.

DeepMind says AlphaChip uses an approach similar to AlphaGo and AlphaZero: it treats chip layout as a kind of game, placing circuit components one after another on a grid. A specially developed graph neural network lets the system learn the relationships between connected components and generalize across different chips.
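The game framing can be illustrated with a minimal, hypothetical sketch — this is not AlphaChip's actual code. Blocks are placed one after another on a grid, and the quantity to minimize is the total wire length between connected blocks. AlphaChip learns its placement policy with reinforcement learning and a graph neural network; here a simple greedy rule stands in for the learned policy, and the tiny netlist is an assumed example.

```python
# Hypothetical sketch of chip placement as a sequential "game".
# An agent places blocks one at a time on a grid; the objective is the
# total wire length between connected blocks (a greedy rule stands in
# for AlphaChip's learned policy here).

from itertools import product

GRID = 8  # an 8x8 placement grid (illustrative size)

# Netlist: pairs of block indices that are wired together (assumed example).
NETLIST = [(0, 1), (1, 2), (2, 3), (0, 3)]
NUM_BLOCKS = 4


def wire_length(positions):
    """Sum of Manhattan distances over all already-placed connected pairs."""
    return sum(
        abs(positions[a][0] - positions[b][0]) + abs(positions[a][1] - positions[b][1])
        for a, b in NETLIST
        if a in positions and b in positions
    )


def greedy_place():
    """Place blocks one after another, each on the free cell that minimizes
    wire length to the blocks placed so far."""
    positions = {}
    free = set(product(range(GRID), range(GRID)))
    for block in range(NUM_BLOCKS):
        best = min(free, key=lambda cell: wire_length({**positions, block: cell}))
        positions[block] = best
        free.remove(best)
    return positions


layout = greedy_place()
print(layout, wire_length(layout))
```

A greedy policy like this gets stuck in local optima on real designs with hundreds of blocks; the point of the reinforcement-learning formulation is that the policy is trained on many chips and can trade off early placements against later ones.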


Besides Google, other companies are also adopting this approach. Chip manufacturer MediaTek has extended AlphaChip to develop its most advanced chips, including the Dimensity Flagship 5G used in Samsung smartphones.

AlphaChip is open source

Google DeepMind sees further potential to optimize the entire chip design cycle. Future versions of AlphaChip are expected to be used across the whole process, from computer architecture to manufacturing. The company hopes this will make chips even faster, cheaper, and more energy-efficient.

As part of publishing the Nature follow-up, Google DeepMind has also provided some open-source resources for AlphaChip. The researchers say they've released a software repository that can fully reproduce the methods described in the original study.

External researchers can use this repository to pre-train the system on various chip blocks and then apply it to new blocks. Google DeepMind is also providing a pre-trained model checkpoint trained on 20 TPU blocks.

However, the researchers recommend pre-training on custom, application-specific blocks for best results. They've provided a tutorial explaining how to perform pre-training using the open-source repository.


The tutorial and the pre-trained model are available on GitHub.

Summary
  • Google DeepMind has released details of its AlphaChip AI system, which speeds up and optimises the design of computer chips. The system uses reinforcement learning to produce optimised chip layouts in a short time.
  • AlphaChip has been used to design chip layouts in the last three generations of Google's Tensor Processing Unit (TPU) AI accelerator. In the current 6th generation, called Trillium, AlphaChip placed 25 blocks and reduced wire length by 6.2% compared to human experts.
  • Google DeepMind has provided open source resources on AlphaChip, including a software repository to reproduce the methods and a pre-trained model. External researchers can use these to pre-train the system on different chip blocks and apply it to new blocks.
Max is managing editor at THE DECODER. As a trained philosopher, he deals with consciousness, AI, and the question of whether machines can really think or just pretend to.