Google releases Magenta RealTime, an open source AI model for live music creation
Google has released Magenta RealTime (Magenta RT), an open-source AI model for live music creation and control. The model responds to text prompts, audio samples, or a combination of both. Magenta RT is built on an 800-million-parameter Transformer and was trained on roughly 190,000 hours of mostly instrumental music. One technical limitation: the model can only attend to the last ten seconds of generated audio as context.
The code and model are available under open licenses on GitHub and Hugging Face, and users can test the model for free on Colab TPUs. Google plans to add support for local use and customization, and to publish a research paper soon.
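To make the real-time workflow more concrete, the following is a minimal sketch of the chunk-by-chunk generation loop described above: a style embedding derived from a text prompt steers the model, while a rolling state carries forward only the most recent stretch of audio (roughly ten seconds). The package and function names used here (magenta_rt, MagentaRT, embed_style, generate_chunk, crossfade_length) are assumptions based on the project's public repository, not a verified API.

```python
# Hypothetical sketch of streaming generation with Magenta RT.
# Package and function names are assumptions and may differ
# from the actual magenta_rt API.
from magenta_rt import audio, system

mrt = system.MagentaRT()

# Condition generation on a text prompt; an audio clip could be
# embedded the same way and blended with the text style.
style = mrt.embed_style("mellow lo-fi piano")

# Generate audio in short chunks; the model only sees the most
# recent ~10 seconds of output, carried forward via `state`.
state = None
chunks = []
for _ in range(8):  # a handful of chunks, a few seconds each
    state, chunk = mrt.generate_chunk(state=state, style=style)
    chunks.append(chunk)

# Stitch the chunks into one continuous clip with crossfades.
result = audio.concatenate(chunks, crossfade_time=mrt.crossfade_length)
```

Because each chunk is conditioned on the previous state and the chosen style, the prompt can be swapped mid-stream, which is what enables live control during a performance.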