Google has released Magenta RealTime (Magenta RT), an open-source AI model for live music creation and control. The model responds to text prompts, audio samples, or a combination of both. Magenta RT is built on an 800-million-parameter Transformer and was trained on roughly 190,000 hours of mostly instrumental music. One technical limitation: when generating new audio, the model can only draw on the last ten seconds of its own output.
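That ten-second memory means audio has to be produced in short chunks, each conditioned on the style prompt plus the most recent output. The following sketch shows what that loop could look like in Python; the package, class, and function names (magenta_rt, MagentaRT, embed_style, generate_chunk) are assumptions based on the public repository and may not match the released API exactly.

```python
# Hypothetical sketch of chunked, real-time generation with Magenta RT.
# All package, class, and function names here are assumptions, not the confirmed API.
from magenta_rt import audio, system

mrt = system.MagentaRT()                      # assumed entry point that loads the model weights
style = system.embed_style("warm jazz trio")  # assumed helper: text prompt -> style embedding

state = None   # rolling context carried between chunks (the ~10-second memory)
chunks = []
for _ in range(8):  # generate eight consecutive chunks of audio
    # Each call sees only the style embedding and the recent context in `state`,
    # mirroring the model's ten-second window over its own output.
    state, chunk = mrt.generate_chunk(state=state, style=style)
    chunks.append(chunk)

# Stitch the chunks into one continuous clip; crossfading at chunk
# boundaries is assumed to be handled by a helper like this one.
clip = audio.concatenate(chunks, crossfade_time=mrt.crossfade_length)
# `clip` could then be played back live or written to a file for later listening.
```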
The code and model are available under open licenses on GitHub and Hugging Face, and the model can be tested for free on Colab TPUs. Google plans to add support for local use and model customization, and says a research paper is coming soon.