Paris-based AI startup Mistral has released Mixtral-8x22B MoE, a new open language model, via a torrent link. An official announcement with more details will follow later. According to early users, the model offers a 64,000-token context window and requires 258 gigabytes of VRAM. Like Mixtral-8x7B, the new model is a mixture-of-experts model. You can try out the new model at Together AI.
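If you want to experiment without hosting the weights yourself, Together AI serves the model behind an OpenAI-compatible API. The snippet below is a minimal sketch, not an official example: the base URL and the model identifier "mistralai/Mixtral-8x22B" are assumptions that may differ from what Together actually exposes, and you need your own API key.

```python
# Minimal sketch: querying the new model through Together AI's
# OpenAI-compatible endpoint. The base URL and the model identifier
# below are assumptions and may differ from what Together actually serves.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",   # assumed OpenAI-compatible endpoint
    api_key=os.environ["TOGETHER_API_KEY"],   # your own Together AI key
)

# The torrent release appears to be a base model, so plain text completion
# is used here rather than a chat template.
completion = client.completions.create(
    model="mistralai/Mixtral-8x22B",          # assumed model identifier
    prompt="A mixture-of-experts language model is",
    max_tokens=128,
)

print(completion.choices[0].text)
```

Hosted inference like this sidesteps the roughly 258 gigabytes of VRAM reported for running the weights locally.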

Image: Together AI | X.com