Joseph Semrai shows on X how the small Mistral 7B language model runs on an Apple Vision Pro. He uses a variant of the model with 4-bit quantization, which reduces the model's memory requirements at the cost of some accuracy. That reduction is enough to let it run on the Vision Pro's M2 chip with 16 GB of unified memory. A 4-bit version of Mistral 7B Instruct is available here.
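To see why 4-bit quantization makes the difference on a 16 GB device, a quick back-of-envelope calculation helps. The sketch below only counts weight storage (the parameter count and the exclusion of KV cache and activations are simplifying assumptions, not figures from the demo):

```python
# Rough memory estimate for a ~7B-parameter model at different precisions.
# Weights only -- KV cache, activations, and OS overhead come on top.
PARAMS = 7.3e9  # Mistral 7B has roughly 7.3 billion parameters

def weight_memory_gb(params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in gigabytes."""
    return params * bits_per_weight / 8 / 1e9

fp16 = weight_memory_gb(PARAMS, 16)  # ~14.6 GB: barely fits in 16 GB, leaving no headroom
q4 = weight_memory_gb(PARAMS, 4)     # ~3.7 GB: leaves room for the OS and inference buffers

print(f"fp16: {fp16:.1f} GB, 4-bit: {q4:.1f} GB")
```

At 16-bit precision the weights alone nearly exhaust the Vision Pro's memory; at 4 bits they take about a quarter of it, which is what makes on-device inference plausible.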


Video: Joseph Semrai

Join our community
Join the DECODER community on Discord, Reddit or Twitter - we can't wait to meet you.
Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.