Spatial inferencing: Mistral 7B runs on Apple Vision Pro

Matthias Bastian

Joseph Semrai shows on X how the compact large language model Mistral 7B runs on an Apple Vision Pro. He uses a variant of the model with 4-bit quantization, which shrinks its memory footprint at the cost of some accuracy. That brings the hardware requirements down far enough for the model to run on the Vision Pro's M2 chip with its 16 GB of unified memory. A 4-bit version of Mistral 7B Instruct is available here.
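A back-of-envelope estimate (not from the source; it assumes Mistral 7B's published parameter count of roughly 7.2 billion) illustrates why 4-bit quantization is the enabling step: the weights alone drop from about 13.5 GiB at 16-bit precision to about 3.4 GiB at 4-bit, leaving room within the Vision Pro's 16 GB of shared memory for the operating system and inference overhead.

    params = 7.24e9                  # approximate Mistral 7B parameter count
    gib = 2**30                      # bytes per GiB

    fp16_size = params * 2 / gib     # 2 bytes per weight at 16-bit: ~13.5 GiB
    q4_size = params * 0.5 / gib     # 0.5 bytes per weight at 4-bit: ~3.4 GiB

    print(f"fp16 weights:  {fp16_size:.1f} GiB")
    print(f"4-bit weights: {q4_size:.1f} GiB")

Note that this counts only the weights; activations, the KV cache, and the runtime itself add further memory on top.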

Video: Joseph Semrai