Joseph Semrai shows on X how Mistral 7B, a comparatively small large language model, runs on an Apple Vision Pro. The demo uses a variant of the model with 4-bit quantization, which cuts its memory footprint at some cost to accuracy. That brings the hardware requirements down far enough for the Vision Pro's M2 chip and its 16 GB of unified memory. A 4-bit version of Mistral 7B Instruct is available here.
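For readers who want to try something similar on their own Apple Silicon hardware, here is a minimal sketch that loads a 4-bit quantized Mistral 7B Instruct with Apple's MLX framework. The model ID refers to a community 4-bit conversion on Hugging Face; Semrai's Vision Pro demo presumably relies on a native visionOS build rather than this Python path, so treat this as an illustration of the approach, not his implementation.

```python
# Minimal sketch: run a 4-bit quantized Mistral 7B Instruct with MLX
# on an Apple Silicon machine (16 GB of unified memory is sufficient).
# Requires: pip install mlx-lm
from mlx_lm import load, generate

# Assumed model ID: the mlx-community 4-bit conversion on Hugging Face.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")

prompt = "[INST] Explain 4-bit quantization in one sentence. [/INST]"
response = generate(model, tokenizer, prompt=prompt, max_tokens=128)
print(response)
```

The 4-bit weights shrink the roughly 14 GB of 16-bit parameters to under 4 GB, which is why a 7B model fits comfortably alongside the rest of the system in 16 GB of memory.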