Hugging Face has released Zephyr-7B, a highly optimized small language model based on Mistral 7B, an open-source model from the European start-up Mistral AI. The model was first refined using distilled supervised fine-tuning (dSFT), in which the output of a larger "teacher" model is used to train a smaller "student" model. It was then aligned with distilled direct preference optimization (dDPO), which uses AI feedback from an ensemble of teacher models as preference data, significantly reducing the training time and resources required. In benchmarks, Zephyr-7B edges out Mistral 7B and even comes close to Llama 2 with 70 billion parameters. A chat demo of the model is available on Hugging Face.
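To illustrate the idea behind (d)DPO: instead of training a separate reward model, DPO scores a preferred ("chosen") and a dispreferred ("rejected") answer by how much the policy's log-probabilities have shifted relative to a frozen reference model. The sketch below is a minimal, pure-Python version of that per-example loss, not Hugging Face's actual training code; the function name and example log-probability values are illustrative.

```python
import math

def dpo_loss(policy_logp_chosen: float, policy_logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """Per-example DPO loss: -log sigmoid(beta * implicit reward margin)."""
    # Implicit reward of each answer: log-prob shift of the policy vs. the
    # frozen reference model (log pi(y|x) - log pi_ref(y|x)).
    margin = ((policy_logp_chosen - ref_logp_chosen)
              - (policy_logp_rejected - ref_logp_rejected))
    # -log sigmoid(beta * margin); small when the policy prefers the
    # teacher-preferred answer by a wide margin.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Policy already prefers the chosen answer more than the reference does:
# the margin is positive, so the loss drops below log(2) (the zero-margin value).
print(dpo_loss(-10.0, -12.0, -11.0, -11.0))
```

In dDPO, the chosen/rejected labels come from teacher-model feedback on generated responses rather than from human annotators, which is what makes the preference data cheap to collect.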