Mini-LLM Zephyr-7B keeps pace with 70-billion-parameter models
Hugging Face has released Zephyr-7B, a highly optimized small language model built on Mistral 7B, the open-source model from European start-up Mistral AI. Zephyr-7B was refined with Distilled Supervised Fine-Tuning (dSFT), which trains a smaller "student" model on the outputs of a larger "teacher" model. A second step, Distilled Direct Preference Optimization (dDPO), uses AI feedback from a set of teacher models as preference data, significantly reducing the training time and resources required.

In benchmarks, Zephyr-7B edges just ahead of Mistral 7B and even comes close to Llama 2 with its 70 billion parameters. A chat demo of the model is available for testing.
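To give a sense of what the dDPO step optimizes, here is a minimal sketch of the Direct Preference Optimization (DPO) loss in PyTorch. In dDPO, the "chosen" and "rejected" completions come from teacher-model feedback rather than human raters. The tensor values, batch size, and beta setting below are illustrative assumptions, not Zephyr-7B's actual training code.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss.

    Each argument is a tensor of summed log-probabilities that the
    student (policy) or a frozen reference model assigns to the
    teacher-preferred ("chosen") and dispreferred ("rejected")
    completions of the same prompts.
    """
    # Log-ratio of policy to reference for each completion.
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # DPO pushes the margin between chosen and rejected log-ratios apart.
    logits = beta * (chosen_logratio - rejected_logratio)
    return -F.logsigmoid(logits).mean()

# Toy example: made-up log-probabilities for a batch of two prompts.
policy_chosen = torch.tensor([-12.0, -9.5])
policy_rejected = torch.tensor([-14.0, -11.0])
ref_chosen = torch.tensor([-13.0, -10.0])
ref_rejected = torch.tensor([-13.5, -10.5])

print(dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected))
```

Because the loss needs only log-probabilities from the student and a frozen reference model, no reward model has to be trained or sampled from, which is where the savings in time and compute come from.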