Mistral's new Small 4 model punches above its weight with 128 expert modules
Mistral AI has released Mistral Small 4, combining fast text responses, logical reasoning, and image processing in one model. It has 119 billion parameters, but only 6 billion are active per query: the architecture comprises 128 expert modules and activates just four at a time. Users can control whether the model responds quickly or thinks more thoroughly. Mistral AI says it's 40 percent faster and handles three times as many queries per second as its predecessor.
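The "128 experts, 4 active" design is sparse mixture-of-experts routing: a small router scores every expert per token, and only the top-scoring few actually run. The toy sketch below illustrates that mechanism; the sizes, names, and router design are illustrative assumptions, not Mistral's actual implementation.

```python
import math
import random

# Toy sketch of sparse mixture-of-experts routing as described in the
# article (128 experts, 4 active per token). All dimensions and the
# router design are assumptions for illustration only.
NUM_EXPERTS = 128
TOP_K = 4
HIDDEN = 16  # toy hidden size

random.seed(0)

def rand_matrix(rows, cols):
    return [[random.gauss(0, 0.02) for _ in range(cols)] for _ in range(rows)]

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

# The router scores every expert; each expert is a toy linear layer.
router = rand_matrix(NUM_EXPERTS, HIDDEN)
experts = [rand_matrix(HIDDEN, HIDDEN) for _ in range(NUM_EXPERTS)]

def moe_forward(x):
    """Route one token vector through only the top-4 experts."""
    logits = matvec(router, x)
    top = sorted(range(NUM_EXPERTS), key=lambda i: logits[i])[-TOP_K:]
    # Softmax over the selected experts' scores gives mixing weights.
    m = max(logits[i] for i in top)
    gates = [math.exp(logits[i] - m) for i in top]
    total = sum(gates)
    gates = [g / total for g in gates]
    # Only 4 of the 128 experts compute anything for this token, which is
    # why the active parameter count is a small fraction of the total.
    out = [0.0] * HIDDEN
    for g, i in zip(gates, top):
        for j, yj in enumerate(matvec(experts[i], x)):
            out[j] += g * yj
    return out, top

token = [random.gauss(0, 1) for _ in range(HIDDEN)]
out, selected = moe_forward(token)
print(len(selected))  # 4 experts ran; the other 124 stayed idle
```

The same logic explains the headline numbers: because only a few experts run per token, the compute per query tracks the roughly 6 billion active parameters rather than the full 119 billion.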

The model ships under the Apache 2.0 license and is available on Hugging Face, the Mistral API, and Nvidia platforms. Mistral AI is also joining the Nvidia Nemotron Coalition, which promotes open AI model development. The company previously released multimodal open-source models in early December with the Mistral 3 series, including the flagship Mistral Large 3 with 675 billion parameters.