
Mistral's new Small 4 model punches above its weight with 128 expert modules

Mistral AI has released Mistral Small 4, combining fast text responses, logical reasoning, and image processing in one model. It has 119 billion parameters, but only 6 billion are active per query: its architecture includes 128 expert modules but activates just four at a time. Users can control whether the model responds quickly or reasons more thoroughly. Mistral AI says it is 40 percent faster and handles three times more queries per second than its predecessor.
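This sparse activation pattern is known as a mixture-of-experts (MoE) design: a router scores all experts per input, but only the top-scoring few are actually run. The sketch below illustrates the idea with toy sizes; the router, expert shapes, and hidden dimension are illustrative assumptions, not Mistral's implementation — only the 128-expert / 4-active split comes from the article.

```python
import numpy as np

# Toy mixture-of-experts routing sketch (assumptions throughout,
# except the 128 total / 4 active expert counts from the article).
rng = np.random.default_rng(0)

NUM_EXPERTS = 128   # total expert modules (from the article)
TOP_K = 4           # experts activated per input (from the article)
HIDDEN = 64         # toy hidden size (assumption)

# Toy router: one linear layer scoring each expert.
router_weights = rng.standard_normal((HIDDEN, NUM_EXPERTS))

# Toy experts: each a single linear map (real experts are MLP blocks).
experts = rng.standard_normal((NUM_EXPERTS, HIDDEN, HIDDEN))

def moe_forward(x):
    """Route one input vector x through only the top-k experts."""
    logits = x @ router_weights               # one score per expert
    top_idx = np.argsort(logits)[-TOP_K:]     # indices of the 4 best experts
    # Softmax over the selected experts' scores only.
    w = np.exp(logits[top_idx] - logits[top_idx].max())
    w /= w.sum()
    # Weighted sum of the chosen experts' outputs. The other 124 experts
    # are never evaluated, which is why active parameters stay far below
    # the total parameter count.
    out = sum(wi * (x @ experts[i]) for wi, i in zip(w, top_idx))
    return out, top_idx

token = rng.standard_normal(HIDDEN)
out, used = moe_forward(token)
print(len(used), "of", NUM_EXPERTS, "experts used")
```

Because compute scales with the 4 experts that run rather than the 128 that exist, a model can hold many more total parameters than it pays for at inference time, matching the 119B-total / 6B-active split the article describes.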

Bar chart showing benchmark results for Mistral Small 4 High compared to Magistral Medium 1.2 and Magistral Small 1.2 in the LCR, AIME25, Collie, and LiveCodeBench categories.
Mistral Small 4 at a high reasoning level matches or beats the specialized Magistral models in internal benchmarks.

The model ships under the Apache 2.0 license and is available on Hugging Face, the Mistral API, and Nvidia platforms. Mistral AI is also joining the Nvidia Nemotron Coalition, which promotes open AI model development. The company previously released multimodal open-source models in early December with the Mistral 3 series, including the flagship Mistral Large 3 with 675 billion parameters.



Source: Mistral AI