Google has released Gemma 3 270M, a new addition to its Gemma 3 family designed for resource-efficient use in narrowly defined applications.
The model packs 270 million parameters, making it the most compact Gemma 3 variant to date. Google says Gemma 3 270M is aimed at developers who need a model that can be quickly fine-tuned and deployed for structured, task-specific scenarios. Rather than handling complex, open-ended conversations, it is designed to follow clear instructions and perform specialized tasks.
Gemma 3 270M uses 170 million parameters for embeddings, thanks to a large vocabulary of 256,000 tokens, and 100 million for its transformer blocks. According to Google, the expanded vocabulary improves coverage of rare and domain-specific tokens, making the model a strong foundation for fine-tuning in specific languages or subject areas.
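As a back-of-the-envelope check of that split, embedding parameters scale as vocabulary size times hidden dimension. The sketch below assumes a hidden width of 640, which is not stated in the article; with that assumption, the embedding count lands close to Google's ~170 million figure, and the transformer share is simply the remainder.

```python
# Illustrative arithmetic for the parameter split described above.
# HIDDEN_DIM = 640 is an assumption for illustration, not a figure
# from the article; the published split is ~170M embedding / ~100M transformer.
VOCAB_SIZE = 256_000   # tokens in the Gemma 3 vocabulary
HIDDEN_DIM = 640       # assumed embedding width
TOTAL_PARAMS = 270_000_000

embedding_params = VOCAB_SIZE * HIDDEN_DIM          # 163,840,000 (~164M)
transformer_params = TOTAL_PARAMS - 170_000_000     # remainder per the article

print(f"Embedding parameters: ~{embedding_params / 1e6:.0f}M")
print(f"Transformer parameters: ~{transformer_params / 1e6:.0f}M")
```

With these assumptions, nearly two-thirds of the model's capacity sits in the embedding table, which is why the large vocabulary dominates the parameter budget.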
Efficient performance in tight spaces
Google highlights Gemma 3 270M's strengths in high-volume, well-defined workloads like sentiment analysis, entity recognition, query routing, and compliance checks. Despite its compact size, the model can also handle creative tasks, such as generating simple stories.
Because the model is small, developers can fine-tune it in a matter of hours instead of days. Gemma 3 270M can also run entirely on local hardware, which is useful for sensitive data. For example, Google's "Bedtime Story" demo app runs completely in the browser.
In internal tests using a Pixel 9 Pro SoC, the INT4-quantized version of the model used just 0.75 percent of the battery after 25 conversations. According to Google, this makes Gemma 3 270M the most energy-efficient model in the Gemma lineup.
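The energy efficiency goes hand in hand with the model's small on-device footprint. As a rough sketch (weight storage only, ignoring quantization scales, zero-points, and activation memory), here is what 270 million weights occupy at FP16 versus INT4:

```python
# Rough weight-storage footprint of a 270M-parameter model at two precisions.
# Ignores quantization metadata (scales, zero-points) and runtime activations,
# so real on-device usage will be somewhat higher.
PARAMS = 270_000_000

def weight_bytes(params: int, bits_per_weight: int) -> int:
    """Bytes needed to store `params` weights at the given precision."""
    return params * bits_per_weight // 8

fp16_mb = weight_bytes(PARAMS, 16) / 1e6  # ~540 MB
int4_mb = weight_bytes(PARAMS, 4) / 1e6   # ~135 MB

print(f"FP16: ~{fp16_mb:.0f} MB, INT4: ~{int4_mb:.0f} MB")
```

Cutting the weights from 16 bits to 4 shrinks storage by 4x, which is what makes running the quantized model on a phone SoC practical in the first place.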
Gemma 3 270M is available in two versions: an Instruct model trained to follow instructions, and a Pretrained model. Downloads are available on Hugging Face, Ollama, Kaggle, LM Studio, and Docker.
You can try the model on Vertex AI or with popular inference tools like llama.cpp, Gemma.cpp, LiteRT, Keras, and MLX. For fine-tuning, Google provides support for tools including Hugging Face, Unsloth, and JAX.