
Google unveils Gemma 3 270M, its most compact model designed for efficient, task-specific AI use

Image: Google

Key Points

  • Google has introduced Gemma 3 270M, a compact AI model designed for resource-efficient applications in specific use cases. It is ideal for structured tasks such as sentiment analysis, entity recognition, and compliance checks.
  • Internal tests show that Gemma 3 270M is the most energy-efficient model in the Gemma range. Its small size allows fast fine-tuning and fully local operation, which is useful when working with sensitive data.
  • Gemma 3 270M is available as an Instruct and a Pretrained version on platforms such as Hugging Face, Ollama, and Docker. It supports various tools for inference and training, and Google offers comprehensive guides for fine-tuning.

Google has released Gemma 3 270M, a new addition to its Gemma 3 family designed for resource-efficient use in narrowly defined applications.

The model packs 270 million parameters, making it the most compact Gemma 3 variant to date. Google says Gemma 3 270M is aimed at developers who need a model that can be quickly fine-tuned and deployed for structured, task-specific scenarios. Rather than handling complex, open-ended conversations, it targets clear instructions and specialized tasks.

Gemma 3 270M uses 170 million parameters for embeddings, thanks to a large vocabulary of 256,000 tokens, and 100 million for its transformer blocks. According to Google, the expanded vocabulary improves coverage of rare and domain-specific tokens, making the model a strong foundation for fine-tuning in specific languages or subject areas.
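The parameter split can be checked with quick arithmetic. The hidden-dimension estimate below is inferred from the stated figures and assumes an untied embedding table; Google has not confirmed it:

```python
vocab_size = 256_000              # vocabulary size stated by Google
embedding_params = 170_000_000    # parameters in the embedding table
transformer_params = 100_000_000  # parameters in the transformer blocks

# An untied embedding table holds vocab_size x hidden_dim weights,
# so the stated figures imply a hidden dimension of roughly 664.
hidden_dim_estimate = embedding_params / vocab_size

total_params = embedding_params + transformer_params
print(round(hidden_dim_estimate), total_params)  # 664 270000000
```

The embedding table thus accounts for well over half of the model's weights, which is typical for very small models paired with a large vocabulary.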

Efficient performance in tight spaces

Google highlights Gemma 3 270M's strengths in high-volume, well-defined workloads like sentiment analysis, entity recognition, query routing, and compliance checks. Despite its compact size, the model can also handle creative tasks, such as generating simple stories.
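For structured workloads like sentiment analysis, small instruct models are typically driven with a tightly constrained prompt that asks for a single label. The sketch below builds such a prompt using the turn markers from Gemma's published chat template; whether the 270M release uses the identical template is an assumption here, and the actual model call is omitted:

```python
def build_sentiment_prompt(text: str) -> str:
    """Wrap a one-word classification instruction in Gemma-style chat turn markers."""
    instruction = (
        "Classify the sentiment of the following review as exactly one word: "
        "positive, negative, or neutral.\n\n"
        f"Review: {text}"
    )
    # Gemma chat turns are delimited with <start_of_turn>/<end_of_turn>;
    # the trailing model turn cues the model to produce the label.
    return (
        f"<start_of_turn>user\n{instruction}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_sentiment_prompt("The battery life is fantastic.")
```

Constraining the output to a fixed label set is what makes such high-volume tasks easy to route, log, and validate downstream.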

Because the model is small, developers can fine-tune it in a matter of hours instead of days. Gemma 3 270M can also run entirely on local hardware, which is useful for sensitive data. Google's "Bedtime Story" demo app, for example, runs completely in the browser.

In internal tests using a Pixel 9 Pro SoC, the INT4-quantized version of the model used just 0.75 percent of the battery after 25 conversations. According to Google, this makes Gemma 3 270M the most energy-efficient model in the Gemma lineup.
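Google's battery figure works out to a tiny per-conversation cost:

```python
battery_used_pct = 0.75  # battery drained on a Pixel 9 Pro SoC (INT4-quantized model)
conversations = 25

# Average drain per conversation, per Google's reported numbers.
per_conversation_pct = battery_used_pct / conversations
print(per_conversation_pct)  # 0.03 percent per conversation
```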

Gemma 3 270M is available in two versions: an Instruct model tuned to follow instructions, and a Pretrained base model. Downloads are available on Hugging Face, Ollama, Kaggle, LM Studio, and Docker.

You can try the model on Vertex AI or with popular inference tools like llama.cpp, Gemma.cpp, LiteRT, Keras, and MLX. For fine-tuning, Google provides guides covering tools including Hugging Face, Unsloth, and JAX.

Source: Google Blog