Google's new open TranslateGemma models bring translation for 55 languages to laptops and phones
TranslateGemma shows how targeted training helps Google squeeze more performance out of smaller models: the 12B version translates better than a base model twice its size and runs on a regular laptop. With the growing Gemma family, Google is staking its claim in the race for open AI models.
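For readers who want a sense of what "runs on a regular laptop" looks like in practice, here is a minimal sketch of prompting an open Gemma-family checkpoint for translation with the Hugging Face transformers library. The checkpoint ID and prompt format below are assumptions for illustration only; check Google's official release for the actual TranslateGemma model names.

```python
# Minimal sketch: local translation with an open Gemma-family checkpoint.
# The model ID is hypothetical; replace it with the real TranslateGemma
# checkpoint name from Google's release.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/translategemma-12b"  # hypothetical ID, not confirmed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory vs. float32 on supported hardware
    device_map="auto",           # requires the `accelerate` package
)

prompt = "Translate the following sentence from English to German:\nThe weather is nice today.\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)

# Decode only the generated continuation, not the echoed prompt.
generated = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(generated, skip_special_tokens=True))
```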
TII's Falcon H1R 7B matches the reasoning performance of much larger models

The Technology Innovation Institute (TII) from Abu Dhabi has released Falcon H1R 7B, a compact reasoning language model with 7 billion parameters. TII says the model matches the performance of competitors two to seven times larger across various benchmarks, though as always, benchmark scores only loosely correlate with real-world performance, especially for smaller models. Falcon H1R 7B uses a hybrid Transformer-Mamba architecture, combining standard attention layers with linear-time state-space layers, which lets it process long inputs faster than comparable pure Transformer models (a toy sketch follows below).
Falcon H1R 7B averages 49.5 percent across four benchmarks, outperforming larger models like Qwen3 32B (46.2 percent) and Nemotron H 47B Reasoning (43.5 percent). | Image: Technology Innovation Institute (TII)
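The hybrid design is easiest to see in code. The toy sketch below is not TII's implementation: the class names are invented for illustration, and the SSMBlock is a heavily simplified stand-in for a Mamba selective state-space layer. The point it demonstrates is the structural one: attention blocks mix tokens at quadratic cost in sequence length, while the recurrent scan in the state-space block runs in linear time, which is where the speed advantage on long inputs comes from.

```python
# Toy sketch of a hybrid Transformer-Mamba layer stack (illustrative only).
import torch
import torch.nn as nn

class SSMBlock(nn.Module):
    """Simplified linear-time sequence mixer (stand-in for a Mamba layer)."""
    def __init__(self, dim):
        super().__init__()
        self.in_proj = nn.Linear(dim, dim)
        self.decay = nn.Parameter(torch.full((dim,), 0.9))  # per-channel state decay
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x):                      # x: (batch, seq, dim)
        u = self.in_proj(x)
        state = torch.zeros_like(u[:, 0])
        outs = []
        for t in range(u.size(1)):             # one pass over time: O(seq) cost
            state = self.decay * state + u[:, t]
            outs.append(state)
        return self.out_proj(torch.stack(outs, dim=1))

class AttentionBlock(nn.Module):
    """Standard self-attention: O(seq^2) token mixing."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        out, _ = self.attn(x, x, x)
        return out

class HybridModel(nn.Module):
    """Interleaves attention and state-space layers, as hybrid designs do."""
    def __init__(self, dim=64, depth=4):
        super().__init__()
        self.layers = nn.ModuleList(
            AttentionBlock(dim) if i % 2 == 0 else SSMBlock(dim)
            for i in range(depth)
        )
        self.norms = nn.ModuleList(nn.LayerNorm(dim) for _ in range(depth))

    def forward(self, x):
        for norm, layer in zip(self.norms, self.layers):
            x = x + layer(norm(x))             # pre-norm residual connection
        return x

x = torch.randn(2, 16, 64)                     # (batch, seq, dim)
print(HybridModel()(x).shape)                  # torch.Size([2, 16, 64])
```

Production hybrids like Falcon's differ in the details (selective state updates, gating, layer ratios), but the interleaving pattern shown here is the core idea.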
Meta reportedly shifts focus to closed "Avocado" model

According to Bloomberg's sources, Meta is shifting its focus to a new AI model codenamed "Avocado," with a release potentially coming next spring. Avocado is expected to launch as a closed model, letting the company sell access directly. This marks a major shift from Meta's established open-model strategy. Internally, the open-source approach reportedly lost steam after the disappointing performance of Llama 4. Management is betting big on Alexandr Wang, who joined Meta following the company's deal with Scale AI.
The development process involves some surprising ingredients. According to Bloomberg, the team is training Avocado using several external models, including Google's Gemma, OpenAI's gpt-oss, and Alibaba's Qwen. Relying on a Chinese model clashes with CEO Mark Zuckerberg's previous warnings about censorship built into Chinese AI systems.