Microsoft-Tsinghua team trains 7B coding model that beats 14B rivals using only synthetic data
Researchers show that an AI model trained on synthetic programming tasks alone can beat larger competitors. A key finding: task variety matters more than the number of solutions.
Baidu's new AI model Ernie 5.0, which processes text, images, audio, and video in a unified architecture, is now officially available. According to the LMArena ranking from January 15, 2026, Ernie-5.0-0110 scored 1,460 points, placing 8th globally and 1st among all Chinese models. That puts it on par with OpenAI's slightly older GPT-5.1 (High) and ahead of both Google's Gemini 2.5 Pro and Anthropic's Claude Sonnet 4.5. The next best Chinese model is GLM-4.7 from Zhipu AI. In the math category, Ernie 5.0 came in second worldwide, trailing only GPT-5.2 (High).
The LMArena ranking is based on large numbers of anonymous pairwise comparisons in which users pick the better of two model answers.
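To give a sense of how leaderboard scores emerge from those votes, here is a minimal Elo-style sketch in Python. It is an illustration only: LMArena's actual pipeline is built on a Bradley-Terry statistical model, and the starting rating of 1,000 and the K-factor below are arbitrary assumptions.

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Predicted probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Shift both ratings toward the observed vote outcome."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    return r_a + k * (s_a - e_a), r_b - k * (s_a - e_a)

# Each tuple is one anonymous pairwise vote: (winner, loser).
ratings = {"model_a": 1000.0, "model_b": 1000.0}
for winner, loser in [("model_a", "model_b"), ("model_a", "model_b"), ("model_b", "model_a")]:
    ratings[winner], ratings[loser] = update(ratings[winner], ratings[loser], a_won=True)
print(ratings)  # model_a ends slightly ahead after winning 2 of 3 votes
```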
Under the hood, the model uses a mixture-of-experts architecture with around 2.4 trillion parameters - but less than 3 percent of those are active for any given query. For now, the model is only available at ernie.baidu.com. Unlike previous releases, Baidu hasn't published any weights yet, and there's no technical report or detailed documentation available. The company's most recent open release was Ernie-4.5-VL-28B-A3B-Thinking, a model that can manipulate images during its reasoning process - for example, zooming in on text to read it more clearly.
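Baidu hasn't shared implementation details for Ernie 5.0, but the general mechanism behind that kind of sparse activation is well established: a learned router sends each token to a small subset of expert networks, so most parameters stay idle on any given query. Below is a minimal top-k mixture-of-experts sketch in PyTorch; the dimensions, expert count, and k are illustrative, not Ernie 5.0's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Minimal top-k mixture-of-experts layer: a learned router picks k of
    num_experts feed-forward networks per token, so only a small fraction of
    the layer's parameters runs for any given input. All sizes here are
    illustrative, not Ernie 5.0's actual configuration."""

    def __init__(self, dim: int = 64, num_experts: int = 32, k: int = 2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, dim)
        weights, idx = self.router(x).topk(self.k, dim=-1)  # top-k experts per token
        weights = F.softmax(weights, dim=-1)                # normalize the k gate weights
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e in idx[:, slot].unique().tolist():        # run each chosen expert once
                mask = idx[:, slot] == e
                out[mask] += weights[mask, slot].unsqueeze(-1) * self.experts[e](x[mask])
        return out

layer = TopKMoE()
print(layer(torch.randn(8, 64)).shape)  # torch.Size([8, 64]); 2 of 32 experts ran per token
```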
Ollama, the popular software for running AI models locally, now supports image generation on macOS. The feature is still experimental, with Windows and Linux support coming later. Two models are available at launch. Z-Image Turbo from Alibaba's Tongyi Lab is a 6-billion-parameter model that creates photorealistic images and can render text in both English and Chinese. Flux 2 Klein, recently released by Black Forest Labs, is the German company's fastest image model yet and comes in 4B and 9B variants.
Terminals with inline image support, such as Ghostty or iTerm2, display previews of the generated images directly.
The 4B version of Flux 2 Klein runs on standard graphics cards with at least 13 GB VRAM, such as an Nvidia RTX 3090 or 4070. The smaller version is available for commercial use, while the larger version is restricted to non-commercial applications. Generated images are saved directly to the current directory, and users can tweak image size, step count, and seed values. Additional models and image editing features are planned.
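Ollama's announcement doesn't document the exact options, so as a rough illustration of those three knobs, here is how image size, step count, and seed look when running a Flux-family model through Hugging Face's diffusers library instead. The checkpoint ID below is a placeholder, and reusing the FLUX.1 pipeline class for Flux 2 Klein is an assumption.

```python
import torch
from diffusers import FluxPipeline

# Sketch of the tunable parameters mentioned above (size, steps, seed) using
# diffusers. The checkpoint ID is hypothetical; check the official release
# for the real repository name and recommended settings.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.2-klein-4b",  # placeholder model ID
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

image = pipe(
    prompt="a lighthouse at dusk, photorealistic",
    width=1024, height=768,                              # image size
    num_inference_steps=4,                               # step count
    generator=torch.Generator("cuda").manual_seed(42),   # seed for reproducibility
).images[0]
image.save("lighthouse.png")  # written to the current directory
```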
Google's new open TranslateGemma models bring translation for 55 languages to laptops and phones
TranslateGemma shows how targeted training helps Google squeeze more performance out of smaller models: the 12B version translates better than a base model twice its size and runs on a regular laptop. With the growing Gemma family, Google is staking its claim in the race for open AI models.
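Google typically publishes Gemma-family weights on Hugging Face, so running TranslateGemma locally would presumably follow the standard transformers workflow. Here is a minimal sketch; the checkpoint name and prompt format are assumptions and should be checked against the official model card.

```python
from transformers import pipeline

# Sketch of local translation with a Gemma-family model via transformers.
# The model ID and prompt template below are assumptions, not confirmed
# details of the TranslateGemma release.
generator = pipeline(
    "text-generation",
    model="google/translategemma-12b-it",  # hypothetical checkpoint name
    device_map="auto",
)

prompt = "Translate the following English sentence to German:\nThe weather is nice today."
result = generator(prompt, max_new_tokens=64, do_sample=False)
print(result[0]["generated_text"])
```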