Abu Dhabi's TII claims its Falcon H1R 7B reasoning model matches rivals seven times its size

The Technology Innovation Institute (TII) from Abu Dhabi has released Falcon H1R 7B, a compact reasoning language model with 7 billion parameters. TII says the model matches the performance of competitors two to seven times larger across various benchmarks, though as always, benchmark scores only loosely correlate with real-world performance, especially for smaller models. Falcon H1R 7B uses a hybrid Transformer-Mamba architecture, which lets it process data faster than comparable models.

Falcon H1R 7B averages 49.5 percent across four benchmarks, ahead of larger models like Qwen3 32B (46.2 percent) and Nemotron H 47B Reasoning (43.5 percent). | Image: Technology Innovation Institute (TII)

The model is available as a complete checkpoint and quantized version on Hugging Face, along with a demo. TII released it under the Falcon LLM license, which allows free use, reproduction, modification, distribution, and commercial use. Users must follow the Acceptable Use Policy, which TII can update at any time.

Meta is reportedly ditching open Llama models for Avocado, a closed model built for direct sales

According to Bloomberg's sources, Meta is shifting its focus to a new AI model codenamed "Avocado," with a release potentially coming next spring. Avocado is expected to launch as a closed model, letting the company sell access directly. This marks a major shift from Meta's established open-model strategy. Internally, the open-source approach reportedly lost steam after the disappointing performance of Llama 4. Management is betting big on Alexandr Wang, who joined Meta following the company's deal with Scale AI.

The development process involves some surprising ingredients. According to Bloomberg, the team is training Avocado with the help of several external models, including Google's Gemma, OpenAI's gpt-oss, and Alibaba's Qwen. Relying on a Chinese model like Qwen sits oddly with CEO Mark Zuckerberg's earlier warnings about censorship in Chinese AI models.

OpenAI releases gpt-oss-safeguard open source models for flexible AI safety

OpenAI has launched gpt-oss-safeguard, a new set of open source models built for flexible safety classification. The models come in two sizes, 120b and 20b, and are available under the Apache 2.0 license for anyone to use and modify. Unlike traditional classifiers, which need to be retrained whenever safety rules change, these models can interpret policies at inference time, according to OpenAI. This lets organizations update their rules instantly, without retraining the model.

The models are also designed to be more transparent. Developers can see how the models reach their decisions, making it easier to understand and audit how safety policies are enforced. gpt-oss-safeguard is based on OpenAI's gpt-oss open source models and is part of a larger collaboration with ROOST, an open source platform focused on building tools and infrastructure for AI safety, security, and governance.
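The core idea behind policy-as-input classification can be sketched in a few lines. This is a minimal illustration, not OpenAI's actual API: the function name and prompt layout are hypothetical, and a real deployment would send the resulting prompt to a gpt-oss-safeguard model. The point it demonstrates is that the safety policy travels with each request as plain text, so changing the rules means editing a string rather than retraining a classifier.

```python
# Hypothetical sketch of the policy-as-prompt pattern: the policy is
# supplied at inference time alongside the content to classify.
def build_safeguard_prompt(policy: str, content: str) -> str:
    """Combine the current policy and the content into one classification prompt."""
    return (
        "You are a content policy classifier.\n\n"
        f"Policy:\n{policy}\n\n"
        f"Content to evaluate:\n{content}\n\n"
        "Answer with ALLOW or FLAG and a one-line rationale."
    )

# Updating the rules is just swapping the policy text -- no retraining step:
policy_v1 = "Flag any post that shares personal phone numbers."
policy_v2 = policy_v1 + "\nAlso flag posts that share email addresses."

prompt = build_safeguard_prompt(policy_v2, "Contact me at jane@example.com")
```

In a trained-classifier setup, the rule change from `policy_v1` to `policy_v2` would require relabeling data and retraining; here it only changes the prompt sent with the next request.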