
Arcee AI spent half its venture capital to build an open reasoning model that rivals Claude Opus in agent tasks

US start-up Arcee AI spent roughly half its total venture capital to train Trinity-Large-Thinking, an open reasoning model with 400 billion parameters designed to take on Claude Opus in agent tasks.

Meta plans to open-source parts of its new AI models

Meta is planning to release versions of its new AI models as open source, according to Axios. These would be the first models developed under the leadership of Alexandr Wang, who joined Meta in 2025 as part of a nearly $15 billion deal with Scale AI.

Unlike its approach with the Llama models, though, Meta plans to keep some components proprietary and review safety risks before releasing anything. The largest models won't be made publicly available either.

According to the report, Wang sees Meta as a counterweight to Anthropic and OpenAI, which focus more heavily on government and enterprise customers. Meta's strategy instead centers on consumer reach through WhatsApp, Facebook, and Instagram. Axios's sources say Meta already knows the new models won't match the competition in every area.

Hume AI open-sources TADA, a speech model five times faster than rivals with zero hallucinated words

Hume AI has open-sourced TADA, an AI system for speech generation that processes text and audio in sync. Unlike previous systems, which generate significantly more audio frames per text token, TADA maps exactly one audio signal to each text token. The result, according to Hume AI: TADA ran over five times faster than comparable systems and produced zero transcription hallucinations—no made-up or skipped words compared to the source text—across tests with more than 1,000 samples. In human evaluations, the system scored 3.78 out of 5 for naturalness.
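To see why a one-to-one mapping shortens autoregressive generation, consider a rough back-of-the-envelope sketch. The 5-frames-per-token ratio for conventional systems below is an illustrative assumption, not a figure reported by Hume AI:

```python
def decode_steps(text_tokens: int, frames_per_token: int) -> int:
    """Total autoregressive decode steps if each text token
    yields `frames_per_token` audio frames."""
    return text_tokens * frames_per_token

text_tokens = 200  # e.g., a short paragraph of input text

# TADA-style: exactly one audio signal per text token.
tada_steps = decode_steps(text_tokens, frames_per_token=1)

# Conventional system: several audio frames per text token
# (5 is an assumed ratio for illustration only).
conventional_steps = decode_steps(text_tokens, frames_per_token=5)

print(tada_steps)          # 200
print(conventional_steps)  # 1000
```

Since each decode step costs roughly the same, halving or quintupling the step count translates almost directly into generation speed, which is consistent with the reported five-fold speedup.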

Hume AI says TADA is compact enough to run on smartphones, though longer texts can cause the voice to occasionally drift. The system comes in two sizes—1B and 3B parameters—both based on Llama. The smaller model supports English, while the 3B version covers seven additional languages. All code and models are available on GitHub and Hugging Face under the MIT license, and the full technical details can be found in the paper.

Hallucinated references are passing peer review at top AI conferences and a new open tool wants to fix that

Fake citations are slipping past peer review at top AI conferences, and commercial LLMs can’t spot the fakes they generate. A new open-source tool called CiteAudit allegedly catches what GPT, Gemini, and Claude miss.

OpenAI offers open-source maintainers six months of free ChatGPT Pro and Codex access

OpenAI is launching a new support program for open-source developers. Core maintainers of public software projects can apply for six months of free access to ChatGPT Pro with Codex, API credits, and Codex Security. Access to Codex Security, a new AI tool for code security checks, will be reviewed case by case and granted selectively because of GPT-5.4's capabilities, according to OpenAI.

Developers who prefer other programming tools like OpenCode, Cline, or OpenClaw can also apply. Projects that don't meet all the criteria but play an important role in the broader software ecosystem are also welcome to apply. The program builds on OpenAI's existing Codex Open Source Fund, which the company has backed with one million dollars.

Google's new open TranslateGemma models bring translation for 55 languages to laptops and phones

TranslateGemma shows how targeted training helps Google squeeze more performance out of smaller models: the 12B version translates better than a base model twice its size and runs on a regular laptop. With the growing Gemma family, Google is staking its claim in the race for open AI models.

Abu Dhabi's TII claims its Falcon H1R 7B reasoning model matches rivals seven times its size

The Technology Innovation Institute (TII) from Abu Dhabi has released Falcon H1R 7B, a compact reasoning language model with 7 billion parameters. TII says the model matches the performance of competitors two to seven times larger across various benchmarks, though as always, benchmark scores only loosely correlate with real-world performance, especially for smaller models. Falcon H1R 7B uses a hybrid Transformer-Mamba architecture, which lets it process data faster than comparable models.

Falcon H1R 7B scores 49.5 percent across four benchmarks, outperforming larger models like Qwen3 32B (46.2 percent) and Nemotron H 47B Reasoning (43.5 percent). | Image: Technology Innovation Institute (TII)

The model is available as a complete checkpoint and quantized version on Hugging Face, along with a demo. TII released it under the Falcon LLM license, which allows free use, reproduction, modification, distribution, and commercial use. Users must follow the Acceptable Use Policy, which TII can update at any time.