China pushes OpenClaw "one-person companies" with millions in AI agent subsidies

The AI agent hype around OpenClaw has hit China hard. At least seven local governments rolled out funding programs within days, SCMP reports. The sheer pace suggests Beijing sees AI agents built on OpenClaw and similar frameworks as a potential driver for economic growth.

Hefei's tech district in Anhui province is offering up to $1.4 million in subsidies for housing, offices, and computing power, partly to promote "one-person companies" where a single founder works with AI agents as employees. Shenzhen matched with up to $1.4 million, Wuxi with around $700,000 plus computing resources, Changshu with roughly $830,000, and Changzhou with about $700,000 plus an extra $280,000 for computing power. Nanjing is providing free office space and computing resources.

"Having AI work for [users], taking care of tasks on their behalf, offers an experience that goes beyond mere talk surrounding the technology," says Li Zhi, head of the Intelligent Institute at Analysys International. "It has tapped into a social sentiment and vision of productivity, ultimately fueling a nationwide craze that has swept up everyone, from tech geeks to ordinary users."

Source: SCMP
Hume AI open-sources TADA, a speech model five times faster than rivals with zero hallucinated words

Hume AI has open-sourced TADA, an AI system for speech generation that processes text and audio in sync. Unlike previous systems that generate significantly more audio frames per text token, TADA maps exactly one audio token to each text token. The result, according to Hume AI: TADA is over five times faster than comparable systems and produced zero transcription hallucinations—no made-up or skipped words compared to the source text—across tests with more than 1,000 samples. In human evaluations, the system scored 3.78 out of 5 for naturalness.
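As a rough illustration only (toy names, not Hume AI's actual code or API), the difference between the two decoding schemes can be sketched like this:

```python
# Toy sketch of the alignment idea, not Hume AI's implementation.
# Older interleaved decoders emit many audio frames per text token;
# TADA-style decoding pairs each text token with exactly one audio
# token, so output length grows linearly with the text.

text_tokens = ["the", "quick", "brown", "fox"]

def many_frames_per_token(tokens, k=8):
    """Hypothetical older scheme: k audio frames for every text token."""
    return [(t, [f"frame_{t}_{i}" for i in range(k)]) for t in tokens]

def one_audio_per_token(tokens):
    """TADA-style 1:1 mapping: one audio token per text token."""
    return [(t, f"audio_{t}") for t in tokens]

old_len = sum(len(frames) for _, frames in many_frames_per_token(text_tokens))
new_len = len(one_audio_per_token(text_tokens))
# With 4 text tokens, the 1:1 scheme emits 4 audio tokens instead of 32.
```

The shorter, strictly aligned output sequence is what makes the speed-up and the smartphone-sized footprint plausible.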

Hume AI says TADA is compact enough to run on smartphones, though longer texts can cause the voice to occasionally drift. The system comes in two sizes—1B and 3B parameters—both based on Llama. The smaller model supports English, while the 3B version covers seven additional languages. All code and models are available on GitHub and Hugging Face under the MIT license, and the full technical details can be found in the paper.

Ai2 releases new robotics models trained entirely in simulation to skip real-world data collection

AI research institute Ai2 has released new robotics models trained exclusively in simulation. The models are designed to work directly on real robots without any manually collected data or fine-tuning, an approach researchers call zero-shot sim-to-real transfer. This could significantly accelerate development: with conventional training, researchers typically need months of teleoperated real-world demonstrations to make simulation-trained robots reliable.

The two new open-source systems are called MolmoSpaces and MolmoBot. MolmoSpaces includes over 230,000 indoor scenes, more than 130,000 curated objects, and over 42 million physics-based robotic grasping annotations. MolmoBot builds on this foundation and can pick up and place objects, open drawers, and operate doors, all without ever seeing real training data for these tasks.

According to Ranjay Krishna, director of the PRIOR team at Ai2, the gap between simulation and reality shrinks when researchers dramatically increase the variety of simulated environments, objects, and camera conditions. All models and tools are openly available, and technical details can be found in the paper.
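The variety Krishna describes is, in spirit, large-scale domain randomization: every simulated episode draws fresh scene, object, and camera conditions. A minimal sketch of that idea (all parameters and ranges are hypothetical, not Ai2's actual configuration):

```python
import random

# Hypothetical domain-randomization sketch: each simulated training
# episode samples a new scene, object, and camera/physics parameters,
# so a policy trained only in simulation sees enough variety to
# transfer to the real world.
def sample_episode(rng):
    return {
        "scene_id": rng.randrange(230_000),    # ~230k indoor scenes
        "object_id": rng.randrange(130_000),   # ~130k curated objects
        "camera_height_m": rng.uniform(0.8, 1.6),
        "light_intensity": rng.uniform(0.3, 1.0),
        "friction": rng.uniform(0.4, 1.2),
    }

rng = random.Random(0)
episodes = [sample_episode(rng) for _ in range(1000)]
```

The point is not any single realistic scene but the breadth of the distribution the policy is exposed to.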

Source: Ai2 | Paper
Meta reportedly plans to cut up to 20 percent of its workforce as $600 billion AI bet drives need to offset costs

AI could trigger massive layoffs at Meta, but not how you'd expect. The company's main goal is offsetting soaring AI infrastructure costs, Reuters reports, while also "preparing" for efficiency gains from AI-assisted work. Managers are reportedly planning to cut up to 20 percent of the workforce, roughly 16,000 of nearly 79,000 employees. No date or final number is set. Meta spokesperson Andy Stone dismissed the report as "speculative reporting about theoretical approaches."

CEO Mark Zuckerberg is betting big on generative AI: $600 billion for AI technology, infrastructure, and workforce expansion through 2028, aggressive poaching of AI researchers, and acquisitions like Chinese startup Manus. In January, he said projects that once required large teams can now be handled by individuals.

Amazon and Block have made similar cuts recently, reportedly tied to AI. Amazon is already tightening guardrails on AI-generated code after too many errors slipped through, and while Block's mass layoffs may be partly AI-related, they're almost certainly not driven by AI alone.

Ex-Anthropic researchers launch AI startup Mirendil to tackle scientific research

Another neo-lab enters the scene, this time from Anthropic's ranks: Mirendil wants to use AI to advance research in fields like biology and materials science. Founders Behnam Neyshabur (CEO) and Harsh Mehta (CTO) left Anthropic in December and are currently negotiating a $175 million funding round at a $1 billion valuation, according to The Information. Andreessen Horowitz and Kleiner Perkins are reportedly co-leading the round, though terms haven't been finalized yet.

Neyshabur led a scientific AI reasoning team at Anthropic and previously spent more than five years at Google DeepMind. Mehta served as a Senior Research Scientist at Anthropic. The founding team also includes Shayan Salehian (previously at xAI) and Tara Rezaei (previously an intern at OpenAI).

Mirendil joins a growing wave of so-called neo-labs: specialized AI startups founded by researchers who left major AI companies. These startups zero in on specific areas like office productivity or try to find fundamentally new AI development approaches that address the weaknesses of current systems, for example through continuous learning.

Anthropic drops the surcharge for million-token context windows, making Opus 4.6 and Sonnet 4.6 far cheaper

Anthropic is making Claude's extra-large context window a lot cheaper. The Opus 4.6 and Sonnet 4.6 models now offer a context window of one million tokens at the standard price. Until now, Anthropic charged a surcharge of up to 100 percent for requests exceeding 200,000 tokens. The context window determines how much text an AI model can process in a single request.

Opus 4.6 still costs $5/$25 per million tokens (input/output), and Sonnet 4.6 runs $3/$15. But whether a prompt contains 9,000 or 900,000 tokens no longer matters for pricing. On top of that, the media limit jumps from 100 to 600 images or PDF pages per request. The new pricing applies to Claude Code (Max, Team, and Enterprise) and is available through Amazon Bedrock (except for the media limit), Google Cloud Vertex AI, and Microsoft Foundry.
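The effect of flat pricing is easy to check with back-of-the-envelope arithmetic, using the Sonnet 4.6 list prices above (a sketch, not billing code; the old surcharge of up to 100 percent applied to requests beyond 200,000 tokens):

```python
MTOK = 1_000_000
# Sonnet 4.6 list prices, $ per million tokens (input / output)
SONNET_IN, SONNET_OUT = 3.00, 15.00

def request_cost(input_tokens, output_tokens,
                 in_rate=SONNET_IN, out_rate=SONNET_OUT):
    """Flat per-token cost under the new pricing."""
    return input_tokens / MTOK * in_rate + output_tokens / MTOK * out_rate

small = request_cost(9_000, 2_000)    # short prompt:  $0.057
large = request_cost(900_000, 2_000)  # near the 1M window: $2.73
# Both are billed at the same per-token rate; under the old surcharge
# of up to 100 percent, the large request could have cost roughly
# twice as much.
```

Large-context requests still cost more in absolute terms, of course; what disappears is the per-token penalty for crossing the 200,000-token line.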

The GraphWalks BFS benchmark measures how well AI models handle logical reasoning across large amounts of text. Opus 4.6 reportedly shows almost no drop in performance even at full context length. | Image: Anthropic

According to Anthropic, both models achieve the highest accuracy among comparable models at full context length in benchmark tests. That said, the broader problem of declining precision as context windows fill up is still far from solved.

Elon Musk admits xAI "was not built right first time around," launches full restructuring

Elon Musk's AI company xAI is going through a major shake-up. Musk acknowledged on X that the company "was not built right first time around" and is now being rebuilt from the ground up. Six of the twelve co-founders have left xAI since January, most recently Guodong Zhang and Zihang Dai. Only Manuel Kroiss and Ross Nordeen have stayed on alongside Musk.

At a recent conference, Musk admitted that Grok is falling behind competitors like Google, Anthropic, and OpenAI when it comes to coding, but said the company aims to close the gap by mid-2026. To get there, xAI has hired two senior executives from the AI coding startup Cursor: Andrew Milich and Jason Ginsberg, both reporting directly to Musk. According to the Financial Times, Musk has also brought in "problem solvers" from SpaceX and Tesla to help restructure xAI.


Google explains the differences between its three Nano Banana image generation models

A new guide from Google breaks down the three Nano Banana image models and when to use each one. The cheaper Nano Banana 2 reportedly delivers 95 percent of Pro’s capabilities and can search the web for reference images on its own before generating output.