Wired reports that OpenAI is stepping back into robotics, with new hiring pointing toward work on humanoid machines.

According to job postings, the company is assembling a team focused on training robots through teleoperation and simulation. OpenAI is also seeking engineers specializing in sensing and prototyping. The listings describe the team’s mission as building "general-purpose robots" that could help push progress toward AGI.

It’s not confirmed that the effort targets humanoids, but the signs point that way: new hires include Stanford researcher Chengshu Li, who worked on benchmarks for humanoid household robots, suggesting OpenAI’s renewed robotics push could center on humanlike systems.

OpenAI shut down its robotics work in 2020, citing a lack of training data. But the company began posting robotics roles again in January, signaling a renewed focus on physical AI after a five-year pause.

According to Bloomberg, Chinese regulators say Nvidia violated conditions of its 2020 acquisition of Mellanox.

China’s State Administration for Market Regulation (SAMR) announced Monday that the deal had been approved only on the condition that Nvidia would not discriminate against Chinese firms, and the agency now claims Nvidia failed to comply. The announcement came as US and Chinese officials were holding trade talks in Madrid. Nvidia’s stock slipped about 2 percent in premarket trading after the news.

At the same time, Beijing launched an anti-dumping investigation into US-made semiconductors from companies including Texas Instruments. The move comes against the backdrop of US restrictions on the export of Nvidia’s most advanced AI chips to China. Regulators did not say what new penalties Nvidia might face.

OpenAI chairman Bret Taylor sees strong echoes between today's AI boom and the dotcom era.

"I think there are a lot of parallels to the internet bubble," Taylor said in a conversation with The Verge. "If you look at the internet, some of the world's biggest companies like Amazon and Google came out of it. At the same time, a lot of big failures like Pets.com and Webvan happened right alongside them. Both existed together - massive winners and dramatic losses."

For Taylor, the key point is that AI will reshape the global economy in the same way the internet did, but it's also going to produce plenty of failed bets. "I think it's absolutely true both at once - that AI will transform the economy, and that we're in a bubble where a lot of people are going to lose a lot of money."

Google’s "Nano Banana" image editing model has gone viral, pushing the Gemini app to the top of the app store charts. In the US, Canada, the UK, and Germany, Google Gemini now holds the number one spot, ahead of ChatGPT at number two.

Gemini in first place in Germany on September 15, 2025 | Image: THE DECODER

According to Google, Gemini reached nearly 450 million monthly active users in July, a number that has likely grown since. In that time, the “Nano Banana” model, officially called “Gemini 2.5 Flash Image,” has been used more than 500 million times.

OpenAI plans to give Microsoft a much smaller share of its revenue going forward, according to a report from The Information.

The company has reportedly told some investors that Microsoft's cut — currently just under 20 percent — will drop to around 8 percent by 2030. That shift would let OpenAI hold on to more than $50 billion in additional revenue to cover its massive computing costs. Under the original deal, Microsoft was guaranteed 20 percent through 2030.

In return, sources told The Information that Microsoft will get one-third of the restructured OpenAI entity, with another portion going to the nonprofit side. Microsoft still will not have a board seat. The two companies are also said to be negotiating over server expenses and contract terms around the potential use of artificial general intelligence (AGI).

It's not yet clear whether the recently announced, non-binding agreement between the two companies already reflects these revenue changes.

Google DeepMind has introduced a new language model called VaultGemma, designed with a focus on privacy. It is the largest open model to date trained from scratch with differential privacy, containing 1 billion parameters.

Normally, large language models can memorize parts of their training data, including sensitive information such as names, addresses, or entire documents. Differential privacy counters this by adding calibrated random noise during training, which mathematically bounds how much any single training example can influence the model. In theory, even if VaultGemma were trained on confidential documents, those documents could not be reconstructed from it later.
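The core mechanism behind differentially private training (DP-SGD) can be illustrated in a few lines: each example's gradient is clipped to a fixed norm, and Gaussian noise scaled to that clipping bound is added before the update. Here is a minimal numpy sketch of one such step; the function and parameter names are illustrative, not Google's actual training code, which runs at a very different scale:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One differentially private gradient step (sketch):
    clip each example's gradient, sum, add Gaussian noise
    proportional to the clipping bound, then average."""
    rng = np.random.default_rng(rng)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds clip_norm,
        # so no single example can dominate the update.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Noise calibrated to the sensitivity (clip_norm) masks any
    # single example's contribution to the summed gradient.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

The clipping bounds each example's influence (its "sensitivity"), and the noise magnitude relative to that bound determines the privacy budget: more noise means stronger guarantees but, as Google's results show, a real cost in model quality.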

According to Google, early tests confirm that the model does not reproduce training data. The tradeoff is performance: its output is roughly comparable to non-private LLMs released about five years ago.

The model weights are openly available on Hugging Face and Kaggle.
