Nvidia and OpenAI have not yet signed their planned 100 billion dollar deal. Nvidia CFO Colette Kress confirmed this on Tuesday during a conference in Arizona. Even though both companies announced plans in September to provide 10 gigawatts of Nvidia systems for OpenAI, the arrangement is still only a memorandum of understanding. Kress said the two sides are still working toward a final agreement.

The holdup raises new questions about the circular business structures that have become common in the tech industry, where large companies invest in startups that then spend the money on the investor's own products. Any future revenue from the OpenAI deal is not included in Nvidia's current 500 billion dollar forecast. A separate 10 billion dollar investment in competitor Anthropic also remains pending.

Nvidia used the NeurIPS conference to debut new AI models for autonomous driving and speech processing. The company introduced Alpamayo-R1, a system designed to handle traffic situations through step-by-step logical reasoning. Nvidia says this approach helps the model respond more effectively to complex real-world scenarios than previous systems. The code is public, but the license limits it to non-commercial use.

Nvidia also showed new tools for robotics simulation. In speech AI, the company unveiled MultiTalker, a model that can separate and transcribe overlapping conversations from multiple speakers.

Google is in talks with Meta and several other companies about letting them run Google's TPU chips inside their own data centers, according to a report from The Information. One person familiar with the discussions said Meta is considering spending billions of dollars on Google TPUs that would start running in Meta facilities in 2027. Until now, Google has only offered its TPUs through Google Cloud.

The new TPU@Premises program is Google's attempt to make its chips a more appealing alternative to Nvidia's AI hardware. According to a person familiar with internal remarks, Google Cloud executives have said the effort could help the company capture as much as ten percent of Nvidia's annual revenue. Google has also built new software designed to make TPUs easier to use.

Arm and Nvidia plan closer collaboration. Arm says its CPUs will be able to connect directly to Nvidia's AI chips using NVLink Fusion, making it easier for customers to pair Neoverse CPUs with Nvidia GPUs. The move also opens Nvidia's NVLink platform to processors beyond its own lineup.

The partnership targets cloud providers like Amazon, Google, and Microsoft, which increasingly rely on custom Arm chips to cut costs and tailor their systems. Arm licenses chip designs rather than selling its own processors, and the new protocol speeds up data transfers between CPUs and GPUs. Nvidia previously tried to buy Arm in 2020 for 40 billion dollars, but regulators in the United States and the United Kingdom blocked the deal.

According to the Wall Street Journal, Amazon, Microsoft, and AI startup Anthropic are backing a US law that would further restrict Nvidia's chip exports to China. The proposed Gain AI Act would require semiconductor companies to satisfy US demand before shipping chips to countries under arms embargoes. The law would give tech giants like Amazon and Microsoft priority access to chips.

Nvidia opposes the plan, warning it would create unnecessary market interference. Some government officials question whether the law is even needed, pointing out that the Commerce Department already has the authority to enforce export controls. Meta and Google haven't commented on the proposal. The Gain AI Act could be attached to the defense budget as an amendment.

Nvidia turns to synthetic data to tackle robotics’ biggest challenge: the lack of training data.

"We call this the big data gap in robotics," a Nvidia researcher said at the Physical AI and Robotics Day during GTC Washington. While large language models train on trillions of internet tokens, robot models like Nvidia’s GR00T have access to only a few million hours of teleoperation data, gathered through complex manual effort - and most of it is narrowly task-specific.

Nvidia’s answer is to rethink what it calls the "data pyramid for robotics." At the top sits real-world data, which is small in quantity and expensive to collect. In the middle lies synthetic data from simulation, which is theoretically limitless. At the base is unstructured web data. "When synthetic data surpasses the web-scale data, that's when robots can truly learn to become generalized for every task," the team said. With Cosmos and Isaac Sim, Nvidia aims to turn robotics’ data shortage into a compute challenge instead.
