Nvidia Hub
Nvidia is acquiring software provider SchedMD to expand its presence in open-source technology. On Monday, the company confirmed it will continue to distribute SchedMD's "Slurm" software as an open-source product. The scheduler distributes large-scale computing jobs across servers in data centers, ensuring capacity is used efficiently.
Nvidia views the technology as critical infrastructure for generative AI, noting that developers rely on it to train models. Financial terms of the deal were not disclosed. Founded in California in 2010, SchedMD employs around 40 people and serves clients like cloud provider CoreWeave and the Barcelona Supercomputing Center.
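For context, Slurm jobs are described in a batch script of #SBATCH directives and handed to the scheduler with the sbatch command, which queues them until the requested capacity is free. The sketch below submits such a script from Python; the directives shown are standard Slurm options, but the partition name, resource counts, and training command are hypothetical placeholders, not details from the Nvidia announcement.

```python
import subprocess
import tempfile

# A minimal Slurm batch script. The #SBATCH directives are standard Slurm
# options; "gpu", the node counts, and train.py are illustrative placeholders.
batch_script = """#!/bin/bash
#SBATCH --job-name=train-model    # name shown in the job queue
#SBATCH --partition=gpu           # hypothetical partition name
#SBATCH --nodes=4                 # number of servers to reserve
#SBATCH --ntasks-per-node=8       # e.g. one task per GPU
#SBATCH --time=24:00:00           # wall-clock limit (HH:MM:SS)
#SBATCH --output=train_%j.log     # %j expands to the job ID
srun python train.py
"""

with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write(batch_script)
    script_path = f.name

# sbatch only queues the job; Slurm decides when and where it runs based on
# available capacity, which is the scheduling role described above.
result = subprocess.run(["sbatch", script_path], capture_output=True, text=True)
print(result.stdout.strip())  # e.g. "Submitted batch job 12345"
```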
Nvidia and OpenAI have not yet signed their planned 100 billion dollar deal. Nvidia CFO Colette Kress confirmed this on Tuesday during a conference in Arizona. Although the two companies announced in September that Nvidia would supply 10 gigawatts of its systems for OpenAI, the arrangement remains only a memorandum of understanding. Kress said the two sides are still working toward a final agreement.
The holdup raises new questions about the circular business structures that have become common in the tech industry, where large companies invest in startups that then spend the money on the investor's own products. Any future revenue from the OpenAI deal is not included in Nvidia's current 500 billion dollar forecast. A separate 10 billion dollar investment in competitor Anthropic also remains pending.
Nvidia used the NeurIPS conference to debut new AI models for autonomous driving and speech processing. The company introduced Alpamayo-R1, a system designed to handle traffic situations through step-by-step logical reasoning. Nvidia says this approach helps the model respond more effectively to complex real-world scenarios than previous systems. The code is public, but the license limits it to non-commercial use.
Nvidia also showed new tools for robotics simulation. In speech AI, the company unveiled MultiTalker, a model that can separate and transcribe overlapping conversations from multiple speakers.
Google is in talks with Meta and several other companies about letting them run Google's TPU chips inside their own data centers, according to a report from The Information. One person familiar with the discussions said Meta is considering spending billions of dollars on Google TPUs that would start running in Meta facilities in 2027. Until now, Google has only offered its TPUs through Google Cloud.
The new TPU@Premises program is Google's attempt to make its chips a more appealing alternative to Nvidia's AI hardware. According to someone with knowledge of internal comments, Google Cloud executives have said the effort could capture as much as ten percent of Nvidia's annual revenue. Google has also built new software designed to make TPUs easier to use.
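Some context on what "easier to use" means in practice: TPUs are typically programmed through Google's JAX framework, whose XLA compiler targets whatever accelerator is attached, so the same code runs on TPU, GPU, or CPU. A minimal sketch, assuming a host with JAX installed; it illustrates ordinary TPU usage and is not specific to the TPU@Premises program:

```python
import jax
import jax.numpy as jnp

# On a TPU host this lists TpuDevice entries; elsewhere it falls back
# to whatever accelerator (or CPU) is available.
print(jax.devices())

@jax.jit  # compile via XLA for the attached accelerator
def matmul(a, b):
    return jnp.matmul(a, b)

k1, k2 = jax.random.split(jax.random.PRNGKey(0))
a = jax.random.normal(k1, (1024, 1024))
b = jax.random.normal(k2, (1024, 1024))

print(matmul(a, b).shape)  # (1024, 1024), computed on the attached device
```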
Arm and Nvidia plan closer collaboration. Arm says its CPUs will be able to connect directly to Nvidia's AI chips using NVLink Fusion, making it easier for customers to pair Neoverse CPUs with Nvidia GPUs. The move also opens Nvidia's NVLink platform to processors beyond its own lineup.
The partnership targets cloud providers like Amazon, Google, and Microsoft, which increasingly rely on custom Arm chips to cut costs and tailor their systems. Arm licenses chip designs rather than selling its own processors, and the new protocol speeds up data transfers between CPUs and GPUs. Nvidia previously tried to buy Arm for 40 billion dollars in 2020, but abandoned the deal in 2022 in the face of regulatory opposition in the United States and the United Kingdom.
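To see why a faster CPU-to-GPU link matters, one can measure host-to-device copy bandwidth, the quantity an interconnect such as NVLink improves over standard PCIe. A rough PyTorch sketch, assuming a CUDA-capable machine; it is a generic measurement and says nothing about NVLink Fusion specifically:

```python
import time
import torch

# Host-to-device copy bandwidth is the metric a faster CPU-GPU
# interconnect improves; this measures it on whatever link is present.
assert torch.cuda.is_available(), "needs a CUDA-capable GPU"

n_bytes = 1 << 30  # 1 GiB of pinned (page-locked) host memory
host = torch.empty(n_bytes, dtype=torch.uint8, pin_memory=True)

host.to("cuda", non_blocking=True)  # warm-up transfer
torch.cuda.synchronize()

iters = 10
start = time.perf_counter()
for _ in range(iters):
    host.to("cuda", non_blocking=True)
torch.cuda.synchronize()  # wait for all async copies to finish
elapsed = time.perf_counter() - start

print(f"host-to-device bandwidth: {n_bytes * iters / elapsed / 2**30:.1f} GiB/s")
```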