OpenAI and Nvidia have signed a letter of intent for a strategic partnership to deliver at least 10 gigawatts of computing power for OpenAI's next-generation AI data centers.
For context, 10 gigawatts is about the combined output of ten typical nuclear reactors, since a single reactor usually generates around 1 GW. Run continuously at full capacity, that much power would supply roughly enough electricity for 8 to 9 million US households for a year. By comparison, the largest single AI campuses announced so far are in the 1 to 2 GW range.
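That household figure holds up as a back-of-the-envelope calculation. A quick sketch, assuming the EIA's commonly cited average of roughly 10,500 kWh of electricity per US household per year (the consumption figure is an assumption here, not part of the announcement):

```python
# Rough sanity check: how many average US households could
# 10 GW of continuous output supply for a year?

capacity_gw = 10                     # announced data center capacity
hours_per_year = 24 * 365            # 8,760 hours
annual_output_gwh = capacity_gw * hours_per_year   # 87,600 GWh

# Assumption: ~10,500 kWh per US household per year (EIA average)
household_kwh = 10_500

households = annual_output_gwh * 1e6 / household_kwh
print(f"~{households / 1e6:.1f} million households")   # ~8.3 million
```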
Nvidia is backing this with a planned investment of up to $100 billion, depending on how much capacity is built out. The first gigawatt is set to come online in the second half of 2026, powered by Nvidia's new Vera Rubin platform. Both companies say they'll coordinate closely on hardware and software development.
This partnership builds on OpenAI's current projects with Microsoft, Oracle, SoftBank, and Stargate partners. More details are expected in the coming weeks.
AI compute as a competitive moat
Through these moves, OpenAI is signaling that infrastructure is now its main competitive edge, especially as differences between top AI models start to shrink. The company has repeatedly said that the next big leap in AI will come from letting models "think" for much longer periods, running for hours or even days at a time.
By scaling its compute clusters several times beyond anything announced so far, OpenAI is positioning itself to unlock extended reasoning and experimentation that smaller competitors simply can't match. The result is an infrastructure moat that makes it even harder for others to catch up.
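In practice, "thinking" for hours or days points to test-time compute scaling: spending more inference on a single problem instead of answering in one pass. A minimal sketch of that pattern, where model_call, the scoring scheme, and the default budget are all hypothetical placeholders rather than anything OpenAI has described:

```python
import time

def solve_with_budget(problem, model_call, budget_seconds=3600):
    """Keep refining an answer until a wall-clock compute budget is spent.

    model_call is a hypothetical stand-in for a model invocation: it takes
    the problem and the best attempt so far, and returns a tuple of
    (new_attempt, self_assessed_score).
    """
    best_answer, best_score = None, float("-inf")
    deadline = time.monotonic() + budget_seconds

    while time.monotonic() < deadline:
        attempt, score = model_call(problem, best_answer)
        if score > best_score:          # keep only improvements
            best_answer, best_score = attempt, score

    return best_answer
```

Every extra hour in a loop like this is more GPU time spent on a single answer, which is why compute-heavy features tend to land behind paid tiers first.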
In parallel with the Nvidia announcement, OpenAI CEO Sam Altman posted on X that the company will launch several new compute-heavy products in the coming weeks. Due to the high cost, some features will roll out to Pro subscribers first, and some new offerings will include extra fees.
Altman wrote, "Our intention remains to drive the cost of intelligence down as aggressively as we can and make our services widely available, and we are confident we will get there over time. But we also want to learn what's possible when we throw a lot of compute, at today's model costs, at interesting new ideas."
Alongside the Nvidia deal, OpenAI is also working on its own AI chip. According to Reuters, the design for OpenAI's first in-house chip is nearly finished and should be handed off to TSMC in the coming months. Initial production trials will take several months, with mass production targeted for 2026 using TSMC's advanced 3 nm process.
OpenAI wants to rely less on Nvidia but doesn't plan to sell its chips to outside customers. Google has followed a similar path with its TPUs, using them as a strategic advantage for its own AI cloud. This approach lets Google reduce its dependence on Nvidia while offering cloud customers hardware that rivals can't buy.