Nvidia posted record results in the third quarter of fiscal year 2025, with revenue rising to $35.1 billion. According to CEO Jensen Huang, the company is only at the beginning of two fundamental developments that should drive growth for years to come.
"The tremendous growth in our business is being fueled by two fundamental trends that are driving global adoption of NVIDIA computing," Huang said during the earnings call. The first trend is the modernization of global IT infrastructure.
Huang sees a transformation of unprecedented scale: The world's trillion-dollar CPU-based IT infrastructure is being modernized to support machine learning and AI. "The computing stack is undergoing a reinvention, a platform shift from coding to machine learning, from executing code on CPUs to processing neural networks on GPUs."
Huang expects this transformation to take several years as companies worldwide retrofit their data centers. "The $1 trillion installed base of traditional data center infrastructure is being rebuilt for Software 2.0, which applies machine learning to produce AI."
AI factories for "digital intelligence"
The second fundamental trend, according to Huang, is the production of digital intelligence in "AI factories" that run around the clock. "The age of AI is in full steam. Generative AI is not just a new software capability but a new industry with AI factories manufacturing digital intelligence, a new industrial revolution that can create a multi-trillion-dollar AI industry."
Nvidia's Hopper and Blackwell architectures, along with platforms like Omniverse, play a key role in this development. Demand for Hopper chips is "exceptional," according to Nvidia, with H200 chip revenue more than doubling quarter-over-quarter. Blackwell is in mass production, with demand far exceeding supply.
Huang sees several reasons for the enormous demand: "There are more foundation model makers now than there were a year ago. The computing scale of pretraining and post-training continues to grow exponentially. There are more AI-native start-ups than ever, and the number of successful inference services is rising. And with the introduction of OpenAI's o1, a new scaling law called test time scaling has emerged. All of these consume a great deal of computing."
New markets emerge alongside cloud providers
Beyond major cloud providers, a new market is emerging with "Sovereign AI": Countries and regions are building independent AI infrastructures to meet regional requirements. According to Nvidia, India plans to increase its number of Nvidia GPUs tenfold by year's end. Japan is building one of the most powerful supercomputers with SoftBank, based on Nvidia's DGX Blackwell. European countries are also working on regional clouds and AI factories, Huang said.
OpenAI's o1 model shows new scaling dimension
Nvidia also benefits from new optimization techniques such as post-training and test-time scaling, which further increase demand for computing power. Test-time scaling spends additional compute at inference time to deliver smarter answers in real time. OpenAI uses this technique for its new o1 model. "It's a little bit like us doing thinking in our head before we answer your question," Huang said. "And as a result of that, the demand for our infrastructure is really great."
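The core idea behind test-time scaling can be sketched with best-of-N sampling, one common form of the technique (this is an illustrative toy, not OpenAI's actual o1 method; `generate_answer` is a hypothetical stand-in for a model call):

```python
import random

def generate_answer(question: str) -> tuple[str, float]:
    """Hypothetical stand-in for a model call: returns a candidate
    answer and a self-assessed quality score. A real system would
    sample from an LLM and score with a verifier or reward model."""
    score = random.random()
    return f"candidate (score={score:.2f})", score

def answer_with_test_time_scaling(question: str, n_samples: int) -> str:
    """Best-of-N sampling: spend more compute at inference time by
    drawing several candidates and keeping the highest-scoring one."""
    candidates = [generate_answer(question) for _ in range(n_samples)]
    best_answer, _best_score = max(candidates, key=lambda c: c[1])
    return best_answer

# Every extra sample is an extra inference pass -- which is why
# test-time scaling multiplies demand for GPU compute per query.
cheap = answer_with_test_time_scaling("What is 2+2?", n_samples=1)
expensive = answer_with_test_time_scaling("What is 2+2?", n_samples=16)
```

The point of the sketch: answer quality improves not by training a bigger model, but by burning more compute per question at serving time, a third scaling axis alongside pretraining and post-training.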
These trends create a new dimension of scalability, while the scalability of training foundation models remains intact, according to Huang: "As you know, this is an empirical law, not a fundamental physical law. But the evidence is that it continues to scale. What we're learning, however, is that it's not enough, that we've now discovered two other ways to scale." With this statement, Huang also addressed recent reports suggesting that training scaling is reaching its limits. That would particularly affect Nvidia, the undisputed market leader in AI training. In the AI inference segment, however, the company faces significantly more competition, albeit from younger players.