
Nvidia CEO Jensen Huang: The idea that AI will destroy software is "ridiculous"

Key Points

  • Jensen Huang explains on the Lex Fridman podcast why AI agents will use existing software instead of replacing it. Even a humanoid robot would use the microwave rather than beam microwaves from its fingers - and instantly become an expert by reading the manual online.
  • Huang sees Nvidia's OpenClaw framework as a turning point for agentic AI, calling it the "iPhone of tokens." He predicts premium token prices of up to $1,000 per million tokens, with data centers transforming into token factories.
  • Nvidia has redesigned its rack architecture accordingly: the new Vera Rubin platform consists of five specialized rack types built specifically for running AI agents rather than language model inference.

Jensen Huang explains why AI agents will use software rather than replace it. Nvidia has redesigned its entire rack architecture accordingly.

"A lot of people would say, 'You know AI is gonna completely destroy software. We don't need software anymore. We don't even need tools anymore.' That's ridiculous," Jensen Huang says on the Lex Fridman Podcast.

His counterargument is a thought experiment: even the most impressive agent we can imagine in the next ten years - a humanoid robot - would most likely just use the existing microwave rather than beam microwaves out of its fingers. The first time it walks up to the microwave, it probably doesn't know how to use it. "But that's okay. It's connected to the internet. It reads the manual of this microwave, reads it, instantly becomes an expert." With that, Huang says, he had essentially described "almost all of the properties of OpenClaw." He says he sketched the concept for such agents two years earlier on the GTC stage.

Huang compares OpenClaw's impact to ChatGPT

Huang sees OpenClaw as a turning point on par with ChatGPT. According to Huang, the framework "did for agentic systems what ChatGPT did for generative systems." He explains the breakthrough in practical terms: OpenClaw went viral "because consumers could reach it." He calls it "the iPhone of tokens" and "the fastest-growing application in history."

Behind this lies a broader economic argument. According to Huang, tokens are becoming a commodity with differentiated price tiers, from free tokens to premium tokens. The idea that someone will be willing to pay $1,000 per million tokens is "just around the corner. It's not if, it's only when," he says. In his view, data centers are transforming from warehouses for data into factories for tokens whose revenue directly correlates with token production.
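Huang's "token factory" framing reduces to simple arithmetic: revenue is token output times price per token, across price tiers. The sketch below illustrates that relationship with entirely hypothetical volumes and prices (the article only cites the $1,000-per-million premium figure; everything else is an assumption for illustration):

```python
# Toy sketch of the "token factory" economics Huang describes:
# revenue correlates directly with token production.
# All volumes and the commodity price are hypothetical.

def daily_revenue(tokens_per_day: float, price_per_million: float) -> float:
    """Revenue for a data center selling tokens at a flat price tier."""
    return tokens_per_day / 1_000_000 * price_per_million

# Hypothetical tier comparison: cheap commodity tokens vs. premium tokens
# at the $1,000-per-million price point Huang mentions.
commodity = daily_revenue(1e12, 0.50)    # 1T tokens/day at $0.50 per million
premium = daily_revenue(1e9, 1_000.0)    # 1B tokens/day at $1,000 per million
print(f"commodity: ${commodity:,.0f}/day, premium: ${premium:,.0f}/day")
```

Under these made-up numbers, a premium tier selling a thousandth of the volume out-earns the commodity tier, which is the economic logic behind differentiated token pricing.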

Nvidia's new rack architecture is built for agents, not just LLMs

For Nvidia, this conviction has real consequences. The previous Grace Blackwell racks were optimized purely for LLM inference. The new Vera Rubin platform instead consists of five specialized rack types, including dedicated Vera CPU racks for agent sandboxing, BlueField-4 storage racks for massive KV cache context, and the Groq 3 LPX rack for ultra-low-latency inference. "This entire rack system is completely different than the previous one," Huang says. The last one was designed to run MoE large language models for inference; this one is built to run agents. And agents, as Huang puts it, "bang on tools."

Source: Lex Fridman