Nvidia CEO Jensen Huang says he'd be "deeply alarmed" if a $500K developer spent less than $250K on AI tokens

Nvidia CEO Jensen Huang believes that if a developer earns $500,000 a year, their token budget should be at least half that amount. On the All-In podcast at Nvidia's GTC conference, Huang laid out a "thought experiment": if a developer or AI researcher earned $500,000 a year and had only used $5,000 in tokens by year's end, he would "go ape." If their token budget wasn't at least $250,000, he'd be "deeply alarmed."
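Huang's threshold is simple arithmetic: half the salary as a minimum token budget. As a back-of-envelope sketch, the snippet below also converts that budget into a yearly token count, assuming an illustrative price of $10 per million tokens, a figure chosen for the example and not taken from the article:

```python
# Huang's thought experiment: a $500K developer should spend
# at least half their salary on AI tokens.
salary = 500_000
min_token_budget = salary * 0.5  # Huang's threshold: $250,000

# Assumption for illustration only: $10 per million tokens.
price_per_million_tokens = 10.0  # USD, hypothetical
tokens_per_year = min_token_budget / price_per_million_tokens * 1_000_000

print(f"Minimum token budget: ${min_token_budget:,.0f}")
print(f"Tokens per year at the assumed price: {tokens_per_year:,.0f}")
```

At that assumed price, the $250,000 floor corresponds to roughly 25 billion tokens a year; a different per-token price scales the count proportionally.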

To Huang, it's no different "than one of our chip designers who says, guess what, I'm just going to use paper and pencil. I don't think I'm going to need any CAD tools." The statement has at least as much meme potential as Huang's legendary "The more you buy, the more you save" line from GTC 2018.

On the AI industry's revenue potential, Huang says Anthropic CEO Dario Amodei is "very conservative" with his forecast of hundreds of billions in AI usage revenue by 2027/28 and a trillion dollars by 2030. His reasoning: every enterprise software company will eventually act as a "value-added reseller" of tokens from Anthropic or OpenAI, dramatically expanding the market.

Beijing approves Nvidia's H200 chip sales as the company builds a China-ready version of its Groq inference chip

Nvidia has received long-awaited approval from Beijing to sell its second-most-powerful AI chip, the H200, to Chinese customers, Reuters reports. The company had halted production of the chip last year due to regulatory hurdles on both sides of the Pacific.

GTC 2026: With Groq 3 LPX, Nvidia adds dedicated inference hardware to its platform for the first time

At GTC 2026, Nvidia expanded the Vera Rubin platform it introduced at CES with custom CPU racks, dedicated inference chips, a new storage architecture, an inference operating system, open model alliances, and agent security software.

Nvidia steps into the open-source AI gap that OpenAI, Meta, and Anthropic left behind

An SEC filing reveals that Nvidia plans to spend $26 billion on open-weight AI models over the next five years. The move doubles as a strategic response to the growing dominance of Chinese open-source models – and a way to keep developers locked into Nvidia’s hardware ecosystem.

Nvidia and Mira Murati's Thinking Machines Lab announce long-term AI partnership

Nvidia and Thinking Machines Lab, the AI startup founded by former OpenAI executive Mira Murati, are entering a long-term partnership. Thinking Machines will receive at least one gigawatt of compute power through Nvidia's new Vera Rubin systems to train its own AI models. Deployment is set to begin early next year.

Nvidia has also taken a financial stake in Thinking Machines, though the exact amount wasn't disclosed. The startup had already raised around $2 billion in a seed round led by Andreessen Horowitz, at a valuation of $12 billion; Nvidia was an investor in that round as well. Most recently, Thinking Machines is reportedly seeking another funding round. The startup has also seen some departures – co-founders Barret Zoph and Luke Metz returned to OpenAI.

Together, the two companies plan to develop training and deployment systems for Nvidia hardware and make frontier AI models available to businesses and researchers. Murati left OpenAI in 2024 and co-founded Thinking Machines Lab.

Meta signs multi-billion dollar deal to rent Google's TPUs in a direct challenge to Nvidia's AI chip dominance

Meta has signed a multi-year, multi-billion dollar contract with Google to rent its AI chips—Tensor Processing Units (TPUs)—for developing new AI models. That's according to The Information. Meta is also looking into buying TPUs outright for its own data centers starting next year.

The deal takes direct aim at Nvidia, which dominates the AI chip market and has been Meta's go-to GPU supplier for AI training. Just days earlier, Meta had announced plans to buy millions of GPUs from Nvidia and AMD. Internally, Google Cloud executives have set a goal of capturing up to ten percent of Nvidia's roughly $200 billion in annual revenue through TPU sales. Google has also launched a joint venture with an investment firm to lease TPUs to other customers.

Here's where it gets complicated: Google itself is one of Nvidia's biggest customers, since cloud customers still expect access to GPU servers. So Google has to keep buying Nvidia's latest chips to stay competitive in the cloud market, while simultaneously trying to eat into Nvidia's market share with its own silicon. OpenAI reportedly managed to negotiate 30 percent lower prices from Nvidia simply because TPUs exist as an alternative.