Nvidia's Double Beat Saves The AI Trade
“There’s been a lot of talk about an AI bubble. From our vantage point, we see something very different.”
Over the past few days, the AI narrative started to wobble. Stocks softened, sentiment cooled, and even shiny new partnerships and fresh datacenter ribbon-cuttings couldn’t shake the market’s mounting malaise.
Then NVIDIA reported earnings and politely suggested we stop being dramatic.
When the company that supplies ~90% of the world’s advanced AI chips reports results, it doubles as a wellness check for the entire sector. And for now, the pulse looks strong, caffeinated, and entirely convinced of its own destiny.
NVIDIA reported $57B in quarterly revenue, up 62% YoY and about $10B sequentially. That was comfortably above consensus (~$54.9B) and well ahead of the ~55% growth expectation. Net income soared ~65% to ~$31.9 billion.
Forward guidance for Q4: around $65 billion in revenue. Analysts had expected less, but analysts have been expecting less for seven straight quarters now.
As always with Nvidia, the earnings call was even more revealing than the results. Jensen and the team shared a lot:
(1) On Talks of a Bubble
Everyone’s looking for froth in the AI cycle, but Nvidia’s numbers paint a different picture: demand remains structural, not speculative.
CFO Colette Kress put it plainly:
“Demand for AI infrastructure continues to exceed our expectations. The clouds are sold out, and our GPU installed base… is fully utilized.”
Nvidia now has visibility into $500B in Blackwell and Rubin revenue through 2026 - and management expects that number to grow, citing new sovereign and enterprise buildouts. There are additional orders not yet booked, including the Saudi/KSA agreement for 400K-600K GPUs and Anthropic's commitment of up to 1GW of compute capacity.
When every available GPU is already spoken for and customers are queuing up for the next generation, it’s hard to call that a bubble.
(2) On the Rise of CapEx
If there’s a single chart that defines the AI era, it’s hyperscaler CapEx.
Nvidia cited a $600B run rate for 2026, up $200B from forecasts at the start of this year. Jensen Huang offered a framework that cuts through the hysteria:
AI CapEx isn’t one amorphous blob of hype - it reflects three distinct economic motives:
Cost Saving: Accelerated computing is now the cheapest way to drive down cost per unit of compute after a decade of Moore’s Law stagnation. GPUs are replacing CPUs in a post-Moore world.
Revenue Boosting: Generative AI is replacing classical ML in recommender and ad systems. Meta’s new GenAI ad models lifted conversions +5% on Instagram and +3% on Facebook Feed - billions in incremental ad revenue, no new users required.
Net-New Markets: Agentic AI - the fastest-growing class of applications in history - is spawning new categories, from code copilots to digital biology labs.
The takeaway: this is the most economically grounded compute cycle in decades.
(3) On Full Stack
Nvidia is no longer a component supplier; it’s an end-to-end compute economy.
Kress explained:
“Our networking business… generated revenue of $8.2 billion, up 162% year over year… We are winning in data center networking as the majority of AI deployments now include our switches.”
Compute used to be about chips. Now it’s about systems: GPUs, interconnects, compilers, and orchestration. Nvidia owns them all - silicon, networking, software, and even financing.
With CUDA, TensorRT, NVLink, and Spectrum-X, it’s effectively a vertically integrated monopoly on AI infrastructure - and that integration is compounding.
(4) On Blackwell and Rubin
The GB300 now accounts for 2/3 of all Blackwell revenue - surpassing the GB200 within months of launch. The ramp has been “seamless” across hyperscalers, cloud GPU providers, and sovereign builds.
Meanwhile, Rubin, arriving in 2026, is Nvidia’s next act: a 7-chip rack-scale platform, 10× better performance per watt, full backward compatibility with Grace Blackwell, and a supply chain and manufacturing base already ramping.
Jensen flexed:
“Nvidia’s architecture is the singular platform in the world that runs every AI model. We run OpenAI, we run Anthropic, we run xAI, we run Gemini…”
Management reiterated its conviction that Nvidia will be the superior choice for the ‘$3-$4 trillion in annual AI infrastructure build’ it estimates by the end of the decade.
(5) In Defense of Circular Deals
Critics see Nvidia’s investments in OpenAI, Anthropic, Mistral, and others as “circular” - funding customers who then buy Nvidia chips.
Management calls it CUDA distribution with equity upside in “once in a generation” assets. In the most recent example, Nvidia invested $10B in Anthropic in exchange for equity and a compute commitment up to 1GW with Grace Blackwell and Vera Rubin systems.
Jensen noted that “All of the investments that we’ve done… are associated with expanding the reach of CUDA, expanding the ecosystem” - and, at the same time, he fully expects those investments to “translate to extraordinary returns.”
The best line of the earnings call came in the Q&A, when an analyst asked Jensen about bottlenecks and constraints. His reply included this banger:
“How could anything be easy? What NVIDIA is doing has never been done before.”
It’s the line that perfectly captures the AI era - equal parts inevitability, exhaustion, and trillion-dollar ambition.