Individually Rational, Collectively Fragile
On AI’s Capex Arms Race and the Rationality of Over-Investment
The conversation around AI is starting to feel predictable. Every quarter, the tech giants step up to the mic, clear their throats, and drop another capex number that sounds like GDP. Analysts furrow their brows. Commentators mutter “overheated.” Twitter declares a bubble. From a distance, the whole thing looks unsustainable - a bonfire of money built on hope, hype, and a handful of GPU purchase agreements.
But something interesting happens when you zoom in. The behavior of any individual company doesn’t feel irrational at all. If you are Microsoft, Amazon, Google, Meta - or OpenAI, Anthropic, xAI - the logic is straightforward:
You fear missing the platform shift.
You see early signs of real demand.
You know compute scarcity can become existential.
You don’t want to be the only one who bet small.
Your competitors are moving aggressively, so restraint is punished.
Given those incentives, yes, front-loading capex is rational.
If you believe that we are seeing a paradigm shift, then under-investing isn’t prudence; it’s strategic negligence. Being early and wrong costs money. Being late and wrong costs the company. This is not a “wait and see” environment. This is a “build or die” environment.
Venture capital operates with a similar internal logic, although it expresses itself differently. From the outside, the behavior looks manic - prices rising, rounds stacking, capital chasing momentum. But inside the system, the incentives are governed by asymmetry. If even a tiny chance exists that one of these AI companies becomes the generational winner - the next canonical platform - you cannot afford to sit it out. Missing the outlier is far more damaging than overpaying for the nine that don’t work. The cost of a type I error - backing a company that fails - is a write-off. The cost of a type II error - passing on the winner - is missing a generational platform shift. From the outside, this looks irrational. Within the power-law framework, it is simply the only mathematically defensible strategy.
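The asymmetry can be made concrete with a toy calculation. Every number below is an illustrative assumption - ten deals, nine zeros, one 100x winner - not market data:

```python
# Toy sketch of the power-law asymmetry: all figures are hypothetical.

# A fund sees 10 frontier-AI deals, each costing 1 unit.
# Nine go to zero; one returns 100x (the "generational winner").
n_deals = 10
cost_per_deal = 1.0
winner_multiple = 100.0

# Strategy A: invest in every deal, overpaying and eating nine write-offs.
invest_all_return = winner_multiple - n_deals * cost_per_deal

# Strategy B: sit out because the valuations feel frothy.
sit_out_return = 0.0

print(invest_all_return)  # 90.0 units of profit despite a 90% failure rate
print(sit_out_return)     # 0.0 - the type II error forfeits the entire upside
```

Under these made-up numbers, a 90% failure rate still leaves investing far ahead of abstaining, which is the whole power-law argument in miniature.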
The tension is that when everyone behaves rationally according to their own incentives, the collective outcome can start to look irrational.
Because it is also true that:
No one actually knows future AI margins.
Demand for frontier-scale inference is not yet proven.
Training cycles may plateau or dramatically compress.
Infrastructure may get commoditized faster than expected.
A large portion of today’s spend is option value, not revenue-backed.
This creates a system where even if each participant’s behavior is individually rational, the aggregate may be capital-destructive.
Competitive markets often produce more capacity than they need because the penalty for under-building is catastrophic while the penalty for over-building is survivable. Bubbles form not because actors are delusional, but because the upside is convex, the downside is shared, missing out is catastrophic, and the players are rich enough to pay tuition.
This is how you get bubbles you can justify in a spreadsheet.
The telecom industry did it in the 1990s. They built enough fiber to wrap the Earth in glowing glass spaghetti. They were right about the future and still wrong about the timing. Companies went bankrupt while the infrastructure they overbuilt became the foundation of the modern internet.
AI is following that playbook. The technology is real. The opportunity is enormous. The shift is inevitable. And the spending is, by any normal measure, absurd.
The truth is uncomfortably nuanced: we are watching a rational arms race inflate what may eventually be judged, in hindsight, as a spectacular overshoot. Not because these companies are dumb, but because they’re smart in exactly the same way at exactly the same time.
No hyperscaler wants to be the one that invested conservatively only to find themselves starved of compute while rivals accelerate. No VC wants to be the firm that passed on the breakthrough company because the valuation felt frothy. Every rational actor is trapped in a game that produces an irrational outcome.
What you get, as these incentives compound, is something that resembles a bubble, but it’s not one born of delusion. It’s a rational bubble. A coordinated overreaction produced by uncoordinated actors responding to the same risks, the same uncertainty, the same possibility of a once-in-a-generation shift.
It’s not pure FOMO; the underlying shift is real.
It’s not pure rationality; the economics are highly speculative.
It’s not pure mania; there is genuine technological inevitability.
It’s not pure efficiency; over-capacity will emerge.
This is a game with positive expected value for the winners and negative ROI for the collective.
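That divergence can be sketched with back-of-the-envelope arithmetic. The firm count, capex, market value, and winner’s share below are all invented assumptions chosen to illustrate the shape of the outcome:

```python
# Toy model: positive returns for the winner, negative ROI for the field.
# Every number is an illustrative assumption, not a forecast.

n_firms = 5
capex_each = 100.0       # every firm builds at full scale
market_value = 300.0     # realized value of the new platform
winner_share = 0.8       # winner-take-most outcome

winner_return = market_value * winner_share - capex_each     # winner's profit
collective_return = market_value - n_firms * capex_each      # system-wide P&L
collective_roi = collective_return / (n_firms * capex_each)

print(winner_return)   # 140.0: the winner's bet pays off handsomely
print(collective_roi)  # -0.4: the system destroys 40% of invested capital
```

The winner’s return is strongly positive while the collective burns capital, which is exactly why no individual player can be argued out of the race by pointing at the aggregate.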
So we are left with a sobering but coherent picture. The AI buildout is not a frenzy of irrational actors. It’s a deeply rational arms race undertaken by companies that understand the cost of hesitation. That does not absolve the system of waste, nor does it guarantee stable returns. It simply recognizes that in moments of technological discontinuity, optimal strategies at the firm level and sensible outcomes at the macro level often diverge. Today’s capex surge reflects that divergence.
Bubbles aren’t proof of stupidity; they’re proof of uncertainty. In every technological transition, the market over-funds the system to guarantee the new platform emerges at all. We don’t get railroads, electrification, the internet, smartphones, or AI without periods of excess.
AI will change everything. But the road there will be paved with too much capex, too many models, too many chips, and too many venture rounds priced for perfection. There will be winners - massive, generational winners. But the total investment will dwarf the realized returns. That’s what happens when the market aligns around a single narrative: the future is coming, and if you’re not building for it, someone else is building it for you.



The rational bubble concept is brilliant for understanding this moment. Each firm's decision makes perfect sense in isolation, but the aggregate creates fragility. The telecom comparison is especially apt: we got the infrastructure we needed but destroyed enormous value getting there. What strikes me is how the power-law dynamics in VC actually reinforce this pattern rather than correct it. Missing the outlier is such an asymmetric risk that everyone has to play, even knowing collective overinvestment is likely.