Demis & Dario go to Davos
Two of the people closest to AGI wish it were coming more slowly. Neither thinks slowing down is possible.
In most industries, an annual Davos check-in is ceremonial. In AI, it’s a progress report from a different epoch.
Demis Hassabis and Dario Amodei shared the stage again yesterday, one year after their last public conversation. In that twelve-month gap, the frontier moved, the pecking order reshuffled, and the business of building intelligence became far more real.
What came through most clearly was not hype, but tension: two people who believe something enormous is underway, who would prefer it to unfold more slowly, and who no longer think slowing it down is actually on the table.
My takeaways from the conversation:
Frontier AI leadership is not durable
When these two last sat down, OpenAI dominated mindshare. DeepMind and Anthropic were seen as challengers trying to catch up.
Over the past year, the leaderboard completely reset. Asked about DeepMind’s resurgence, Hassabis replied with almost unnerving calm: he was always confident Google would return to the top because it has the “deepest and broadest research bench.”
In frontier AI, there are no incumbents. There are only current leaders. Advantage decays. Leadership rotates. The compounding asset is not a model, it's an organization that can repeatedly re-solve a moving problem.

Independent labs can self-fund, but only at the frontier
Amodei was asked about a growing concern: whether independent model labs can survive long enough to break even - a risk increasingly discussed (sometimes loudly) in relation to OpenAI.
Dario noted that he has observed an exponential relationship not only between how much compute goes into a model and how cognitively capable it is, but also between how cognitively capable it is and how much revenue it can generate. Anthropic grew revenue from $0 to $100m in 2023, $100m to $1b in 2024, and $1b to $10b in 2025. He presented this as an argument that frontier intelligence itself is becoming directly monetizable at extraordinary scale. Not eventually - already. If you can produce the best models in the domains you focus on, he believes the economics work.

What's implied but unstated is the real constraint: this model only works at the frontier. There is no comfortable middle. Either your models are good enough to fund unprecedented capex, or you become structurally dependent on a larger platform.
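For concreteness, a minimal bit of arithmetic on the revenue figures cited above - purely illustrative, not a forecast, and omitting the $0 starting point since a growth multiple from zero is undefined:

```python
# Toy sketch: the year-over-year multiples implied by the revenue figures
# Amodei cited ($100m in 2023, $1b in 2024, $10b in 2025). Approximate
# figures as stated on stage; nothing here is a projection.
revenue_usd = {2023: 100e6, 2024: 1e9, 2025: 10e9}

years = sorted(revenue_usd)
for prev, curr in zip(years, years[1:]):
    multiple = revenue_usd[curr] / revenue_usd[prev]
    print(f"{prev} -> {curr}: ${revenue_usd[prev]/1e6:,.0f}M -> ${revenue_usd[curr]/1e6:,.0f}M (~{multiple:.0f}x)")
```

Roughly a 10x multiple each year - which is the exponential Amodei is pointing at, and why he argues the frontier can keep funding itself.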
Amodei then delivered a clean, unmistakable subtweet, noting that what Anthropic and DeepMind share is that they're “led by researchers who focus on the models… with hard scientific problems as a north star.” Wonder which other major labs cannot say the same…
AGI timelines: disagreement on speed, not destination
Amodei reaffirmed his aggressive view: human-level (or better) cognitive performance across domains could arrive in ~1–2 years, driven by near-complete automation of coding and AI accelerating AI research itself. From his perspective, the bottlenecks are no longer intelligence, but chips, factories, and training time.
Demis remains more cautious, sticking to his original estimate of ~50% chance of AGI by end of the decade. He argued current progress overstates readiness for scientific creativity - forming new hypotheses and theories - and for domains where validation is slow, physical, or expensive. He emphasized missing ingredients in world-modeling and theory formation.
They’re converging in direction, not tempo. Amodei thinks the exponential bites imminently. Hassabis thinks reality introduces more friction than the exponent would like.
Both agree on the real accelerant: if AI can autonomously design, train, evaluate, and improve successor systems, everything speeds up.

Jobs: near-term calm, medium-term shock, long-term existential questions
Both agreed there’s no major macro labor disruption yet, though early signs are emerging in software engineering and entry-level white-collar roles.
Amodei thinks ~50% of entry-level white-collar jobs are at risk within 1–5 years and worries that exponential progress will outpace society’s ability to adapt.
Hassabis is more optimistic near-term, framing AI as a force multiplier for individuals. His advice to students was practical: become deeply fluent with these tools - they may be more valuable than a traditional internship.
Longer term, the concern shifts beyond redistribution to meaning.
Hassabis noted that work provides identity and purpose, not just income - and suggested that redistributing abundance may be easier than reinventing purpose in a post-work world.

The doomer debate
Dario explicitly rejects fatalism. “I’m not a doomer.” But he’s also not offering comfort. He named concrete risks - bioterrorism, authoritarian misuse, loss of control over autonomous systems, economic destabilization, and unknown unknowns.
Interestingly, both he and Hassabis referenced the same movie, Contact, as their favorite - the most on-brand overlap imaginable. In it, humanity receives instructions from an advanced civilization to build a mysterious machine, forcing a reckoning with transformative technology, uncertainty, and our place in the universe. See any parallels?
Amodei anchored his thinking in one line from the movie: “How did you manage to get through this technological adolescence without destroying yourselves?” That word choice is deliberate. Adolescence is not apocalypse. It’s volatility: power without wisdom, capability without maturity, risk without governance.
Hassabis put the same idea in more engineering terms. He said he believes the technical safety problem is tractable. Then he added the clause both of them kept returning to: “If you have the time.”

Everyone wants to slow down. No one thinks slowing is rational.
Perhaps the most revealing shared sentiment was regret. Amodei said plainly that he would prefer Hassabis’ slower timeline. Hassabis agreed a slower pace would be “better for the world.” And yet, neither believes slowing is feasible.
Why? Because this is not just a company race - it’s a nation-state race. If geopolitical adversaries are moving at similar speed, unilateral restraint looks indistinguishable from losing. This is a textbook prisoner’s dilemma - except the stakes are civilizational.
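To see why unilateral restraint looks indistinguishable from losing, here is a minimal prisoner’s dilemma sketch with invented payoff numbers (illustrative only - the numbers are not from the conversation): whatever the other side does, racing pays more, yet a mutual race leaves both sides worse off than mutual restraint.

```python
# Illustrative payoff matrix for the "race vs. restrain" framing.
# Payoff numbers are invented purely to show the structure of the dilemma.
#                               (row player, column player)
payoffs = {
    ("restrain", "restrain"): (3, 3),  # both slow down: best collective outcome
    ("restrain", "race"):     (0, 4),  # unilateral restraint looks like losing
    ("race",     "restrain"): (4, 0),
    ("race",     "race"):     (1, 1),  # everyone moves faster than is socially optimal
}

# Racing is the dominant strategy: whatever the other side does, it pays more...
for other in ("restrain", "race"):
    assert payoffs[("race", other)][0] > payoffs[("restrain", other)][0]
# ...yet mutual racing is worse for both than mutual restraint.
assert payoffs[("race", "race")][0] < payoffs[("restrain", "restrain")][0]
print("Racing dominates individually; (race, race) is collectively worse than (restrain, restrain).")
```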
Amodei was especially forceful in opposing the sale of advanced chips to China, likening it to selling nuclear weapons to North Korea. The counterargument that chip sales merely lock in U.S. tech standards, he said, badly misreads the scale of what’s at stake.
The people closest to AGI believe we are moving faster than is socially optimal, slower than is technically possible, and without institutions that understand the magnitude of what’s changing. Happy Wednesday, I guess.