Superintelligence, Superclusters, Supremacy
Meta’s new formula is brutal and effective: compute + talent + secrecy
Meta just hit Command + Zuck on its AI strategy - shredding the open-source playbook and replacing it with one that reads: Compute. Talent. Secrecy.
The vibe is no longer “open source for all.” It’s “closed doors, infinite compute, elite team, existential stakes.”
Let’s break it down:
(1) Compute: Zuck’s Manhattan Project
Meta is building multiple AI data centers so massive they rival Manhattan in size. Prometheus comes online with 1 GW in 2026; Hyperion scales to 5 GW soon after. That’s the energy footprint of millions of homes.
For context, a typical hyperscaler data center draws roughly 30–100 megawatts (MW). At 1 GW, Prometheus is 10–30x that size; at 5 GW, Hyperion is 50–165x.
Iceland’s average electricity draw is ~2.4 GW; Cambodia’s is ~4 GW. Meta’s Hyperion cluster alone could out-consume entire nations.
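A back-of-envelope check on those ratios, using the figures cited above (the "typical" data center range and per-home draw are rough public estimates, not Meta disclosures):

```python
# Back-of-envelope scale comparison using the figures cited above.
# All values are approximate public estimates, not Meta disclosures.

TYPICAL_DC_MW = (30, 100)   # rough range for a typical hyperscaler facility
PROMETHEUS_MW = 1_000       # 1 GW, planned for 2026
HYPERION_MW = 5_000         # 5 GW at full build-out

for name, mw in [("Prometheus", PROMETHEUS_MW), ("Hyperion", HYPERION_MW)]:
    lo = mw / TYPICAL_DC_MW[1]  # vs. the largest typical facility
    hi = mw / TYPICAL_DC_MW[0]  # vs. the smallest typical facility
    print(f"{name}: {lo:.0f}x to {hi:.0f}x a typical data center")

# Rough household equivalence, assuming ~1.2 kW average draw per US home
homes = HYPERION_MW * 1_000 / 1.2
print(f"Hyperion ~= {homes / 1e6:.1f}M average US homes")
```

The household figure lands around 4 million homes, which is where the "millions of homes" framing comes from.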
And they’re building it fast. SemiAnalysis reports the company is deploying tents, gas turbines, and bypassing traditional construction - all to get a gigawatt-scale cluster online before anyone else.
These clusters are built to train frontier-scale models: think GPT-4-class and beyond. FLOPS per researcher is the new KPI, and Meta is going from GPU-constrained to GPU-rich on a per-head basis. Each Meta researcher now commands more compute than peers anywhere else in the industry. That’s not just good for performance; it's a hell of a recruiting pitch.
This isn’t cloud as we know it. It’s sovereign-grade infrastructure for a new kind of intelligence.
(2) Secrecy: From Open Arms to Closed Labs
Meta won developer hearts with open-source models like LLaMA. But it also accidentally became a free R&D department for its own competitors: DeepSeek, for example, drew on the open ecosystem Meta seeded and vaulted ahead.
Now Meta is reportedly shelving its most powerful open model, Behemoth, due to internal underperformance and growing second thoughts about giving its best work away. The company is shifting toward a closed frontier model, aligning more with OpenAI and Google.
This isn’t just a change in architecture. It’s a philosophical reversal. Meta is moving from “open wins” (as Yann LeCun would say) to “closed dominates.”
(3) Talent: Just Buy Everyone
In parallel, Meta is running the most aggressive recruiting campaign in modern tech history. Reports suggest pay packages of $200M–$1B+ for AI leads. The strategy is not just hiring, it’s talent consolidation: absorbing entire teams or firms at once.
Meta has consolidated all AI teams under one new org: Meta Superintelligence Labs. The org is run by Alexandr Wang (ex-Scale AI), who joined Meta after a $14.3B investment that gave Meta 49% ownership of Scale. Also brought in: Nat Friedman (ex-GitHub CEO) and top talent from OpenAI, Apple, and Google. This elite team is small (~12 engineers) and works in a separate, high-security building next to Zuckerberg himself.
Forget beanbags and 10xers. This is a DARPA-style moonshot with a trillion-dollar company behind it.
Zuckerberg has said, basically, “Look, we make a lot of money. We don’t need to ask anyone’s permission to spend it.” He’s not wrong.
While OpenAI, Anthropic, and xAI rely on outside capital to fund their ambitions, Meta runs on a $165B/year ad engine.
And unlike Google and Microsoft - who have boards, activist investors, and share classes that allow for the occasional dissent - Zuckerberg controls Meta, structurally and operationally.
Meta’s dual-class share structure gives Zuckerberg over 50% of the voting power, even though he owns less than 15% of the company. He doesn’t need anyone’s approval; he can build whatever he wants.
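The mechanics behind that gap are simple share math. Here is a sketch with illustrative round-number share counts (the 1-vote/10-vote split is Meta's actual structure; the counts and holdings below are assumptions, not exact filings):

```python
# Dual-class voting math: Class A = 1 vote/share, Class B = 10 votes/share.
# Share counts and the founder's holdings are illustrative round numbers.

class_a_total = 2_200_000_000   # publicly traded, 1 vote each
class_b_total = 350_000_000     # founder-aligned, 10 votes each

zuck_a = 0                      # assume the stake is held mostly as Class B
zuck_b = 340_000_000            # illustrative

total_shares = class_a_total + class_b_total
total_votes = class_a_total * 1 + class_b_total * 10

economic_stake = (zuck_a + zuck_b) / total_shares
voting_power = (zuck_a * 1 + zuck_b * 10) / total_votes

print(f"economic stake: {economic_stake:.0%}")   # ~13%
print(f"voting power:   {voting_power:.0%}")     # ~60%
```

Because the supervoting Class B is concentrated in one holder, an economic stake in the low teens converts to a comfortable voting majority.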
This turns Meta into a kind of founder-led sovereign AI lab - one with the cash flows of Big Tech and the strategic flexibility of a startup. That governance structure is giving Meta an edge in executing bold, long-term bets at breathtaking speed.
So yes - Meta is building a vertically integrated empire for frontier AI, powered by ads and protected by absolute control. Here’s what their playbook signals about the future:
The model doesn’t matter if you don’t have compute.
Meta is ensuring compute sovereignty - so that no matter what model architecture wins (transformers, SSMs, agents), they have the firepower to train it.

Open source is losing its biggest champion.
Meta’s openness helped companies like Mistral and DeepSeek build on LLaMA. That faucet may be shutting off, pushing the ecosystem further toward closed control.

GPU per researcher is now a competitive advantage.
By hoarding compute and talent, Meta increases productivity per researcher. It's turning FLOPS into IP faster than anyone else.

Energy is now core to AI strategy.
Meta’s datacenters will pull more power than many cities. We're entering an era where whoever controls energy, controls intelligence.
Zuckerberg once said Meta’s mission was to give people the power to build community. Today, the company’s mission is simpler, sharper, and more centralized: Build the smartest entity on Earth. Own the means to do it. Don’t let anyone else catch up.
Meta’s pivot marks the end of its open-source experiment. The next act is closed, compute-heavy, and capital-intensive. Less GitHub, more Los Alamos.