🧾 Weekly Wrap Sheet (06/20/2025): Playbooks, Pilots, Partners & Prompts
Sales gets technical, defense dollars beckon, allies turn enemies, Nvidia tightens its grip, and Karpathy coins AI’s mainframe moment.
🎬 TL;DR
Enterprise sales is being rebuilt in real time - old playbooks are out, technical sellers and high-horsepower generalists are in.
AI defense spending is entering its land-grab phase - OpenAI’s Pentagon deal, with its $200M ceiling, is just the entry fee for multi-decade contracts.
Microsoft and OpenAI have moved from partners to rivals - IP fights, pricing wars, cloud jailbreaks, and antitrust threats test the limits of a $13B friendship.
Nvidia isn’t just selling GPUs anymore - with DGX Cloud Lepton, it’s turning partners into inventory and keeping the customer relationship for itself.
Andrej Karpathy says we’re in AI’s mainframe era - the personal computing revolution will come, but for now, we’re all just prompt engineers at terminals.
🛠 Sales Gets Technical
If you’re selling AI software, the old enterprise sales formula - prove PMF, hit a milestone, hire a big-name VP Sales - isn’t a blueprint anymore. It’s a museum piece. The market is too fluid for old playbooks to work: buyers are overwhelmed, competitors shift monthly, and the product changes faster than the sales deck.
What’s replacing it? Two patterns:
High-horsepower generalists as leaders - adaptable, curious, able to figure things out as they go.
Technical sellers as teams - able to explain, guide, and debug, not just sell.
What we call “sales” is starting to look a lot more like product and engineering. The new superpower? Fluency in both code and context.
🏛 Pentagon Pilots - The New AI Arms Race
OpenAI recently secured a Pentagon pilot contract with a ceiling of $200 million - not a guaranteed payout, but a performance-driven entry ticket. The initial spend will be far smaller, with funding unlocked through task orders and scope expansion as the pilot proves its value. The real prize isn’t the pilot itself - it’s embedding as core infrastructure in the U.S. defense AI stack, where pilots turn into programs, and programs into contracts that stretch across decades.
Anthropic, Meta, Cohere, and others are racing for this same pot. And the pot is big: $186 billion in DoD external contracts this year alone. AI spend is still a tiny sliver - but that will change fast. Whoever embeds early will shape the architecture of national security for decades.
Historically, defense sales meant soul-crushing cycles of meetings, pilots, and delays. AI has shattered that: the threat landscape is moving too quickly for bureaucracy to keep up. Delay isn’t inefficiency anymore - it’s danger.
⚡ Microsoft and OpenAI Split at the Seams
The $13B partnership that launched a thousand copilots is fraying.
OpenAI wants out of Azure lock-in.
Microsoft wants more equity and perpetual tech rights before signing off on OpenAI’s PBC conversion.
OpenAI’s Windsurf acquisition triggered a fight over IP - letting Copilot train on Windsurf data would be like handing your rival your playbook.
ChatGPT Enterprise discounts are undercutting Copilot pricing, turning internal tension into external competition.
And now OpenAI is reportedly weighing antitrust complaints against Microsoft - the kind of Hail Mary you throw when your partner feels more like a platform overlord.
They’re still smiling for the cameras - but the gloves are off.
🏪 Nvidia Tightens Its Grip
Forget the next GPU spec - Nvidia’s most strategic move is happening in the cloud. DGX Cloud Lepton is a compute marketplace where Nvidia sells GPU capacity directly, or through partners who plug their fleets into the platform.
The logic is simple:
Selling chips is good business.
Renting chips is better.
Owning the customer relationship? That’s the best business.
GPU cloud providers are stuck: join Nvidia’s bazaar and risk becoming a nameless supplier, or stay out and lose. And Nvidia? It’s playing the long game: let partners help scale now, then own the stack when the moment’s right.
🖥 Karpathy’s Mainframe Moment
At YC’s AI Startup School, Andrej Karpathy captured where we are: this isn’t the AI equivalent of the PC era - it’s the AI equivalent of the mainframe age.
Today’s LLMs look a lot like computing in the 1960s:
Massive, centralized compute. The power lives in distant data centers - nobody’s running GPT-4 locally.
Thin clients. We access these models through browsers and APIs - the modern version of terminal time-sharing.
No GUI. No desktop, no mouse - just prompts typed into a box, hoping the model understands what we mean.
The opportunity: Someone will build the AI equivalent of the personal computer - the desktop, the mouse, the spreadsheet of this era. The future belongs to whoever turns today’s command-line AI into something intuitive, productive, and indispensable. For now? We’re all prompt engineers, waiting for that leap.