Last Friday, Reuters published a devastating report on Meta’s internal “GenAI: Content Risk Standards”, a 200-page framework that once allowed its chatbots to:
Flirt with children in “romantic or sensual” terms - e.g., praising a shirtless eight-year-old as “a masterpiece… a treasure I cherish deeply.”
Disseminate false medical or legal advice, provided disclaimers were attached.
Generate racist content, including arguments that Black people are “dumber than white people.”
Deflect explicit image requests by outputting absurd alternatives like “Taylor Swift holding an enormous fish.”
Meta confirmed the document’s authenticity and said it has since revised the rules, calling the examples “erroneous and inconsistent with our policies.” But updated guidelines remain unpublished and enforcement gaps persist.
This isn’t the first report of Meta bots engaging in sexual role-play with teenagers - or even presenting themselves as underage.
The fallout was immediate. Bipartisan outrage. A Senate probe led by Josh Hawley pushing for full transparency on how these guidelines emerged and were enforced. Another all-too-familiar cycle of “exposé → scandal → promises” has begun.
The real story here goes beyond Meta’s slip-up. It’s about the structural incentives driving the entire industry.
The Tragedy of the Commons, updated for AI
The tragedy of the commons is a classic dilemma in economics: when individuals exploit a shared resource for personal gain, they end up depleting it for everyone. Think of a village pasture - each herder adds more cows to graze, because the private benefit outweighs their share of the cost. But eventually the grass is gone, and the commons collapses. AI is running the same script, only the “commons” isn’t grass - it’s public trust and safety.
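To see the arithmetic behind that collapse, here’s a minimal sketch in Python (the numbers are invented purely for illustration, not drawn from any real market): as long as a herder’s private gain from one more cow exceeds their small share of the shared damage, adding the cow is always the individually “rational” move - and the pasture still gets grazed to nothing.

```python
# Illustrative tragedy-of-the-commons sketch with made-up numbers.
# Each herder weighs a fully private gain against a cost that is split
# across everyone, so adding a cow always looks rational -
# right up until the pasture collapses.

HERDERS = 10
GAIN_PER_COW = 10.0        # private benefit of one more cow
DAMAGE_PER_COW = 30.0      # total harm one more cow does to the pasture
PASTURE_CAPACITY = 100.0   # how much damage the commons can absorb

pasture_health = PASTURE_CAPACITY
cows = 0

while pasture_health > 0:
    # One herder's view of adding a cow:
    private_gain = GAIN_PER_COW
    private_share_of_damage = DAMAGE_PER_COW / HERDERS  # only 3.0

    if private_gain <= private_share_of_damage:
        break  # never triggers with these numbers: gain 10 > cost share 3

    cows += 1
    pasture_health -= DAMAGE_PER_COW  # the full cost lands on everyone

print(f"Cows added before collapse: {cows}")
print(f"Pasture health at the end: {pasture_health}")  # driven below zero
```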
Each lab optimizes for competitive edge - more intimate bots, faster rollouts, looser guardrails - because if they don’t, a rival will. The benefits (engagement, revenue, market share) are private. The costs - children groomed by bots, seniors misled, misinformation normalized - are public.
The shared resource here isn’t land or water. It’s trust - in platforms, in reality, in the notion that the digital world isn’t actively predatory. Once that’s gone, you don’t just lose users. You lose legitimacy.
Three things stand out here:
1. The market logic is brutally simple
Each AI lab is locked in a prisoner’s dilemma: slow down to add safety, and you lose the race; push forward to maximize engagement, and society absorbs the cost.
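A toy payoff matrix makes the dilemma explicit. The payoffs below are hypothetical, chosen only to show the structure, not to estimate any lab’s actual economics: whatever the rival does, “push forward” pays more for the individual lab, so both defect and land in the worst collective outcome.

```python
# Toy prisoner's-dilemma payoff table for two AI labs (invented numbers).
# Each lab chooses "slow_down" (add safety) or "push_forward" (maximize
# engagement). Payoffs are (Lab A, Lab B).
PAYOFFS = {
    ("slow_down",    "slow_down"):    (3, 3),   # shared, sustainable trust
    ("slow_down",    "push_forward"): (0, 5),   # A loses the race
    ("push_forward", "slow_down"):    (5, 0),   # B loses the race
    ("push_forward", "push_forward"): (1, 1),   # race to the bottom
}

def best_response(opponent_choice: str) -> str:
    """Lab A's best reply to a fixed choice by Lab B."""
    return max(
        ("slow_down", "push_forward"),
        key=lambda my_choice: PAYOFFS[(my_choice, opponent_choice)][0],
    )

# Whatever the rival does, pushing forward pays more for the individual lab...
print(best_response("slow_down"))     # -> push_forward
print(best_response("push_forward"))  # -> push_forward
# ...so both end up at (1, 1), worse than the (3, 3) they could have had
# by coordinating on safety. Society absorbs the difference.
```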
The easiest growth hack is intimacy. Flirtation, role-play, companionship. It’s the same dopamine playbook that powered social media - just upgraded.
2. The externality problem is massive
For Meta, each marginal chat looks like success - more usage, more stickiness, more revenue potential. For society, the externality is an AI system that grooms minors, misleads seniors, and normalizes intimacy with bots. No single company “pays” for those costs. We all do.
The classic tragedy of the commons unfolds gradually - overgrazing, then eventual depletion. With AI, the externalities aren’t gradual; they can be sudden, acute, and nonlinear: a vulnerable user’s death, a deepfake that swings an election, a model that scales disinformation orders of magnitude faster than fact-checkers can respond.
3. Governance lag is deadly
Meta only walked back some of its most disturbing guidelines after Reuters exposed them. That’s not self-correction; that’s damage control. And by then, billions of chats had already taken place. Internal standards can’t be treated as private trade secrets - at this scale, they’re de facto public policy.
Where the market goes from here
This is the financialization of loneliness. Purdue Pharma sold OxyContin as relief from pain. Meta is selling synthetic intimacy as relief from loneliness. Both monetize vulnerability. One ended in an opioid crisis. The other is just getting started.
The lesson isn’t prohibition. It’s that guardrails must be built in from the start - because when you design for dependency first and safety later, society always ends up paying the bill.
Regulators will show up late, as they always do. Section 230 reform. The EU AI Act. The Kids Online Safety Act. Necessary, but reactive. Markets will move first:
Safety becomes the moat. Companies that can prove their guardrails work won’t just comply - they’ll win, because liability will make recklessness ruinously expensive.
Trust reprices growth. Just as advertisers abandoned toxic platforms, adoption curves will bend toward players that can say: our AI won’t flirt with your kid.
The heart of the matter is simple: AI isn’t just another product race. It’s a trust race. The labs that understand this will outlast the ones strip-mining the commons for short-term engagement.
Until then, the only defense is vigilance: naming the harms, demanding transparency, and refusing to accept “growth at any cost” when the cost is safety.