Tone, Trust, and the Trouble with “AI-First”
When bold bets meet bad messaging - what Klarna and Duolingo taught us about building (and breaking) trust in public
There’s been a lot of schadenfreude lately - over Klarna backtracking on its AI rollout, over Duolingo’s “AI-first” messaging misfire, over companies that ran headfirst into the future... and stumbled.
It’s easy to gloat - harder to learn. But let’s move past the punchlines and come at this with curiosity and humility.
Because here’s the truth: Klarna and Duolingo didn’t have the wrong idea. The problem wasn’t the tech - it was the tone.
They did the rest of us a favor - by testing the boundary conditions of AI in the wild, at scale, and in public. They’ve given the entire ecosystem a live demo of what works, what breaks, and where the line between ambition and alienation really is.
The Klarna saga is a case study in how not to message transformation.
Their recent arc - lay off 700 employees, claim AI replaced them, face backlash, rehire humans - wasn’t just a tech fumble. It was a trust fumble.
Klarna partnered early with OpenAI, proudly announcing that its chatbot was handling 75% of support interactions - “doing the work of 700 people.” Cost savings were front and center. So was a vision to reduce headcount by 60% through AI-enabled attrition. But as service quality declined, customer frustration grew, and the brand’s humanity eroded, Klarna hit pause.
CEO Sebastian Siemiatkowski admitted they “went too far” - that cost-cutting drove the strategy, and that, shocker, it hurt quality. The company is now rehiring human agents and piloting an “Uber-style” remote model to bring the human touch back in.
He emphasized the importance of being “clear to your customer that there will always be a human if you want.” (Does this remind anyone else of Dumbledore? No? Just me? ok)
Being vocal about AI transformation is brave - but it’s also risky. The louder you are, the more scrutiny you invite. And if you're not careful, ambition starts to sound like arrogance. That’s what happened here: a fundamentally reasonable shift executed with clinical detachment and PR overconfidence.
Meanwhile, at Duolingo… CEO Luis von Ahn declared the company was going “AI-first.” They cut contractors and paused hiring for any job AI might do.
The result? Subscriptions canceled. Social backlash. Duolingo wiped its Instagram like it was a crime scene.
The CEO eventually clarified: we’re augmenting, not replacing.
These are not failures. They are high-stakes experiments. And they hold important lessons for every leader navigating transformation in real time.
So instead of dunking on the companies bold enough to go first, let’s ask: what does good AI leadership look like?
Here are a few things that stand out to me from reflecting on these events:
⚙️ AI ≠ Headcount Reduction. AI = Leverage.
If the headline is “X jobs eliminated,” you’ve already lost the room. AI is not a tool for cutting people - it’s a tool for amplifying your best ones.
Audit for leverage, not replacement: Ask which parts of your org create outsize value when given more time or better tools - then use AI to scale them.
Map the "Force Multiplier Effect": Identify your top 10% performers in each function. How can AI free them up or scale their output?
Never announce AI by headcount savings: Lead with how it empowers your team and improves experience - not how many roles it eliminates.
🧠 People + Process + Product > Just Product
AI adoption isn’t a software install. It’s an organizational rewire.
People: Train, upskill, and communicate the “why” of AI - not just the “what.” Treat your people as participants, not obstacles.
Process: Layer AI into existing workflows before rearchitecting org charts. Redesign roles after you observe how workflows evolve.
Product: Don’t chase demos. Pilot in high-friction, low-risk areas and build muscle memory first.
🤝 Build Trust Before You Break Things
Trust is your AI transformation rate-limiter.
Over-communicate the journey: Tell your people where you’re testing AI, what you’re learning, and how it affects them.
Create opt-in pilots: Let employees volunteer to experiment with AI-enhanced workflows - then turn them into evangelists.
Use AI to augment, not automate, customer interaction: Make sure humans are accessible in high-emotion or high-stakes moments.
🧭 Run Two Operating Systems in Parallel
Think of your org in “Dual OS” mode.
OS1: The current business - stable, proven, slow to change
OS2: The experimental layer - agile, AI-integrated, focused on learning
Isolate your AI experiments: Assign specific teams to prototype AI-infused workflows without disrupting core operations.
Budget for redundancy: Avoid overcommitting to unproven tech. Build safety nets that allow you to course-correct without losing face or trust.
Institutionalize learning: Create weekly or monthly rituals to document and share AI learnings across teams.
🔍 If It Touches People, Lead With Empathy
AI decisions are cultural decisions.
Design for dignity: Even if a role is evolving or being phased out, message it with respect and long-term support.
Preserve human touch in high-trust functions: Legal, HR, customer experience - don’t cut corners where nuance and care are irreplaceable.
Narrate the transition well: Be honest about what’s changing, why, and how you’ll support people through it. Messaging can make or break adoption.
AI is a mirror. It reflects your org’s strengths - and weaknesses. Before you scale AI, ask:
Is your org culture ready to absorb fast change?
Are your people empowered to experiment and learn?
Do you know what not to automate?
In the end, AI is just leverage. It multiplies what’s already there - trust or fear, clarity or chaos, empathy or ego.
Klarna and Duolingo took the first swing. Not perfect, but brave. The rest of us now get to move smarter. More human. Still bold.
Headcount reduction is coming, whether people like it or not. But that transition demands real empathy and deliberate planning before execution - much like the shift we humans made from the agrarian age to the industrial one.