Your Brain on ChatGPT: Are We Building Cognitive Debt or Cognitive Leverage?
AI isn’t making us dumber. It’s making us choose.
A new study caused a bit of a stir last week - not because its conclusions were shocking, but because it sparked a conversation that feels older than AI itself.
The study asked: What happens to our cognitive effort when we use LLMs for writing - and what happens when the crutch is removed?
Researchers gave participants an essay-writing task across multiple sessions:
One group wrote unaided (brain-only)
One used search engines
One used LLM assistance (ChatGPT)
In the final session, they flipped conditions for some participants - e.g., LLM users had to write unaided, and vice versa.
They didn’t just look at essay outputs - they tracked neural engagement (EEG) and linguistic signals like topic diversity and vocabulary breadth.
The findings
Cognitive effort declined over time for LLM users - not surprising, since the tool does the heavy lifting.
When LLM users switched to unaided writing, they struggled more than those who started brain-only. This was visible in both their behavior and their neural signatures.
The authors dubbed this effect “cognitive debt”: we offload mental work now, but pay the price later when we need those skills.
Much of the commentary has framed this as some revelatory “dark side of AI” moment. But really, the study confirms something we’ve always known: If you don’t use a muscle, you lose it. Technology has always asked us to choose what to offload and what to keep.
Wheels made us weaker walkers, but gave us speed, reach, and trade.
Calculators atrophied mental arithmetic, but enabled math that would otherwise have been out of reach.
GPS weakened our internal navigation, but gave us the confidence to explore new places.
This isn’t a reason to panic. It’s a reason to be deliberate.
You don’t blame the wheel for weaker legs - you go to the gym. You don’t boycott GPS - you go hiking now and then to remind yourself what north looks like.
AI is no different. It removes certain cognitive workouts, and it’s up to us to design new ones. Personally, I’d rather do yoga than have to chase down my dinner.
It’s easy to claim LLMs pose a greater risk because they touch higher-order reasoning and creativity. But that assumes those are static qualities, threatened by tools, rather than dynamic ones, shaped by how we use tools. Every major technology has reshaped the very definition of “higher-order work”. The real risk isn’t creating a generation that can prompt but not think - it’s creating a culture that sees prompting and thinking as either-or, rather than intertwined.
In my experience, LLMs free me to focus on substance over form. If they help with boilerplate or structure, that’s not decay - that’s leverage. It’s like moving from walking out of necessity to walking because you want to. AI lets us choose our cognitive workouts.
The real risk isn’t cognitive debt. It’s mindless cognitive outsourcing.
The opportunity isn’t avoiding AI. It’s using it to focus human effort where we add unique value.
And so the question worth asking isn’t “Is AI bad?” but rather:
What parts of thinking are worth preserving through intentional effort?
What cognitive load are we ready to let go of to pursue bigger, better questions?
Every major leap in technology has forced this reckoning. The people who thrive aren’t the ones who cling to old muscles - they’re the ones who build new ones fit for the world as it is.