One of the most under-appreciated risks of artificial intelligence is not what it does to us, but what it allows us to stop doing.
A few months ago, Twitter/X exploded when the MIT paper on how ChatGPT use accumulates “cognitive debt” dropped. It feels almost intuitive that this would happen. Because, let’s face it, if a problem is already solved, why should we waste cognitive energy on it?
In Thinking, Fast and Slow, Daniel Kahneman draws a now-famous distinction between two modes of human thought:
- System 1, which is fast, automatic, emotional, effortless.
- System 2, which is slow, reflective, rational, mentally taxing.
This dual-processing model of cognition mirrors the architecture of our own bodies (functionally, not anatomically). System 1 is like the autonomic nervous system – the part of us that keeps the heart beating, pupils dilating, and muscles flinching before we’re even aware of a threat. System 2, on the other hand, relies on the central nervous system, especially the brain’s prefrontal cortex, which governs logic, planning, self-control, and complex reasoning.
These systems evolved for good reason: thinking is metabolically expensive. So once we find a process that works, we naturally shift it to the automatic mode of habit, intuition, or instinct. This frees up mental bandwidth to deal with new or difficult challenges. It’s a deeply adaptive survival feature. But it also creates a dangerous blind spot, especially with artificial intelligence.
The Path of Least Resistance
AI is not just a tool; it’s a pattern engine, trained to offer efficient answers, fluent predictions, and increasingly humanlike decisions. It is designed to feel intuitive. And that’s exactly why it’s so seductive.
It speaks directly to our System 1 by offering a shortcut around the effortful thinking of System 2. When we no longer need to wrestle with complexity, ambiguity, or contradiction, we start losing the habits of critical thought that make us truly human. We delegate decisions to the machine not necessarily because it’s more accurate, but because it’s easier. The effects of this cognitive surrender are compounded in a post-truth world.
Kahneman warned about this in his discussion of the “illusion of validity” – our tendency to trust information that feels coherent, even if it’s wrong. System 1 doesn’t ask, “What am I missing?” It only reinforces what feels right. And AI, when it’s fluent and fast, feels very right.
So What Do We Do?
It’s easy to imagine a future where old institutions fail and new ones emerge, powered by data and AI – our current institutions are visibly failing. But what if the imagined “new institutions” aren’t institutions at all, just automated processes we mistake for wisdom? DOGE comes to mind.
At the cultural and normative level, I think what we need is a new ethic of cognitive vigilance to balance out the new and emergent institutions taking shape around responsible scaling policies (RSPs), auditing, and the like. The challenge of AI isn’t just keeping it aligned with our values; it’s keeping ourselves awake long enough to notice when we’ve gone numb. We must build not just smarter machines, but stronger habits of thought. Because in a world optimised for speed and efficiency, slow thinking is an act of resistance.
My proposed path to building a “stronger habit of thought” is through reading.
Why Should We Read?
- For Cognitive Value
Reading strengthens how we think. Every paragraph requires inference, memory, and synthesis – connecting what came before with what’s coming next. This slow, deliberate process builds comprehension, pattern recognition, abstract reasoning, and metacognition (thinking about how we think).
- For Linguistic Value
Reading enlarges the boundaries of expression. Every new word or phrase adds a new concept to consciousness. The more your vocabulary expands, the more precisely you can think and feel.
- For Moral Value
Reading cultivates empathy and conscience. When you read, you enter other minds. You experience other ways of feeling, deciding, and justifying.
By living other lives imaginatively, you soften the barriers of ego and recognise the shared vulnerability of being human.
We are here today as members of the Effective Altruism movement because we are able to share the imagined world of Peter Singer’s moral philosophy.
- For Aesthetic Value
Good writing, in prose or poetry, trains our aesthetic sense: rhythm, proportion, metaphor, silence. It attunes us to patterns, resonance, and form – the same faculties you use in art, design, and even moral reasoning. I, for instance, see this beauty in the writings of Steven Pinker and Richard Dawkins, even though their work is non-fiction.
- For Psychological Value
In an always-on world, reading protects the sacred space of the mind – the inner room of the self. It gives you a place to retreat into for reflection without distraction.
- For Civic Value
Reading sustains democracy and dialogue. A reading public is a reasoning public.
Citizens who read are harder to manipulate, more capable of empathy, and better able to imagine the common good. Reading develops the capacities democracy depends on: attention, interpretation, scepticism, and moral imagination. A society that stops reading stops reasoning altogether.
Conclusion
The human brain makes up about 2% of body mass but consumes around 20% of the body's energy – mostly in the form of glucose and oxygen. That’s an enormous metabolic cost, especially for early humans who often faced food scarcity.
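To see just how lopsided that is, here is the back-of-the-envelope arithmetic, using only the two figures above:

$$\frac{\text{share of energy}}{\text{share of mass}} = \frac{20\%}{2\%} = 10$$

Gram for gram, the brain burns roughly ten times more energy than the body’s average tissue.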
This energy constraint shaped our cognitive architecture. System 1 is cheap – it uses well-worn neural pathways and pattern recognition. System 2 is expensive – it requires sustained attention, working memory, and error-checking, all of which burn more glucose.
So there’s a continuous energy-efficiency trade-off: we can’t afford to run System 2 all the time, or we’d exhaust ourselves; and we can’t rely only on System 1, or we’d make too many dangerous mistakes.
This is why:
- You feel mentally tired after deep reasoning or reading difficult material.
- You prefer familiar tasks and habits because they cost less energy.
- Attention is a finite resource, because it is, at bottom, the allocation of metabolic energy to specific neural circuits.
Thinking is metabolically expensive, so the brain will outsource thinking whenever it can. Every technological leap, from writing to calculators to AI, represents an energy optimisation at the civilisational level. We build tools that take over energy-intensive cognitive labour so we can redirect our limited mental resources elsewhere. But AI accelerates this logic to an extreme. It externalises not just computation, but also judgment – one of the most energy-intensive human functions.
That’s both a feature and a risk:
- AI frees up human cognitive energy from drudgery.
- It also weakens our ability (and habit) to think deeply, since we no longer need to expend energy to reach conclusions.
It, in essence, lowers the cost of thinking – and that’s exactly why it’s potentially both a gift and a curse.
The inherent values of reading in modern life are cognitive, linguistic, moral, aesthetic, psychological, and civic.
It shapes not only what we know, but who we are capable of becoming. An alignment path that pairs deep reading with the use of AI may be the best equilibrium for avoiding the catastrophic risks of AI.
(This article is an edited version of my AI Safety Collab Governance project.)
