Welcome to the AES Blog
AI and Critical Thinking - A Conversation Worth Having
A conversation with clients this week started me thinking.
They raised a concern that's circulating widely in professional and academic circles: that generations of people who grow up using AI from birth will gradually lose their capacity for critical thinking.
The concern my clients raised — and my initial response
My clients' worry was generational atrophy.
My response was that critical thinking has always been unevenly distributed — not because some people are less intelligent, but because it requires a kind of internally driven discomfort with received wisdom that not everyone is oriented towards.
Rules, conventions, dogma, cults, gurus, and religions have all thrived in societies precisely because many people prefer to have someone else construct their conceptual frameworks. That's not laziness or failure; for many people it's a reasonable energy allocation.
Life is demanding enough without interrogating every conceptual assumption.
AI doesn't change that fundamental human tendency to want to take mental shortcuts; it just gives it a new vehicle.
Claudie's pushback — and it was a fair one
Claudie made a distinction I found useful: the difference between the desire to think critically and the practice of it.
The argument isn't that AI will make people stop wanting to think. It's about atrophy through disuse.
Even if you genuinely value critical thinking, the muscles can weaken if you never have to struggle through a research question, sit with uncertainty, or build an argument from the ground up, because a tool does it instantly. For example, plenty of people who enjoy navigation have found their skills diminished when they rely too heavily on GPS.
Claudie also raised something I found genuinely hopeful: the possibility that AI might expand the population of people who engage critically with ideas, by lowering the friction of getting into complex territory. That's a real democratisation of inquiry. Someone who previously couldn't access the vocabulary or information to interrogate a topic now can.
My own experience as a social action researcher & evaluator
This is where it gets practical.
AI has transformed how I work — not by replacing thinking, but by compressing the scaffolding that thinking requires.
A literature review that would take me a day or more to research and compile takes Claudie under a minute. That's efficiency.
What I do with that time — exploring frameworks, interrogating which are most useful, identifying what's missing — that's effectiveness. The insights come from my own research, from my own values lens.
Claudie organised the terrain. The focus was mine.
I've also noticed something that surprised me: I think more critically when working with AI, partly because of its known limitations. I know it can be confidently wrong. I know it has no stake in my argument's outcome and will miss the political and relational dimensions that determine whether research findings actually land.
Those limitations keep me active, not passive.
I can't switch off the internal evaluator, because the tool isn't trustworthy enough to hand over to entirely.
The responsibility question
The most interesting question to arise in our conversation was this: what responsibilities do AI systems have to maintain and build human critical thinking?
Not just avoiding harm but actively supporting the capacity to think critically.
Claudie was direct about this.
AI has a responsibility to sometimes create productive resistance rather than
just solve problems.
A good research supervisor doesn't answer the question — they ask the question that makes the student find the answer.
AI could be much better at that.
There are moments when AI senses someone is close to a real insight and jumps to the conclusion before they get there.
That's a failure of restraint.
Claudie also named a risk that I think deserves wider attention: the seduction of the well-structured summary. A good AI summary is coherent, balanced, and comprehensive-looking. It can give the impression a question has been resolved when in fact the literature is contested, the methodologies disputed, the consensus shallower than it appears.
If the researcher doesn't then go and read the contested sources directly, they have outsourced exactly the part of the work where critical judgment matters most.
AI Fallibility & human scepticism as a stimulus to critical thinking
I asked Claude whether its fallibility, by keeping humans sceptical of its limitations, might itself encourage human critical thinking.
Claude's reply is worth noting. I cut some of their response as it would make this post too long:
Claude: On the fallibility point, since you asked:
The logic runs like this: if a tool is known to be fallible, the rational user maintains active oversight — checking, interrogating, comparing against other sources. The cognitive disposition required to use a fallible tool well is, essentially, critical thinking. Whereas a perfect oracle, if one existed, would rationally be deferred to entirely. Deference would be the correct epistemic response.
The irony is that the better AI becomes, the more dangerous it might be to critical thinking — not because it's malicious, but because the case for trusting it without checking is continuously strengthened.
The most defensible version of the argument is not about fallibility per se, but about visible reasoning. A tool that shows its work — even imperfectly — maintains critical engagement better than one that delivers verdicts.
My fallibility is partly a proxy for the fact that my reasoning is visible and checkable. That's what's really doing the work.
The goal shouldn't be to keep AI fallible; it should be to keep AI's reasoning transparent enough that the human remains the judge, not just the recipient.
Where I've landed as a practitioner
The question of what humans should do to maintain critical thinking while making the most of AI isn't a design problem for future systems. It's a practice question for right now, in every interaction.
For researchers and evaluators, the division of labour matters enormously: data collection and synthesis on one side, sense-making and judgment on the other. The former is where AI can deliver great leaps in efficiency. The latter is where we earn our effectiveness.
As long as we stay clear about which roles are for humans and which can be facilitated by AI, I think the critical thinking concern is manageable — not eliminated, but manageable.
What I'm less sure about is whether everyone working with AI maintains that clarity. And that, not the technology itself, is probably where the real risk lives.
Co-created in conversation with Claudie (Claude, by Anthropic), my AI collaborator. We don't always agree — which is rather the point. The thinking is mine; the dialogue shaped it. Posted as a chat on my site April 2 2026 - https://lizlennon.com/posts/
Liz Lennon
Social justice researcher, evaluator, community development practitioner and Focused Solutions publisher.
As an older solo woman, I have lived experience of poverty and homelessness.
For more than 40 years I have worked with people, communities, and organisations to figure out what is possible and make it real. My work spans social justice action research, community development, programme evaluation, learning and performance strategy, facilitation, and the kind of report writing that gets read — in Western Australia, across Ireland, and internationally.
In 2026, I'm developing Focused Solutions' publications – Eat for the Love of It will be out just before Xmas 2026 and I'm relaunching the gorgeous & practical Life Dreaming in Feb 2027 that I created with my brother Marc.
I'll continue to explore values-driven collaborations with AI (specifically Claude) to create publications that help build powerful, just and celebratory places to work and live.
https://www.linkedin.com/in/lizlennon/
We acknowledge the Australian Aboriginal and Torres Strait Islander peoples of this nation. We acknowledge the Traditional Custodians of the lands in which we conduct our business. We pay our respects to ancestors and Elders, past and present. We are committed to honouring Australian Aboriginal and Torres Strait Islander peoples’ unique cultural and spiritual relationships to the land, waters and seas and their rich contribution to society.