Douglas Arnott

Owner and Founder of EDMF Language Services Kft.

Artificial Intelligence has become a powerful force in our daily lives, shaping how we access information, learn new skills, and even complete creative tasks. From generating essays and summarising articles to answering complex questions in seconds, AI promises speed and convenience. But as these tools become more deeply woven into our routines, one crucial question arises: Are we sacrificing our ability to think critically?

The allure of effortless answers

It’s easy to see why AI-generated content is so appealing. With just a couple of prompts, you can receive well-structured, articulate responses that often sound as if they were written by an expert. This convenience is especially tempting for students and professionals facing tight deadlines or information overload. Why spend hours researching and writing when a tool like ChatGPT, Gemini or Claude can do it in moments?

Yet this very convenience is at the heart of a growing concern. As we become more accustomed to accepting AI outputs without question, we risk losing our instinct to scrutinise, analyse and verify information. The danger lies not just in the quality of the answers, but in the gradual erosion of our own cognitive abilities.

New evidence

A groundbreaking study from MIT, published in 2025, sheds light on the real cognitive costs of relying on AI for learning and writing. Researchers asked students to complete essay-writing tasks, dividing them into groups: some used ChatGPT to assist with their writing, while others worked unaided or with traditional search engines. Using EEG technology, the team monitored the students’ brain activity during these tasks and then switched the groups’ methods for a follow-up session.

The findings were striking:

  • Reduced brain engagement: Students who used ChatGPT from the outset exhibited significantly weaker brain connectivity, particularly in regions associated with executive function and deep memory processing. By contrast, those who wrote without AI showed much stronger neural activity and memory recall.
  • Cognitive debt: When students who had relied on AI were later asked to write without it, they struggled to recall or reference their own previous work. This phenomenon, which the researchers termed “cognitive debt,” suggests that overreliance on AI can lead to long-term reductions in critical thinking, creativity and independent learning.
  • Loss of ownership: Students reported feeling less connected to their AI-assisted essays. Teachers involved in the study also noted that these assignments often lacked depth, consistency, and personal voice – sometimes feeling almost “soulless”.

These results are a wake-up call: while AI can make writing and research easier, it may also make us less engaged and less capable of deep, meaningful learning.

Challenging your brain

As students at Heriot-Watt University in Edinburgh, we were often put under stress in the simultaneous interpreting booth – deliberately – by our lecturers, to imitate the real-life situations a professional interpreter faces. On one such occasion, we had to interpret 3 or 4 minutes of a speech before answering questions about what the speaker had been talking about. Then, back in the booth, we interpreted for another 3 or 4 minutes, but this time the speaker delivered the speech at double the pace, barely stopping between sentences.

Interestingly, after we had got our breath back and were asked a second time what the speaker had been talking about, we could barely remember a thing. Our brains had gone into survival mode and focused solely on the job at hand (following the Usain Bolt of speakers and giving a faithful rendition of their speech into English). That left no spare capacity to actually store away what was being said. But it trained us to work at speed, and when we were confronted with a normal rate of delivery again, it felt much easier by comparison.

Compare this with the findings of the MIT study: 83% of AI users failed to quote from the essays they had just written themselves. Not because they had been challenged or had been writing under severe time pressure – quite the opposite, in fact. The use of AI tools had bypassed the brain’s normal process of memory formation, and the “creative process” had had no impact on the AI user. It was as if the essay had been written by someone else – so, apart from completing the task, what was the actual benefit to the student?

Risks of uncritical acceptance

The MIT study is just one piece of a larger puzzle. As AI-generated content proliferates, so too do the risks associated with accepting it uncritically. Here are some of the most pressing concerns:

AI systems are only as good as the data they’re trained on. They can inadvertently perpetuate biases, reinforce stereotypes, or even “hallucinate” – confidently producing false or misleading information (you may remember Perplexity doing its best to give me a false legislative reference, as described in the blog post last week). Without critical thinking, users can pass along these errors, amplifying misinformation on a massive scale.

When you let AI do the heavy lifting, you miss out on the cognitive benefits that come from wrestling with ideas, synthesising information, and articulating your own arguments. The MIT study’s findings on brain activity and memory recall highlight the risk of becoming passive consumers rather than active learners, leading to a decline in deep learning.

Teachers are already noticing the impact of AI on student work. Assignments generated or heavily assisted by AI often lack the depth, nuance and personal engagement that come from genuine effort. This not only undermines the educational process but also raises ethical questions about authorship and academic integrity. And don’t forget, the students of today are the working population of tomorrow…

A recent article by Thomas Claburn compares the launch of ChatGPT and the explosion of generative AI to the environmental contamination caused by atomic bomb tests, warning that AI models are now “polluting” the world’s data supply by generating synthetic content that future AI models may be trained on. This “data contamination” risks leading to “AI model collapse,” where models trained on AI-generated data become less reliable and creative over time. Academics argue that access to clean, human-generated data – analogous to pre-atomic “low-background steel” – will become increasingly valuable and could give early market entrants a significant advantage, potentially stifling competition and innovation. But for the average user, it raises questions about what we can trust and what we can’t.

So perhaps the most concerning issue is the gradual loss of agency over our own thinking. If we become accustomed to outsourcing our reasoning and creativity to machines, we risk losing confidence in our own abilities – and with it, the drive to question, challenge and innovate.

So how do we reclaim our critical thinking?

Ask probing questions: Whenever you use AI to generate content or answer a question, take a moment to assess the response. Where did this information come from? Does it align with what you already know? What assumptions might be baked into the answer?

Cross-check and verify: Don’t rely on a single source, especially not an AI-generated one. Cross-check your facts and arguments against reputable sources, and be alert to inconsistencies or gaps in reasoning.

Engage with your material: Use AI as a starting point, not a substitute. Read original texts, take notes, and try to explain concepts in your own words. The MIT study shows that unaided writing and active engagement strengthen neural pathways and improve retention.

Discuss and reflect: Share your findings and ideas with others. Debating and discussing topics helps uncover flaws in reasoning and deepens your own understanding. The teachers in the MIT study found this particularly lacking in AI-assisted work.

Promote AI literacy: Understanding how AI models work, their limitations and their potential for error is essential. The more you know about the technology, the better equipped you’ll be to use it wisely and critically.

In short

The age of AI offers incredible opportunities, but it also demands greater vigilance. The MIT study serves as a powerful reminder that convenience comes at a cost – what the researchers call “cognitive debt.” If we’re not careful, we may find ourselves trading deep understanding and independent thought for the illusion of effortless knowledge.

Let’s not lose sight of what makes us human: our curiosity, our scepticism, and our capacity for critical thought. Next time you use an AI tool, ask yourself – am I truly understanding, or just accepting what I’m given? Am I using this technology to enhance my thinking, or to avoid it?

The future of learning – and indeed, of society – depends on our willingness to remain active and engaged, and critical thinking could well be our most trusted tool.