Is artificial intelligence making us stupider?

THERE is only so much thinking most of us can do in our heads.
Try dividing 16,951 by 67 without a pen, paper, or calculator.
Try doing the weekly shopping without a list.
By relying on these devices to make life easier, are we getting smarter or dumber? Have we traded efficiency for creeping idiocy?
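As it happens, the division above comes out even, which a machine confirms instantly. A one-line check (in Python, purely illustrative; the snippet is not from the study):

```python
# Check the mental-arithmetic example from the text: 16,951 divided by 67.
# divmod returns the integer quotient and the remainder in one call.
quotient, remainder = divmod(16951, 67)
print(quotient, remainder)  # 253 0 — it divides exactly
```

The point stands either way: the calculator does in microseconds what most of us cannot do unaided.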
This question is particularly relevant to generative AI, such as ChatGPT, an AI chatbot used by 300 million people weekly.
A recent study by researchers from Microsoft and Carnegie Mellon University suggests AI might be affecting critical thinking—but the reality is more complex.
Thinking well
The study examined how users perceive AI’s impact on critical thinking.
Critical thinking involves assessing our thought processes against norms such as precision, clarity, accuracy, depth, and relevance. It is also shaped by cognitive biases, worldviews, and mental models.
The researchers used a 1956 model by educational psychologist Benjamin Bloom, which categorises cognitive skills like recall, comprehension, application, analysis, synthesis, and evaluation.
However, this strict hierarchy has since been criticised, because higher-order skills do not always depend on lower-order ones.
Evaluation, for example, can be the starting point of inquiry.
A further complication is that many AI systems themselves structure their responses around this outdated model. This raises a question: was the study measuring AI's effectiveness at shaping users' perception of critical thinking, rather than critical thinking itself?
Also missing from Bloom’s model is the key element of critical thinking: an overarching concern for truth. AI does not have this concern.
Higher confidence in AI equals less critical thinking
A previous study found a negative correlation between frequent AI tool usage and critical thinking. The new study expands on this by surveying 319 knowledge workers who discussed 936 tasks performed with AI.
Participants felt they used less critical thinking during task execution but engaged it more during verification and editing. In high-stakes work environments, the need for accuracy and fear of errors encouraged critical oversight.
Overall, users believed efficiency gains outweighed the effort needed to review AI outputs.
However, those with higher confidence in AI displayed less critical thinking, while those with confidence in their own abilities displayed more.
This suggests AI does not erode critical thinking in users who already possess it.
However, the study relied heavily on self-reporting, which introduces biases. Furthermore, participants defined critical thinking as “setting clear goals, refining prompts, and assessing AI-generated content to meet criteria and standards.” These standards were often task-oriented rather than related to critical thinking itself.
Becoming a critical thinker
The study implies that exercising critical thinking at the verification stage is better than blind reliance on AI.
The authors suggest AI developers add features to encourage user oversight—but is that enough?
Critical thinking should happen at every stage: when formulating questions, testing hypotheses, and scrutinising AI outputs for bias and accuracy.
The best way to ensure AI does not harm critical thinking is to develop it before using AI. This means challenging assumptions, evaluating diverse perspectives, and practising systematic reasoning.
Chalkboards improved our mathematical skills. Can AI improve our critical thinking? Maybe—but only if we use it to challenge ourselves rather than letting it do the thinking for us.
