Some knowledge workers risk becoming over-reliant on generative AI, and their problem-solving skills may decline as a result, according to a study penned by researchers from Microsoft Research and Carnegie Mellon University.
In a paper titled “The Impact of Generative AI on Critical Thinking”, the seven researchers report and analyze a survey in which they asked 319 knowledge workers who use generative AI at least weekly whether and how they apply critical thinking when using tools such as Copilot and ChatGPT.
The research found that workers who are confident tackling a task are more likely to apply critical thinking to the output of a generative AI service, while those less comfortable with a task often assume generative AI produced adequate answers and don’t bother to think about what the brainbox delivered.
The researchers suggest their findings point to a need for a rethink of the design of enterprise AI tools.
“Confidence in AI is associated with reduced critical thinking effort, while self-confidence is associated with increased critical thinking,” the paper states, adding “This duality indicates that design strategies should focus on balancing these aspects.”
AI tools, the team suggests, should incorporate mechanisms to support long-term skill development and encourage users to engage in reflective thinking when interacting with AI-generated outputs.
“This aligns with the goals of explainable AI,” the researchers said, referring to the practice of having AI outline how it delivered its output. The call for AI to show its workings is good news for the latest chain-of-thought AI models from DeepSeek and OpenAI – but merely explaining AI’s reasoning isn’t enough.
Good AI tools should foster critical thinking through proactive design strategies that encourage user reflection and provide assistance when necessary, the researchers wrote.
That might seem like criticism of current AI tools, but the paper doesn’t go there. The authors also stop short of recommending that knowledge workers reduce AI use to avoid “cognitive offload” and the potential “deterioration of cognitive faculties that ought to be preserved.”
The authors didn’t respond to questions from The Register.
But please don’t stop using our enterprise AI products
The paper concludes that we should adapt to an AI-infused world by applying critical thinking to verify AI outputs and how they can be used in daily work. Which may be what one would expect, given that six of the seven authors work at the company that sells Copilot.
Yes, the researchers admit, knowledge workers should be taught to “maintain foundational skills in information gathering and problem-solving [to] avoid becoming over-reliant on AI,” just not too much. Those working with systems like ChatGPT, Copilot, and other generative AI tools should be trained “on developing skills in information verification, response integration and task stewardship.”
This isn’t the only study to conclude that greater reliance on AI has a negative impact on critical thinking skills, but previous work concluded that we need to preserve our existing critical thinking skills rather than offload them to AI and reduce those crucial faculties to merely validating and integrating AI output.
“When using genAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration; and from task execution to task stewardship,” the authors conclude.
The paper will be presented at the 2025 Conference on Human Factors in Computing Systems, which starts in late April. ®