Artificial intelligence may one day make humans obsolete—just not in the way that you’re thinking. Instead of AI getting so good at completing tasks that it takes the place of a person, we may simply become so reliant on imperfect tools that our own abilities atrophy. A new study published by researchers at Microsoft and Carnegie Mellon University found that the more humans lean on AI tools to complete their tasks, the less critical thinking they do, making it more difficult to call upon those skills when they are needed.
The researchers tapped 319 knowledge workers—people whose jobs involve handling data or information—and asked them to self-report how they use generative AI tools in the workplace. For each task, participants described what they were asked to do, how they used AI tools to complete it, how confident they were in the AI’s ability to do the task, their ability to evaluate that output, and how confident they were in their own ability to complete the same task without any AI assistance.
Over the course of the study, a pattern revealed itself: the more confident workers were in the AI’s capability to complete a task, the more they felt themselves taking their hands off the wheel. Participants reported less “perceived enaction of critical thinking” when they felt they could rely on the AI tool, raising the potential for over-reliance on the technology without examination. This was especially true for lower-stakes tasks, the study found, where people tended to be less critical. While it’s very human to let your eyes glaze over during a simple task, the researchers warned that this could portend “long-term reliance and diminished independent problem-solving.”
By contrast, the less confidence workers had in the AI’s ability to complete the assigned task, the more they found themselves engaging their critical thinking skills. In turn, they typically reported more confidence in their ability to evaluate the AI’s output and improve upon it on their own.
Another noteworthy finding of the study: users who had access to generative AI tools tended to produce “a less diverse set of outcomes for the same task” compared to those without. That passes the sniff test. If you’re using an AI tool to complete a task, you’re going to be limited to what that tool can generate based on its training data. These tools aren’t infinite idea machines; they can only work with what they have, so it checks out that their outputs would be more homogeneous. The researchers wrote that this lack of diverse outcomes could be interpreted as a “deterioration of critical thinking” for workers.
The study does not dispute the idea that there are situations in which AI tools may improve efficiency, but it does raise warning flags about the cost of that efficiency. By leaning on AI, workers start to lose the muscle memory they’ve developed from completing certain tasks on their own. They start outsourcing not just the work itself, but their critical engagement with it, assuming that the machine has it handled. So if you’re worried about getting replaced by AI and you’re using it uncritically for your work, you just might create a self-fulfilling prophecy.