A new paper from researchers at Microsoft and Carnegie Mellon University finds that as humans increasingly rely on generative AI in their work, they use less critical thinking, which can “result in the deterioration of cognitive faculties that ought to be preserved.”
“[A] key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise,” the researchers wrote.
The researchers recruited 319 knowledge workers, who self-reported 936 first-hand examples of using generative AI in their jobs, and asked them to complete a survey about how they use generative AI (including which tools and prompts), how confident they are in the generative AI tools’ ability to do the specific work task, how confident they are in their own ability to evaluate the AI’s output, and how confident they are in their ability to complete the same work task without the AI tool. Some tasks cited in the paper include a teacher using the AI image generator DALL-E to create images for a presentation about hand washing at school, a commodities trader using ChatGPT to “generate recommendations for new resources and strategies to explore to hone my trading skills,” and a nurse who “verified a ChatGPT-generated educational pamphlet for newly diagnosed diabetic patients.”
Overall, these workers self-reported that the more confidence they had in AI doing the task, the less they engaged in what the paper calls the “perceived enaction of critical thinking.” When users had less confidence in the AI’s output, they applied more critical thinking and had more confidence in their ability to evaluate and improve the quality of the AI’s output and to mitigate the consequences of AI responses.
“The data shows a shift in cognitive effort as knowledge workers increasingly move from task execution to oversight when using GenAI,” the researchers wrote. “Surprisingly, while AI can improve efficiency, it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI, raising concerns about long-term reliance and diminished independent problem-solving.”
The researchers also found that “users with access to GenAI tools produce a less diverse set of outcomes for the same task, compared to those without. This tendency for convergence reflects a lack of personal, contextualised, critical and reflective judgement of AI output and thus can be interpreted as a deterioration of critical thinking.”
The researchers also noted some unsurprising conditions that lead workers to apply more or less critical thinking and attention to the quality of AI outputs. For example, workers who felt crunched for time used less critical thinking, while workers in “high-stakes scenarios and workplaces” who were worried about harm from faulty outputs used more critical thinking.
So, does this mean AI is making us dumb, is inherently bad, and should be abolished to save humanity's collective intelligence from atrophying? That’s an understandable response to evidence suggesting that AI tools are reducing critical thinking among nurses, teachers, and commodity traders, but the researchers’ perspective is not that simple. As they correctly point out, humanity has a long history of “offloading” cognitive tasks to new technologies as they emerge, and people have always worried that these technologies will destroy human intelligence.
“Generative AI tools [...] are the latest in a long line of technologies that raise questions about their impact on the quality of human thought, a line that includes writing (objected to by Socrates), printing (objected to by Trithemius), calculators (objected to by teachers of arithmetic), and the Internet,” the researchers wrote. “Such consternation is not unfounded. Used improperly, technologies can and do result in the deterioration of cognitive faculties that ought to be preserved.”
I, for example, am old enough to remember a time when I memorized the phone numbers of many friends and family members. Now that all those contacts are saved on my phone, the only number I remember is my own. I also remember that when I first moved to San Francisco for college, I bought a little pocket map and eventually learned to navigate the city and which Muni buses to take where. There are very few places I can get to today without Google Maps.
I don’t feel particularly dumb for outsourcing my brain’s phonebook to a digital contacts list, but the same kind of outsourcing could be dangerous in a critical job where someone over-relies on AI tools, stops thinking critically, and incorporates bad outputs into their work. As one of the biggest tech companies in the world, and the biggest investor in OpenAI, Microsoft is pot-committed to the rapid development of generative AI tools, so unsurprisingly the researchers here have some thoughts about how to develop those tools without making us all incredibly dumb. Their suggestion: build AI tools with this problem in mind, and design them to motivate users to keep exercising critical thinking.
“GenAI tools could incorporate features that facilitate user learning, such as providing explanations of AI reasoning, suggesting areas for user refinement, or offering guided critiques,” the researchers wrote. “The tool could help develop specific critical thinking skills, such as analysing arguments, or cross-referencing facts against authoritative sources. This would align with the motivation enhancing approach of positioning AI as a partner in skill development.”