Back in 2008, The Atlantic sparked controversy with a provocative cover story: Is Google Making Us Stupid?
In that 4,000-word essay, later expanded into a book, author Nicholas Carr suggested the answer was yes, arguing that technologies such as search engines were worsening Americans’ ability to think deeply and retain knowledge.
At the core of Carr’s concern was the idea that people no longer needed to remember or learn facts when they could instantly look them up online. While there might be some truth to this, search engines still require users to think critically in order to interpret and contextualize the results.
Fast-forward to today, and an even more profound technological shift is taking place. With the rise of generative AI tools such as ChatGPT, internet users aren’t just outsourcing memory – they may be outsourcing thinking itself.
That raises a critical question: Is ChatGPT making us stupid?
As a professor of information systems who’s been working with AI for more than two decades, I’ve watched this transformation firsthand. And as many people increasingly delegate cognitive tasks to AI, I think it’s worth considering what exactly we’re gaining and what we are at risk of losing.
Generative AI is changing how people access and process information. For many, it’s replacing the need to sift through sources, compare viewpoints and wrestle with ambiguity. Instead, AI delivers clear, polished answers within seconds. While those results may or may not be accurate, they are undeniably efficient. This has already led to big changes in how we work and think.
But this convenience may come at a cost. When people rely on AI to complete tasks and think for them, they may be weakening their ability to think critically, solve complex problems and engage deeply with information. Although research on this point is limited, passively consuming AI-generated content may discourage intellectual curiosity, reduce attention spans and create a dependency that limits long-term cognitive development.
To better understand this risk, consider the Dunning-Kruger effect. This is the phenomenon in which people who are the least knowledgeable and competent tend to be the most confident in their abilities, because they don’t know what they don’t know. In contrast, more competent people tend to be less confident. This is often because they can recognize the complexities they have yet to master.
In other words, what matters isn’t whether a person uses generative AI, but how. If used uncritically, ChatGPT can lead to intellectual complacency. Users may accept its output without questioning assumptions, seeking alternative viewpoints or conducting deeper analysis. But when used as an aid, it can become a powerful tool for stimulating curiosity, generating ideas, clarifying complex topics and provoking intellectual dialogue.
It’s often said that AI won’t take your job, but someone using AI will. But it seems clear to me that people who use AI to replace their own cognitive abilities will be stuck at the peak of “Mount Stupid,” the point on the popularized Dunning-Kruger curve where confidence is highest and competence is lowest. These AI users will be the easiest to replace.
It’s those who take the augmented approach to AI use who will reach the path of enlightenment, working together with AI to produce results that neither could produce alone. This is where the future of work is headed.
This essay started with the question of whether ChatGPT will make us stupid, but I’d like to end with a different question: How will we use ChatGPT to make us smarter? The answers to both questions depend not on the tool but on the user.
[Abridged]