The Conversation

How does AI affect how we learn?

Brian W. Stone,
Boise State University

When OpenAI released “study mode” in July 2025, the company touted ChatGPT’s educational benefits. “When ChatGPT is prompted to teach or tutor, it can significantly improve academic performance,” the company’s vice president of education told reporters at the launch. But any dedicated teacher would be right to wonder: Is this just marketing, or does research really support such claims?

While generative AI tools are moving into classrooms at lightning speed, robust research hasn’t kept pace. Early studies have shown benefits for certain groups such as computer programming students and English language learners.

Optimistic studies, including one published in Nature in May 2025, suggest chatbots may aid learning and higher-order thinking. But many of these papers have significant methodological weaknesses. Other research paints a grimmer picture, suggesting that AI may impair performance and skills such as critical thinking. One paper showed that the more students used ChatGPT while learning, the worse they performed later without it.

In other words, early research is only beginning to scratch the surface. Where else can we look for clues? As a cognitive psychologist, I’ve found that my field offers valuable guidance for identifying when AI can be a brain booster and when it risks becoming a brain drain.

Cognitive psychologists have long argued that our thoughts and decisions result from two processing modes: System 1 and System 2. System 1 is fast and automatic, requiring little conscious effort – like getting dressed or making coffee. System 2 is slow and deliberate, requiring more attention and sometimes painful effort, but often yielding more robust results. Mastering new skills depends heavily on System 2.

Struggle, friction and effort are crucial to learning, remembering and strengthening connections in the brain. Once a cyclist learns to ride, they rely on System 1 pattern recognition, but only after hours of System 2 strain. Mastery never comes without that initial effort.

I tell my students the brain is like a muscle: it takes genuine hard work to see gains. Without challenging that muscle, it won’t grow.

Research shows that offloading cognitive tasks can impair learning and memory and cause metacognitive errors – misreading one’s own understanding. Habitual GPS use, for example, can weaken spatial memory.

Using Google to answer questions can inflate confidence in one’s own knowledge. Similarly, one study found that students who used ChatGPT for research produced weaker reasoning than those who used traditional web searches. Another found that students who used AI to revise essays scored higher but gained no more knowledge, showing “metacognitive laziness.” Short-term boosts came at the expense of long-term skills.

Offloading can be useful once foundations are built. But those foundations require doing the initial work. AI should be used more like a personal trainer, pushing students to work harder, rather than like a robot doing the workout for them. AI has great potential as a scalable tutor – guiding, prompting and scaffolding learning – but only if used carefully.

Early results are mixed. High school students who reviewed math with ChatGPT performed worse on subsequent tests than those who didn’t use it. Even tutor versions that offered hints didn’t improve performance and left students overconfident. In short, AI didn’t help learning and sometimes harmed self-assessment.

Better design may fix some flaws. But the temptation to use AI by default, to avoid effort, will remain, just as it did with earlier technologies. Ultimately, learning demands work. Deep knowledge and mastery will always require a genuine cognitive workout – AI or not. [Abridged]
