They Say AI Makes You Dumber. Here's What They Got Wrong.

MIT’s latest study on LLMs tells a deeply misleading story.

Eric P. Rhodes
I read this so-called “important” AI study from MIT by Nataliya Kosmyna, Ph.D., and others, and it seems like it was mostly set up, intentionally or not, to guarantee the outcome the researchers are now pushing with clickbait headlines like:

"No, your brain does not perform better after LLM or during LLM use."

But the study didn’t test how people typically use AI to think or write. It tested what happens when you hand over the task and check out completely. Of course you don’t remember what you didn’t take part in. This isn’t new to cognitive science. And it’s not a flaw in ChatGPT. It’s a setup that invites passivity and then blames the result on LLM technology more broadly, even though the study only tested a narrow, constrained use of ChatGPT.

I don’t find this work as insightful as some folks in academia seem to think it is, judging by the comments I’ve seen here on LinkedIn. A lot of teachers are still trying to make sense of how to incorporate AI tools and understand how this changes their pedagogical approach. So this study feels more like it confirms a pre-existing bias than offers real insight.

In fact, it's wildly misleading. The research is built on a framework of bias, from how it defines “typical” ChatGPT use, to how the task was designed, to how far the conclusions stretch to support a narrative about cognitive decline linked to LLMs. It says more about the limits of the study than about the limits of human cognition or AI.

What I’d like to see is two more groups added to the three already in the study (brain only, internet only, and ChatGPT only).

One new group should mimic what happens when you ask a tutor or a friend to write the essay for you. That would give us a baseline for full handoff. If you’re going to claim cognitive decline, you need to show it’s worse than that.

The other should reflect how people actually use AI when it has access to real sources. Let ChatGPT pull from the internet or supporting material. Let people search, revise, compare, and actually think with the tool in context. That is the in-between space where thinking and offloading mix. And it is how a lot of people use these tools in practice.

I suggested this to the researchers. But their response to this and similar criticism has been some blanket version of “we state this in the limitations section of the paper.”

This study shows what happens when you treat AI like a friend who writes your paper for you. You skip the thinking, so nothing sticks. That’s fine to observe, but let’s not pretend it tells us something deeper about how people think with AI.


Who comes to mind when you read this post?

Send it their way. It might say what they’ve been needing to hear.


Follow Eric: X | Farcaster | SuperRare