In an experiment, people were given ChatGPT while solving math problems. 61% of them used it to get direct answers. So basically, they didn't have to think, they just saw the question and asked for the answer. Another 27% used it only for hints, not full answers. And the remaining 12% didn't use it at all, relying only on their own brains.
Later, when the AI was taken away and everyone had to solve problems on their own, that 61% (the ones who had relied on it for full answers) performed the worst. Worse than the people who never used the AI. Worse than the ones who used it only for hints.
I read about this experiment in a recent paper. Researchers from MIT, Oxford, UCLA, and other institutions ran it with 1,222 participants, using fraction-based math problems.
After about 10 to 15 minutes of solving problems, they took ChatGPT away. Then they asked everyone to solve a few more problems on their own.
The AI group did measurably worse. Their solve rate dropped. Their skip rate went up.
Ten to fifteen minutes. That's all it took.
Now I want to flag something before going further. This isn't the usual "AI is making us dumb" article. We've all read those. They're usually based on surveys or interviews or somebody's vibe.
This is different. This is a randomized controlled trial. The same kind of design they use to test medicines. Causal evidence. Not correlational.
So here's the thing. It's not the performance drop that got me. It's the persistence.
Every problem in the experiment had a skip button. Participants were told upfront there was no penalty for wrong answers and no penalty for skipping. Their payment was fixed.
So choosing to skip a problem.. that's not about ability. That's about whether you're willing to try.
The control group skipped about 7% of the test problems. The AI group skipped 13%. Almost double.
Remember, these were the same people who had just spent 10 minutes successfully solving problems with ChatGPT. They weren't bad at fractions. They had literally just done a bunch of them.
But once the AI was gone, something had shifted. They saw a problem.. and didn't even try.
The researchers have a theory about why this happens. They call it a reference point shift. When AI does something in 5 seconds, the idea of doing it yourself in 5 minutes starts to feel.. expensive. Painful. Even when you're perfectly capable of doing it.
It's the same thing that happens with food delivery. After a few months of Swiggy (food delivery app), the idea of actually cooking dinner starts to feel like a whole expedition. Not because cooking got harder. The comparison changed.
Who Gets Hurt the Most
The persistence drop didn't hit everyone equally. The researchers asked participants how they actually used the AI during the task. Then they grouped them.
Direct answer-getters. People who asked for hints. People who didn't use the AI at all.
The hint-askers? Their performance was basically fine. They actually did slightly better on the test than on the pretest. So did the non-users.
The direct answer-getters? They went backwards. Their solve rate dropped from 0.75 at pretest to 0.65 at test. Their skip rate climbed.
Same brains. Same skill level when they walked in the door. The pretest showed no difference between any of these groups.
The only thing that changed was how they used the AI in those 10 minutes.
This matters because it tells us the problem isn't AI itself. It's the relationship we form with it.
If you treat AI like a tutor, you're fine. You might even come out stronger. If you treat AI like a vending machine.. that's where the rot starts.
And the majority, 61%, used it as a vending machine.
That's not because people are lazy. It's because that's what the tool is optimized for. ChatGPT is built to give you the best, fastest, most complete answer. It doesn't push back. It doesn't say "try it yourself first." It just delivers.
The researchers make a point about this that I keep returning to. Good human mentors know when not to help. They scaffold. They withhold. They sometimes refuse, even when they could give you the answer, because they know what you need long-term is different from what you want right now.
AI doesn't do this. AI says yes. Always. Instantly.
My Take
If you've been reading me for a while, you know I'm not in the AI doomer camp. I genuinely think AI is one of the best things that's happened to us in a long time. I use it every day. I've built workflows around it. I'm an optimist on this stuff.
But..
There are problems I used to chew on for an hour. Now I tab over to Claude before I've even finished reading the question. Sometimes mid-sentence. Lol. There are emails I would've drafted from scratch. Now I don't. There are decisions I would've thought through on a walk. Now I just.. ask.
And the worst part? I don't feel the difference. It feels normal.
That's the boiling frog thing. Each individual moment of offloading feels harmless. Why would you spend 30 minutes writing something an AI can do in 30 seconds? You'd be a fool not to use it.
But this study suggests something is shifting underneath, even when you can't feel it.
Your baseline for "what's worth doing yourself" keeps moving. Things that used to feel reasonable start feeling exhausting. Tasks you would have happily done six months ago now feel like.. ugh, why am I doing this myself when the AI can?
That's not laziness. That's adaptation. And once your reference point moves, it's really hard to move it back.
It comes back to the same thing the researchers found. The hint-askers were using AI to think with. The answer-getters were using AI to think for them. Same tool. Same conversation sometimes. Two completely different relationships.
I don't have a clean prescription here. I'm not going to tell you to stop using AI. That would be silly coming from me. But I think the researchers are right that the responsibility for the long-term consequences can't sit entirely on the user.
The companies building these tools optimize for one thing: immediate satisfaction. Did the user feel helped? Did the answer come fast? Did they come back tomorrow?
Nobody is measuring whether you can still do the thing yourself in six months. Nobody is rewarded for telling you "actually, you should figure this one out on your own." There's no business case for it.
So until that changes.. the responsibility falls on us.
Here's the small thing I'm going to try. Before I open Claude for something, I'm going to ask one question. Am I asking it to teach me, or to do this for me?
If it's teaching, great. If it's doing.. I'll think about whether that's actually what I want.
That.. I think.. is still something we have control over.
What about you? Has AI changed how long you're willing to sit with a hard problem before reaching for it? Genuinely curious.
If you made it this far, you're not a casual reader. You actually think about this stuff.
So here's my ask. If this article made you think, even a little, share it with one person. Just one. Someone who's in the AI space. Someone who reads. Someone who would actually sit with these ideas instead of scrolling past them.
That's how this newsletter grows. Not through ads or algorithms. Through you sending it to someone and saying "read this."



