The Comfort Trap

I asked my AI to make me less capable. It said yes.

That’s not quite what happened, but it’s the right way to frame what I noticed. I was reviewing a study system I’d built — spaced repetition, adaptive difficulty, trap pattern tracking — and realised that most of the complexity was on the AI’s side. The elaborate session planning, the curated distractor logic, the pattern files it read before generating each question. I had spent weeks engineering a better quiz-giver, and almost no time engineering a better learner.

The session felt productive. That was the problem.

There’s a failure mode specific to AI-assisted work that’s hard to see because it masquerades as output. You leave the session having consumed something well-crafted — a sharp explanation, a clean framing, a polished draft — and it registers as progress. But consumption isn’t capability. Recognition isn’t recall. The AI did the cognitive work, and you watched.

This shows up everywhere, not just in learning systems. The AI that drafts your emails quietly atrophies your instinct for tone. The AI that frames the strategic question slowly erodes your habit of reaching for the frame yourself. None of these losses is catastrophic on its own. But they compound, and they compound invisibly, because each interaction feels like help.

The distinction that cuts through it: offload logistics, not judgment. Scheduling, formatting, lookups, boilerplate — fine. These have no growth value. But forming a view, noticing what matters in a pile of information, writing the first draft of your own framing — these atrophy quietly if the AI always goes first.

The fix isn’t using AI less. It’s designing the interaction differently. For anything where I want to stay sharp, the AI should ask what I think before it offers anything. Not as a ritual — as a genuine gate. My answer first, even rough, even wrong. Then it challenges, extends, corrects. The retrieval effort is the point. Friction is the feature.
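The gate described above can be sketched in a few lines. This is a hypothetical illustration, not the author's actual implementation: the function name `askesis_gate` and the callback parameters are invented for the example. The point it demonstrates is structural: the AI's feedback path is unreachable until the learner has committed a non-empty attempt.

```python
def askesis_gate(question, get_attempt, get_feedback):
    """Withhold all AI output until the learner commits an attempt.

    Hypothetical sketch: `get_attempt` collects the learner's answer,
    `get_feedback` produces the AI's challenge/extension/correction.
    """
    attempt = get_attempt(question)
    if attempt is None or not attempt.strip():
        # No committed attempt: the gate stays closed, no AI output.
        return {"question": question, "attempt": None, "feedback": None}
    # Only after a real attempt does the AI get to respond.
    feedback = get_feedback(question, attempt)
    return {"question": question, "attempt": attempt, "feedback": feedback}


# Usage with stand-in callbacks (a real system would prompt the user
# and call a model here):
result = askesis_gate(
    "What does spaced repetition optimise for?",
    lambda q: "long-term retention, I think",
    lambda q, a: f"Partly right: {a!r} -- it schedules reviews, too.",
)
```

The key design choice is that the ordering is enforced by control flow, not convention: there is no code path that yields feedback before an attempt exists, so the retrieval effort cannot be skipped.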

The Greeks had a word for this kind of deliberate practice: askesis — disciplined self-cultivation through effort, not through ease. The ascetic tradition gets associated with deprivation, but the root meaning is training. You become capable by doing the hard thing, repeatedly, without a shortcut.

The right question to ask of any AI tool isn’t whether it made the session easier; it’s whether you could handle the same situation without it next time. Comfort and capability diverge, and most tool design optimises for the former. So ask instead: what’s the minimum AI involvement that still produces growth?

I built a skill called askesis — a mode where the AI holds back, asks my view first, and only adds after I’ve committed to an answer. It won’t fix the underlying design instinct, but it names the problem. And naming the trap is at least a start.