The Skipping-Leg-Day Effect: Cognitive Risks of Over-Relying on AI

Using AI assistance for cognitively demanding tasks without staying engaged can cause your own skills to atrophy. Here's the runaway pattern, and how to fight it.

Sean Robinson

There's a hazard I've noticed since around 2024 in how people use the larger models to answer questions or solve problems end-to-end. The effect is easy to dismiss but worth naming explicitly: using AI assistance for cognitively demanding tasks without staying genuinely engaged with the work can cause your own skills to atrophy.

As usual, asking an AI model about this directly yields a correct but not very useful answer: "Use LLMs to accelerate and augment your work, not to bypass the understanding that makes you effective. When you ask a model to produce something, make sure you can evaluate its output critically. If you cannot tell whether the answer is good, that is a signal you need to engage more deeply with the material yourself — not hand it off more completely."

Ugh. But you may have felt this effect already. You've been busy. You know you could do the task at hand manually, or at least do a careful job of guiding the AI, of giving it the "how". But everyone is busy, so you could also just give a short, simple request and work with whatever comes out. And you do just that, and it works… sort of. But something feels wrong: you now have code (or an email, or a report, etc.) that you don't really understand.

And the next time you need to do anything with or around that code, you'll have to lean even more heavily on the model, and you'll be even less capable of specifying "the how". Iterate this a few times and the task of learning and understanding what you're actually doing will start to feel heavy… even "beyond reasonable". But then, why would you really need to know "everything" when the model is right there?

By now I'm sure you get the point. Fight this "cognitive runaway" effect as hard as you can. Remind yourself often of the patterns you're working with, and make those patterns part of your prompts. Resist the urge to "just let the AI handle the specifics".

Frequently asked

Common questions on this topic.

Can over-relying on AI really make my skills atrophy?

Yes, if you use AI to bypass the understanding of a task rather than augment it. When you stop specifying "the how" and simply accept outputs you cannot critically evaluate, you enter a cognitive runaway loop where you eventually become unable to perform the work without the tool.