Three Productive Ways to Use Modern LLMs: Tutor, Capable Employee, Recon Tool
There are several patterns that maintain and grow your own capabilities while using modern LLMs. The framing changes how you prompt, and what you get back.
As a Tutor: LLMs are genuinely excellent at this. The trick is to be deliberate about the fact that you're after knowledge, not output yet, and to ask in those terms. Useful patterns include: explaining unfamiliar concepts ("I don't understand how X works; can you explain it from first principles?"), testing your understanding ("I think X works like Y. Is that right? What am I missing?"), and exploring a topic interactively ("What are the main trade-offs between approach A and approach B for this problem?").
Note that all of these can, and should, apply to questions about your own code, and this can precede the context pre-filling stage. There's nothing wrong with using AI to understand existing patterns or to learn about unfamiliar systems. But do learn about the relevant systems and patterns before letting the model handle them, or it's likely to make changes you didn't want and aren't aware of.
Important caveat (mid-2026): Model accuracy on factual and technical questions continues to improve, but errors still occur, particularly in fast-moving domains, niche topics, and anything requiring real-time information. Treat model explanations as a strong first pass, and verify anything you intend to rely on.
Tutor mode combines particularly well with context pre-filling, e.g.: "Here is my current understanding of how our authentication service works: [your description]. Is this accurate? What am I missing or getting wrong?" This loads the conversation with the relevant frame before you start asking operational questions, and surfaces any misunderstandings early.
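To make this concrete, here's a minimal sketch of that pattern through the OpenAI Python SDK (any chat-capable model API works the same way); the model name and the authentication details described in the prompt are placeholders for illustration, not a recommendation:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Pre-fill the context: state your current mental model up front so the
# reply corrects *you* rather than lecturing from scratch.
prompt = """\
Here is my current understanding of how our authentication service works:
clients POST credentials to /login, receive a short-lived JWT, and every
other service validates that JWT against a shared public key.
Is this accurate? What am I missing or getting wrong?"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The same framing works verbatim in a chat window; the API version just makes the structure of the pattern explicit.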
As a Capable Employee: Here, we think of the model as a capable junior employee who needs good direction. Fill the context carefully. Front-load the relevant background, constraints, and goal. Don't assume the model knows your codebase, your organization, or your standards if you haven't actually told it, via prompting, context pre-filling, or agents files.
Specify the output format and the patterns you want; ambiguity in instructions produces ambiguity in results. And you must be able to evaluate the output. This is non-negotiable if you want code that doesn't bloat and drift into confusion in the long run, and I'm sorry, because this is the exact rule we all wish we could just break. But if you could not, in principle, do the task yourself (or at least check it), you cannot reliably use a model to do it. If that describes you, deliberately use the model in "tutor mode" for a bit instead, and you'll fix that fast.
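One way to make "specify the output format" enforceable rather than aspirational is to ask for machine-checkable output and then actually check it. A minimal sketch in Python, assuming you told the model to reply with a JSON object containing exactly the keys summary and risk_level (both hypothetical names for illustration):

```python
import json

REQUIRED_KEYS = {"summary", "risk_level"}
ALLOWED_LEVELS = {"low", "medium", "high"}

def parse_model_reply(reply: str) -> dict:
    """Reject anything that doesn't match the format we asked for."""
    data = json.loads(reply)  # raises an error if the model ignored "JSON only"
    if set(data) != REQUIRED_KEYS:
        raise ValueError(f"unexpected keys: {sorted(data)}")
    if data["risk_level"] not in ALLOWED_LEVELS:
        raise ValueError(f"bad risk_level: {data['risk_level']!r}")
    return data

# A conforming reply passes; anything else fails loudly instead of
# silently flowing into the rest of your pipeline.
print(parse_model_reply('{"summary": "looks fine", "risk_level": "low"}'))
```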
Good prompts are like air in your bike tires: a small investment up front that makes everything afterward easier. Time spent crafting a precise prompt almost always pays back in reduced back-and-forth and better results, so treat prompt design as part of the work, not as overhead. Then test outputs systematically, and don't treat the first response as final. For code, run it (see the sketch below). For analysis, probe the assumptions. For writing, read it critically.
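"Run it" can be as lightweight as a few assertions around whatever the model handed you. Here's a sketch with a hypothetical model-generated slugify helper pasted in; the function and its tests are illustrative, not from any real codebase:

```python
import re

def slugify(title: str) -> str:
    """Model-generated helper (hypothetical): lowercase, hyphen-separate."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# Quick checks before trusting it; run with pytest or plain python.
def test_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_edges():
    assert slugify("") == ""
    assert slugify("  spaced   out  ") == "spaced-out"

if __name__ == "__main__":
    test_basic()
    test_edges()
    print("ok")
```

A few minutes of this catches the kind of edge-case bug that's easy to miss when you only read the diff.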
Doing all of the above, you may find that you spend an appreciable amount of time writing careful prompts, then retrying, re-prompting, and tuning them. This is okay, and honestly a good thing. In my experience, these systems have still sped up my output of production-level code by around a factor of 5 to 10. That's a fantastic paradigm shift. I have also met otherwise-good engineers who claim a factor of 50, but who also ship bugs and confusing code an appreciable fraction of the time.
As a Recon/Research tool: Most modern agent systems can also do web research, which often means they can find, verify, and explain API contracts and other such details. This is often a good use on the way to changing or adding to existing patterns.