Purposefully Pre-Filling Context: The AI Prompting Pattern You're Not Using
Don't tell the AI what to do — first have it tell YOU how the system works. Then negotiate the change. The two-step context-prefilling pattern that produces consistent results.
When you engage with an agent-powered coding system, a lot of additional prompt language is injected "up front" along with your input: the creators' instructions about searching out relevant context first, planning what to do, spawning off sub-agents and deciding what context each one will get, and so on. Left to its own devices, that search can pull in far more than the task needs; you would much rather have it find and assemble a compact context that holds just what you want.
One way to accomplish this is to purposefully pre-fill the context by requesting a rundown of the system or systems you want to work on. An example: say you want to do something with an API endpoint. You know roughly what it does, and you want to change that pattern, perhaps by adding an additional facility. So you can "prefill" the context by requesting a report on how this setup works.
Example context-prefilling prompt: "In this API, there is an endpoint called /api/data/request_data. I believe it queries the database to get information about user data and provides that, with some user-input specifics about what data is sent back. Could you investigate this and report about how this system works?"
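To make the example concrete, here is the kind of handler the AI's report might come back describing. This is a minimal illustrative sketch, not real code from any project: the function name, the in-memory `DATA` rows, and the filter keys are all assumptions standing in for a real database-backed endpoint.

```python
# Hypothetical baseline behind /api/data/request_data, as the report might
# describe it. DATA stands in for a database table; filters stand in for
# the user-supplied specifics mentioned in the prompt.

DATA = [
    {"id": 1, "owner_id": "u1", "kind": "report", "value": 42},
    {"id": 2, "owner_id": "u2", "kind": "report", "value": 7},
    {"id": 3, "owner_id": "u1", "kind": "metric", "value": 99},
]

def request_data(filters):
    """Return every row matching the caller-supplied filters.

    Note: there is no ownership check at all -- any caller gets
    every matching row, regardless of whose data it is.
    """
    return [row for row in DATA
            if all(row.get(k) == v for k, v in filters.items())]

# A query for "report" rows returns rows belonging to ALL users,
# which is exactly the pattern the next step will negotiate changing.
print(request_data({"kind": "report"}))
```

A report like this surfaces the key fact you need for the next step: the query is not scoped to the requesting user.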
The results from this should be a fairly narrowly targeted search, a returned context containing mostly just what you wanted, and a report (which has now also entered the agent's context) that explains the specific thing you want to work on.
After this, the next step is to describe how you would like that pattern to change, and ask for the system to create a plan to compactly accomplish this.
Example plan-generating prompt: "Thanks. I need to add something to the request_data endpoint. I need it to only return data that is connected to that user. When the user sends in the input, the database query downstream should check the data against that ID. However, there's also an admin user that should have access to everyone's data, and that user is in a table called admins in the database. Don't write code yet, but can you make a minimal plan to accomplish this?"
Note the "how" information contained in this prompt. This should basically get you to a plan where everything that is about to actually change is clearly laid out and can be discussed or modified before implementation.
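A good plan from that prompt should converge on something like the sketch below: scope the query to the requesting user's ID unless that user appears in the admins table. As before, this is a hypothetical illustration; the in-memory `DATA` and `ADMINS` structures and all field names are assumptions standing in for the real database tables named in the prompt.

```python
# Hypothetical post-change behavior for /api/data/request_data:
# results are filtered to the requesting user's own rows, with an
# admin bypass. ADMINS stands in for the "admins" table.

DATA = [
    {"id": 1, "owner_id": "u1", "kind": "report", "value": 42},
    {"id": 2, "owner_id": "u2", "kind": "report", "value": 7},
    {"id": 3, "owner_id": "u1", "kind": "metric", "value": 99},
]

ADMINS = {"admin1"}

def request_data(user_id, filters):
    """Return matching rows, scoped to the requesting user.

    Admin users (those present in the admins table) still see
    everyone's data; everyone else sees only rows they own.
    """
    rows = [row for row in DATA
            if all(row.get(k) == v for k, v in filters.items())]
    if user_id in ADMINS:
        return rows  # admins have access to all users' data
    return [r for r in rows if r["owner_id"] == user_id]
```

Because the "how" was spelled out in the prompt, the plan's diff against the existing pattern is small and easy to review before any code is written.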
To recap, the main flow is: efficiently pre-fill a context, have the AI express the main existing patterns, and negotiate a change to those patterns rather than just describing what you want at the end. Then have the system write a plan, reaffirm what is changing if there is any confusion, and implement the plan. If things go wrong, fold what you have learned back into the context-prefill prompt and try again with a fresh session.
Anti-patterns to avoid:

- Just telling the system something like "X doesn't work in the product, please fix." This is a recipe for headaches down the road; better to undo whatever just happened, re-tool the initial prompt to contain what you now know, and restart a new agent session.
- Overly broad requests, like "Can you make a way to only send the data that belongs to a user to that user," with none of the "how" details. That is rolling heavily-loaded dice against success, and will often lead to at least one unexpected behavior.
- Not paying attention to everything the AI says it is about to do, or did. Even when you handle the context like this, "helpful additions" can still creep in that are not what you intended.