Author

Sean Robinson

Articles by Sean Robinson (10)

AGENTS.md and CLAUDE.md: Writing Guardrails for AI Coding Agents

Modern AI coding systems read project-level instruction files. Done well, they enforce the patterns you actually want. Done badly, they're a prompt tax with no return.

Read Article →

How Modern LLMs Work: From RNNs to Transformer Attention

Before today's AI, language models were dominated by recurrent neural networks. The shift to Transformer architecture and attention mechanisms changed what context costs — and why your prompt design suddenly matters.

Read Article →

Know-What vs Know-How: The AI Task Taxonomy That Saves You From Disasters

Tasks that require describing WHAT you want behave differently from tasks that require describing HOW you want it done. Getting the category wrong is how you end up with 100k lines of confused code in a day.

Read Article →

Advanced LLM Prompting: Exhaust Categories, Positive Wording, and One Simple Answer

Specific prompting techniques that come from understanding how attention-based models actually choose tokens. Why "do this" beats "don't do that" — and why offering multiple choices hurts quality.

Read Article →

Purposefully Pre-Filling Context: The AI Prompting Pattern You're Not Using

Don't tell the AI what to do — first have it tell YOU how the system works. Then negotiate the change. The two-step context-prefilling pattern that produces consistent results.

Read Article →

RAG, Agents, Memory, and Tool Calls: The AI Infrastructure Stack

Retrieval-augmented generation, agentic systems, persistent memory, structured tool calls — the architectural layer under modern AI deployments. What each one actually does.

Read Article →

The Skipping-Leg-Day Effect: Cognitive Risks of Over-Relying on AI

Using AI assistance for cognitively demanding tasks without staying engaged can cause your own skills to atrophy. Here's the runaway pattern, and how to fight it.

Read Article →

Three Productive Ways to Use Modern LLMs: Tutor, Capable Employee, Recon Tool

Three patterns that maintain and grow your capabilities while using AI. The framing changes how you prompt, and what you get back.

Read Article →

Training-Time vs Context-Injected Knowledge: How LLMs Actually "Know" Things

Modern LLMs have two distinct sources of knowledge with very different reliability profiles. Understanding the split changes how you prompt — and how you spot hallucinations.

Read Article →

What NOT to Do With Modern LLMs

Certain interaction patterns reliably produce poor results — or worse, results that look plausible but are subtly wrong. Two anti-patterns that come from rushing.

Read Article →