AI agents give more weight to information that appears at the start and end of the context window than to content buried in the middle — a well-documented phenomenon called the “lost in the middle” problem. The order in which you present information directly affects how reliably the agent follows it.
Not All Context Is Equal
When you put information into an AI agent’s context, you might assume it reads and weights everything equally — like a person reading a document carefully from start to finish. In practice, that is not quite right. Research on large language models consistently shows that they pay more attention to content at the beginning and end of long context windows than to content in the middle. Information sandwiched between large blocks of other text tends to have less influence on the output, even when it is directly relevant.
Think of it like a long meeting agenda. The first item and the last item tend to get the most attention. The items in the middle of a packed agenda often get rushed through or overlooked. Your AI agent has a similar attentional bias — not because it is lazy, but because of how the attention mechanisms in large language models actually work.
Practical Implications for How You Brief Your Agent
Put your most critical instructions at the very beginning of your system prompt — before any background context, before any examples, before any reference material. Tell the agent who it is, what it does, and what rules it must follow first, before anything else. Then include your background information and reference material. If there are any non-negotiable constraints — things the agent must always do or never do — repeat them briefly at the end of the system prompt as well. Beginning and end, not buried in the middle.
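As a concrete sketch, here is one way to assemble a system prompt with that shape. Everything here — the agent name, the rules, the helper function — is invented for illustration, not tied to any particular framework or real campus agent.

```python
# Hypothetical example: identity and hard rules at the start, reference
# material in the middle, and the non-negotiable rules repeated at the end.

CORE_IDENTITY = (
    "You are CampusBot, an advising assistant for first-year students. "
    "You answer questions about course registration only."
)

HARD_CONSTRAINTS = [
    "Never give financial aid advice; refer students to the aid office.",
    "Always cite the catalog section you are drawing from.",
]

BACKGROUND = "(long reference material: policies, catalog excerpts, FAQs)"

def build_system_prompt(identity, constraints, background):
    rules = "\n".join(f"- {c}" for c in constraints)
    return (
        f"{identity}\n\n"                                   # identity first
        f"Rules you must always follow:\n{rules}\n\n"
        f"Reference material:\n{background}\n\n"
        f"Reminder of non-negotiable rules:\n{rules}"       # repeated at the end
    )

prompt = build_system_prompt(CORE_IDENTITY, HARD_CONSTRAINTS, BACKGROUND)
```

The repetition at the end costs a few extra tokens, but it places the constraints in both of the high-attention positions.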
The same principle applies to how you structure a request. If you are asking your agent to analyze a long document and answer a specific question, put your question before the document rather than after it. “Answer this question: [question]. Here is the document: [document]” outperforms “Here is a document: [document]. Answer this question: [question]” — because the question is in the agent’s attention at the moment it is reading the relevant content, rather than being a faint memory from before the document started.
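If you assemble these requests programmatically, the ordering becomes a one-line decision. A minimal sketch — the function and variable names are hypothetical:

```python
def build_request(question, document):
    # Question first, so the model processes the document with the task
    # already in context rather than discovering it afterward.
    return f"Answer this question: {question}\n\nHere is the document:\n{document}"

request = build_request(
    "What deadline applies to late course withdrawals?",
    "(full policy document pasted here)",
)
```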
What This Means for Educators
For coaches and consultants building campus agents, this means your system prompt structure matters as much as its content. Audit your system prompts by asking: what is in the first 200 words, and what is in the last 100 words? Those positions carry the most weight. Your agent’s core identity, its primary job, and its most important constraints should all be in those positions — not buried after three paragraphs of background context.
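That audit is easy to automate. A rough sketch — the 200- and 100-word thresholds come from the rule of thumb above, not from any model-specific measurement:

```python
def audit_prompt(prompt, head_words=200, tail_words=100):
    # Return the high-attention regions of a system prompt so you can
    # eyeball what actually occupies them.
    words = prompt.split()
    head = " ".join(words[:head_words])
    tail = " ".join(words[-tail_words:])
    return head, tail

# Stand-in text; in practice, pass your real system prompt string.
head, tail = audit_prompt("You are CampusBot. " * 120)
```

If the head turns out to be background context and the tail is a trailing example, that is the signal to restructure.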
The Simple Rule
Critical instructions at the top. Supporting context in the middle. Key constraints repeated at the bottom. Never bury something important in the middle of a long prompt and assume the agent will apply it consistently. Position is influence.
