A system prompt is the private set of instructions you write to tell the agent who it is, what it does, and how it should behave; your students never see it. A user prompt is the actual message a student types when they talk to the agent. Both matter, but they play completely different roles.
The Two Layers of Every Agent Conversation
Imagine you’re briefing a substitute teacher before class. You pull them aside and say: “Here’s the lesson plan, here’s how our class runs, please don’t give out grades today, and redirect any behavior issues to the office.” That’s the system prompt — instructions the students don’t hear. Then the class starts and a student raises their hand to ask a question. That’s the user prompt — the live interaction the teacher responds to in real time.
AI agents work the same way. The system prompt is written by you — the educator or platform owner — and it loads invisibly when the agent starts. The user prompt is whatever your student types into the chat: “Can you explain what a prompt is?” or “I’m stuck on lesson three, can you help?” The agent uses both to generate its response, but treats them differently. System prompt instructions carry more authority and set the ground rules. User prompts trigger the response within those rules.
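Under the hood, most chat-style APIs keep these two layers as separate labeled messages. The sketch below is illustrative, not any one provider's API: the `"role"`/`"content"` field names follow the common messages-list convention, and `build_request` and `SYSTEM_PROMPT` are hypothetical names.

```python
# Minimal sketch of how a chat-style API typically separates the two layers.
# Field names and the helper below are illustrative, not a specific vendor API.

SYSTEM_PROMPT = (
    "You are a friendly course assistant for an online campus. "
    "Respond in a warm, encouraging tone. "
    "Never give out grades; redirect grade questions to the instructor."
)

def build_request(student_message: str) -> list[dict]:
    """Pair the fixed system prompt with whatever the student just typed."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},  # private, persistent
        {"role": "user", "content": student_message},  # live, variable
    ]

request = build_request("Can you explain what a prompt is?")
print(request[0]["role"])  # system
print(request[1]["role"])  # user
```

Notice that the system message is defined once, outside the function, while the user message is a parameter: that mirrors the division the analogy describes, with your briefing fixed in place and the student's question arriving fresh each time.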
Why This Distinction Matters
Most of the behavior problems people experience with AI agents come from confusing these two layers. If you want your agent to always respond in a warm, encouraging tone, that instruction belongs in the system prompt — not in a user message every time someone opens the chat. If you want your agent to never discuss competitors, that’s a system prompt rule. If you want it to format its answers as short paragraphs without bullet points, that goes in the system prompt too.
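Gathered into one place, the three rules above might read like this inside a system prompt. The wording is purely illustrative; write yours in your own voice:

```
You are a course assistant for [your school].
Always respond in a warm, encouraging tone.
Never discuss competitors or compare us to other programs.
Format answers as short paragraphs; do not use bullet points.
```

The point is that each rule is stated once, here, rather than repeated in every student message.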
The user prompt is where your students’ actual questions live — and it changes every conversation. Your system prompt is the constant. Tools like Claude, ChatGPT, and the AI integrations available through platforms like Cowork all separate these two layers clearly, even if the interface doesn’t make that division visible to end users.
What This Means for Educators
When you’re setting up an AI agent for your campus — whether it’s answering student questions in FluentCommunity, supporting a live Zoom session, or handling intake queries — spend your setup time on the system prompt. That’s where you define the agent’s identity, constraints, and behavior. Once that’s solid, the agent handles the user prompts consistently, no matter who’s typing or what they ask.
The Simple Rule
System prompt = your instructions to the agent (private, persistent, authoritative). User prompt = what your student types (live, variable, handled within your rules). Write the system prompt once and write it well. Everything else follows from there.
