AI generates the best exercises when they have a clear structure and a definable output. Scenario-based application tasks, fill-in-the-framework worksheets, structured reflection sequences, and case study analyses are all strong formats. Where AI struggles is with exercises that require genuine personal voice, original storytelling, or judgment calls that only someone with deep domain experience can evaluate fairly. Know the difference and you will use AI well for exercise design.
The Five Exercise Types AI Does Well
Scenario-based application exercises are where AI performs best. Give Claude a realistic professional situation — a difficult student conversation, a course module that isn’t working, a pricing decision — and ask it to design an exercise where students analyse the scenario and recommend a course of action. Claude can produce these quickly and can vary the scenario details to create multiple versions for different cohorts.
Fill-in-the-framework worksheets are the second strong category. If you have a model or process you teach — a five-step curriculum planning method, a client intake framework — Claude can take that framework and build a structured worksheet where students apply it to their own situation. The framework is yours; Claude just creates the fill-in-the-blank scaffolding around it.
Structured reflection sequences — the “look back, look at now, look forward” format described earlier in relation to reflection prompts — are also reliably good. Claude understands adult learning theory well enough to produce reflection sequences that avoid the trap of inviting surface-level responses.
Case study analyses work when you provide the case material. Claude can design the analysis questions and the evaluation criteria. You supply the real-world example; Claude builds the exercise framework around it.
Comparative ranking exercises — “rank these five approaches from most to least appropriate for this situation and explain your reasoning” — are the fifth strong category. They are quick to generate, force genuine analysis, and produce discussion-ready student outputs.
Where AI Falls Short
AI is not the right tool for designing exercises that require students to produce original creative work, articulate something deeply personal, or demonstrate professional judgment in a way that only an experienced practitioner can evaluate. Exercises like “write the opening to your signature talk” or “describe a moment that changed how you teach” need your human assessment — AI cannot reliably judge whether the output is authentic, resonant, or genuinely the student’s own perspective.
Be equally cautious with role-play exercises in sensitive professional contexts — coaching ethics scenarios, client boundary situations, crisis response protocols. Claude can draft these, but verifying what counts as a correct response in real professional practice requires your domain expertise.
What This Means for Educators
The division of labour is clean: use AI for the structured, output-based exercises where quality is measurable, and reserve your own design energy for the exercises that require professional judgment to evaluate. This approach lets you build a rich exercise library quickly while making sure the most human-centred parts of your course still carry your expertise.
The Simple Rule
If you can write a clear rubric for it, AI can design it. If the quality of the response depends entirely on your professional eye, design that one yourself.
