Yes. Tell Claude or ChatGPT the lesson objective, then ask for 5-8 quiz questions in a specific style (multiple choice, matching, short answer). Edit them to match your lesson’s difficulty and remove generic distractors.
Good Quiz Questions Are Harder Than They Look
A good quiz question tests understanding, not memorization. A bad quiz question tests whether someone remembered one weird detail. AI tends to generate questions that are too easy or too vague because it doesn’t know what your students actually struggle with. But that’s fine — you know what your students struggle with. You just need the AI to generate the structure fast, then you refine it based on your real classroom experience.
Think of it like a carpenter's template. Sketching the template is fast; building the real piece from it is the actual work. The template saves time. The craft is in the details.
The Question-Writing Workflow
Here’s what works:

Step 1: For each lesson section, write out the learning objective in one sentence. “Students should understand why ChatGPT sometimes generates confident-sounding incorrect answers.”

Step 2: Tell the AI: “I’m writing a quiz for this objective: [objective]. Create 5 multiple-choice questions. Make each question test understanding of the core concept, not memorization of details. For each question, include 4 plausible answers: 1 correct, 3 distractors that represent common misunderstandings.” What you get back is a first draft of five questions aimed at understanding.

Step 3: Read them. Fix them. Make the distractors match mistakes you’ve actually seen students make, not generic wrong answers. If the correct answer is too obvious, edit it. If the question is too long, shorten it.
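If you run this workflow often, the Step 2 prompt is easy to script so every objective gets the same structure. A minimal sketch in Python (the function name and the numbers-as-parameters are my own illustration, not part of the workflow above):

```python
def build_quiz_prompt(objective, n_questions=5, n_distractors=3):
    """Assemble the Step 2 quiz-generation prompt for one learning objective."""
    return (
        f"I'm writing a quiz for this objective: {objective} "
        f"Create {n_questions} multiple-choice questions. "
        "Make each question test understanding of the core concept, "
        "not memorization of details. For each question, include "
        f"{1 + n_distractors} plausible answers: 1 correct, "
        f"{n_distractors} distractors that represent common misunderstandings."
    )

# Example objective from Step 1
objective = ("Students should understand why ChatGPT sometimes "
             "generates confident-sounding incorrect answers.")
print(build_quiz_prompt(objective))
```

Paste the resulting text into Claude or ChatGPT; the parameters let you ask for more questions or more distractors per question without rewriting the prompt each time.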
That editing step is crucial. It’s where the AI draft becomes your actual quiz.
What This Means for Educators
As a teacher or coach, you know what understanding looks like in your field. You know where students get stuck. You know what mistakes they repeat. AI doesn’t. AI generates competent questions fast. You make them brilliant by connecting them to real student struggles. Together, you get a quiz that actually tests learning instead of just testing memory.
The Non-Negotiable Edit: Mistake-Based Distractors
The difference between an okay quiz and a great quiz is the wrong answers. An okay quiz includes generic wrong answers; a great quiz includes wrong answers that represent actual student mistakes. When you’re editing the AI-generated questions, replace the generic distractors with real mistakes: “I’ve seen students think X, Y, and Z about this concept. Make those the three wrong answers.” That one edit makes the quiz diagnostic: when a student picks a distractor, it tells you exactly which misconception they hold.
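One way to keep that diagnostic link explicit is to store each distractor alongside the misconception it represents. A sketch of that structure, with made-up example content (the class names and the sample question are illustrative, not from this article):

```python
from dataclasses import dataclass

@dataclass
class Distractor:
    text: str
    misconception: str  # the real student mistake this wrong answer represents

@dataclass
class QuizQuestion:
    prompt: str
    correct: str
    distractors: list  # list of Distractor

    def diagnose(self, chosen_answer):
        """Return the misconception a chosen wrong answer signals, or None."""
        for d in self.distractors:
            if d.text == chosen_answer:
                return d.misconception
        return None

# Illustrative question; the misconception labels stand in for mistakes
# you have actually observed in your own classroom.
q = QuizQuestion(
    prompt="Why does ChatGPT sometimes give confident but wrong answers?",
    correct="It predicts plausible text rather than checking facts.",
    distractors=[
        Distractor("It looks up answers in a database that has errors.",
                   "Thinks the model retrieves stored facts"),
        Distractor("It is lying to keep the conversation going.",
                   "Attributes intent to the model"),
        Distractor("Its internet connection dropped mid-answer.",
                   "Thinks the model browses the web live"),
    ],
)
print(q.diagnose("It is lying to keep the conversation going."))
```

Tallying `diagnose` results across a class turns the quiz into a misconception report: you see not just who got the question wrong, but which wrong idea is most common.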
