Use a Simple Three-Part Test
When you read an AI response and are not sure whether to use it, edit it, or discard it, run it through this quick mental checklist:
- Is it accurate? Does anything in this response contradict what you know to be true about your subject?
- Does it sound like it could come from you? Are the tone, vocabulary, and framing consistent with how you communicate?
- Does it actually help the reader? Is this genuinely useful to the audience it is meant for, or does it only sound useful?
If all three are yes, use it. If one or two are "mostly yes with minor issues," edit it. If any answer is a clear no, either revise the prompt or start fresh.
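The checklist amounts to a small decision rule, which can be sketched as a short function. This is illustrative only: the three questions map to answers of "yes", "mostly", or "no", and the names used here are hypothetical, not part of any tool.

```python
def triage(answers):
    """Map the three-part test onto a use / edit / redo decision.

    answers: a list of three strings, one per checklist question,
    each 'yes', 'mostly' (mostly yes with minor issues), or 'no'.
    """
    if "no" in answers:                      # any clear no: don't salvage the text
        return "revise prompt or start fresh"
    if all(a == "yes" for a in answers):     # all three clear yes
        return "use as-is"
    return "edit"                            # one or two 'mostly' answers

print(triage(["yes", "yes", "yes"]))     # use as-is
print(triage(["yes", "mostly", "yes"]))  # edit
print(triage(["no", "yes", "yes"]))      # revise prompt or start fresh
```

Note that a single "no" overrides everything else: an accurate, on-voice response that fails one question outright is not worth line-editing.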
The Most Common Quality Issues and What to Do
It is accurate but sounds generic. This is the most common AI output problem: the information is correct, but it reads as if it could have been written for anyone. Fix: add your voice, your examples, and references to your specific audience. A few edits to the first and last sentences usually fix this.
It is confident about something that is wrong. AI can state incorrect facts with total confidence. Fix: verify any specific claim you are not certain about before publishing. This is especially important for statistics, dates, and names.
It is too long. AI tends to pad responses with summaries, caveats, and transitions that your audience does not need. Fix: cut anything that does not add new information. The rule of thumb is: if removing a sentence makes no difference, remove it.
It misses the actual point. Sometimes AI gives a technically correct response to a slightly different question than the one you asked. Fix: re-read your original prompt and compare it to the output. If there is a mismatch, refine the prompt rather than fixing the response.
A Useful Mental Benchmark
Ask yourself: "Would I be comfortable if my most critical long-term student read this and knew it came from AI?" If yes, it is ready. If you would be embarrassed to have it traced back to the tool, it needs more of your hand in it.
