WORKFLOW

How to QA a course before SME review

Pre-review checks that catch the issues SMEs would have caught — but in minutes instead of hours, before anyone external sees a draft.

2025-04-01 · ASCENT LABS

The review cycle most teams follow looks like this: the ID drafts content, sends it to the SME, the SME finds problems, the ID revises, repeat. Each loop costs an SME meeting and several days of back-and-forth.

The fix is not to make the SME loop faster. The fix is to catch the issues that would burn an SME loop before any SME sees the content. That is what preflight QA is for, and it is the part of the workflow most teams skip.

What preflight QA actually catches

Most issues SMEs flag fall into a handful of categories. The same categories surface across industries, modules, and audiences:

  • Accuracy — outdated references, version-skew between policy and content, wrong terminology.
  • Clarity — sentences that assume context the learner does not have, jargon that breaks the flow, unclear pronouns.
  • Scaffolding — concepts introduced in the wrong order, prerequisites missing, examples that do not match the explanation.
  • Engagement — long stretches of text that should be broken up, missed opportunities for activity or check-for-understanding.
  • Accessibility — missing alt text, color-only signals, transcript gaps.
  • Compliance — missing required language, outdated regulatory references.

Almost none of these require subject-matter expertise to catch. They require a different reader's perspective and a careful pass.
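If you log findings as you triage them, those six categories make a workable controlled vocabulary. A minimal sketch in Python; the enum names are ours, not an industry standard:

    from enum import Enum

    class FindingCategory(Enum):
        ACCURACY = "accuracy"
        CLARITY = "clarity"
        SCAFFOLDING = "scaffolding"
        ENGAGEMENT = "engagement"
        ACCESSIBILITY = "accessibility"
        COMPLIANCE = "compliance"

Tagging every finding with one of these makes it easy to see, module over module, where your drafts keep failing.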

A 30-minute preflight checklist

This is the version we run on every storyboard before SME review:

  1. Read it once at speed. Do not stop to fix anything. Just read. Note any moment where you got confused — that is the moment to investigate.
  2. Check the version stamp on every external reference. SOPs, policies, regulatory citations. If the source is newer than the version the content cites, stop and update before the SME flags it. This check is mechanical enough to script; see the sketch after this list.
  3. Read it aloud. Your ear catches what your eye misses. Awkward phrasings, ambiguous pronouns, sentences that are technically correct but confusing — they all surface the moment you say them out loud.
  4. Read it as the audience. Pretend you are the most junior person who will take this course. Where do you get lost? Where do you skim because the prose is too dense?
  5. Read it as the audience again, but skeptical. Where would a learner who does not believe you push back? Those are the moments that need stronger evidence or examples.
  6. Check the activities. Do they match the stated learning objective? Or are they generic exercises pasted into the slot?
  7. Test the prerequisites. If a learner skips module 2, does module 3 still make sense? It should not — but if it does, you have a scaffolding issue.
  8. Audit the alt text. Real alt text. Not "image of a slide". The sketch after this list catches generic filler like that automatically.
  9. Look at the transitions. Between sections, between activities, between modules. The places where the audience's attention is most fragile.
  10. Spot-check examples. Are they relevant to the audience? Or are they generic stock examples that do not match the context the learner is in?

A 30-minute pass through this checklist on a 90-minute storyboard catches roughly 60 percent of what an SME would otherwise flag — and crucially, the easy 60 percent. That leaves the SME's time for what they are actually qualified to judge: domain accuracy, edge cases, and audience-specific nuance.

Why teams skip this

Two reasons. First, it feels like work that "should not need to happen" — the SME is the reviewer, after all. Second, it is hard to do well at scale because it requires sustained attention to a specific persona's perspective, which is exhausting.

Both reasons are real, and both are solvable. The first by making preflight a non-optional step in your workflow, baked into your project plan. The second by using preflight tooling.

What CourseReady does

CourseReady runs synthetic learner personas against your content. Each persona reads the asset from a defined perspective (role, prior knowledge, learning style) and produces a list of findings: places where the content fails the persona's needs, each with a severity, a category, and a citation back to the exact span of the asset.
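Whatever produces them, findings with that shape are worth pinning down as a record. A hypothetical sketch of the structure described above; the field names are ours, not CourseReady's actual schema:

    from dataclasses import dataclass

    @dataclass
    class Finding:
        persona: str            # e.g. "new hire, no prior product knowledge"
        severity: str           # e.g. "blocker", "major", "minor"
        category: str           # one of the six categories listed earlier
        span: tuple[int, int]   # offsets into the asset that the finding cites
        note: str               # where and how the content fails the persona

Because each record carries a citation span, a reviewer, or a tool like StorySync, can jump straight to the offending passage instead of hunting for it.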

You get the same kind of pass an attentive ID would do, but across five or ten personas at once, in minutes. The findings flow directly into StorySync as feedback items, so they go through the same revision-execution workflow as feedback from a real SME.

The point is not to replace the SME loop. The point is to make sure the SME loop is spent on the questions only the SME can answer, not on issues an attentive reader could have caught.

That is what good preflight QA does. It buys back time at the most expensive part of the workflow.
