Evaluation

Collect feedback after delivery, log changes and approvals, and use the data to improve the curriculum for the next cohort.

What you need before starting

  • Course delivered; access to Canvas analytics, survey results, instructor reports, and IDQA observation scores.
  • Update log or change-tracking doc (Google Doc or Sheet — see format below).

What you need to produce

Feedback summary with actionable findings; update log with changes, approval status, and recommendations; any revised assets and version notes for the next run.

What to do

  1. Collect learner feedback (end-of-module survey in Canvas or Google Forms) and instructor feedback (debrief notes, implementation observations).
  2. Pull assessment data: SBA pass rates, KBA averages, ICP scores, ticket completion rates (see the sketch after this list).
  3. Review IDQA observation scores for patterns in facilitation quality.
  4. Summarize findings: what worked, what didn’t, and what to change — with evidence.
  5. Log each change request with date, module, what changed, why, and who approved.
  6. Implement approved changes; update materials and version info in the update log.
  7. Store recommendations and dependencies for the next cohort cycle.
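
Step 2 usually reduces to a small script over a gradebook export. Below is a minimal sketch assuming a CSV with hypothetical columns `module`, `sba_score`, and `kba_score`; real Canvas exports name their columns differently, so treat this as a starting point, not the official pull.

```python
# Minimal sketch of step 2: summarize assessment data from a gradebook
# CSV export. Column names (module, sba_score, kba_score) are
# hypothetical -- adjust to match your actual Canvas export.
import csv
from collections import defaultdict

SBA_PASS_MARK = 80  # first-attempt pass line (see targets below)

def summarize(path):
    sba = defaultdict(list)
    kba = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            sba[row["module"]].append(float(row["sba_score"]))
            kba[row["module"]].append(float(row["kba_score"]))
    return {
        m: {
            "sba_pass_rate": sum(s >= SBA_PASS_MARK for s in scores) / len(scores),
            "kba_average": sum(kba[m]) / len(kba[m]),
        }
        for m, scores in sba.items()
    }

for module, stats in sorted(summarize("gradebook_export.csv").items()):
    print(f"{module}: SBA pass {stats['sba_pass_rate']:.0%}, "
          f"KBA avg {stats['kba_average']:.1f}")
```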

Exit criteria

Evaluation cycle is complete when: feedback is collected and summarized; changes are logged and approved; revisions are applied and versioned; recommendations are documented for next time.

Common mistakes

  • Feedback collected but not summarized or acted on.
  • Changes made without logging (date, version, approval) so the next person can’t trace history.
  • No link between feedback source and change (hard to improve systematically).
  • Relying only on surveys — combine with assessment data and observation scores.

Where feedback comes from

| Source | Tool / method | Cadence | What it tells you |
| --- | --- | --- | --- |
| Learner surveys | Canvas survey or Google Form at end of each module | Per module | Content clarity, pacing, engagement, relevance |
| Assessment data | Canvas gradebook — SBA pass rates, KBA averages | Per module | Which objectives learners are meeting vs. struggling with |
| ICP scores | Instructor tracking (spreadsheet or Canvas) | Daily / weekly | Preparation, participation, professional behavior trends |
| Instructor debriefs | Debrief notes doc after each cohort run | Per cohort | What worked in facilitation, what didn’t, pacing issues |
| IDQA observations | Observation Tool scores | Per observation cycle | Simulation fidelity, facilitation quality, coaching effectiveness |
| Ticket completion | Simulation ticket tracker (if used) | Weekly | Team output, collaboration patterns, handoff quality |

Don’t use a single source. Cross-reference: if SBA pass rates are low, learners report unclear lab instructions, and the instructor flagged pacing, that’s a pattern worth acting on. One low survey score on its own might be noise.
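
A toy illustration of that rule, with illustrative (not official) signal names: a finding becomes actionable only when at least two independent sources point the same way.

```python
# Toy illustration of the cross-referencing rule: a finding becomes
# actionable only when two or more independent sources agree.
# Signal names and values are illustrative.
def actionable(signals):
    """Return True when at least two independent sources agree."""
    return sum(signals.values()) >= 2

module_301_2 = {
    "sba_pass_rate_below_target": True,  # 62% vs. 80% target
    "survey_flags_unclear_labs": True,   # learners flagged instructions
    "instructor_flagged_pacing": True,   # debrief notes
}
one_off = {
    "sba_pass_rate_below_target": False,
    "survey_flags_unclear_labs": True,   # one low score -- likely noise
    "instructor_flagged_pacing": False,
}

print(actionable(module_301_2))  # True  -- pattern worth acting on
print(actionable(one_off))       # False -- single source, hold off
```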


Update log format

Every change to curriculum after initial delivery gets logged. Use a Google Doc or Sheet with these columns:

| Date | Module | Change type | What changed | Why (evidence) | Approved by | Version |
| --- | --- | --- | --- | --- | --- | --- |
| 2026-02-15 | 301.2 | Content | Rewrote GLAB 301.2.1 scenario to clarify steps 3-5 | 3 learners flagged unclear instructions; SBA pass rate 62% (target 80%) | J. Martinez | 1.1 |
| 2026-02-20 | 301.4 | Assessment | Added rubric to SBA 301.4 | Instructor reported inconsistent grading across sections | J. Martinez | 1.1 |
| 2026-03-01 | 302.1 | Timing | Split lab into 2 sessions (45 min each) | Instructor debrief: learners couldn’t finish in 60 min; confirmed by Canvas submission timestamps | R. Patel | 1.2 |

Rules:

  • Every entry links a change to evidence (survey data, assessment scores, observation notes, or instructor feedback).
  • Version numbers follow UCI conventions: 1001.1, 1001.2, etc.
  • “Why” column must be specific enough that someone reviewing the log 6 months later understands the decision.
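
If you mirror the log as a CSV, a small helper can enforce these rules automatically. This is a sketch under that assumption: the file name and field keys are hypothetical, and the Doc or Sheet remains the source of truth.

```python
# Minimal sketch: append a validated entry to a CSV mirror of the update
# log (header row assumed to already exist). File name and field keys
# are hypothetical.
import csv
from datetime import date

COLUMNS = ["date", "module", "change_type", "what_changed",
           "why_evidence", "approved_by", "version"]

def log_change(path, entry):
    # Enforce the rules above: no empty fields, so every change stays
    # traceable to evidence and an approver.
    missing = [c for c in COLUMNS if not entry.get(c)]
    if missing:
        raise ValueError(f"update log entry missing fields: {missing}")
    with open(path, "a", newline="") as f:
        csv.DictWriter(f, fieldnames=COLUMNS).writerow(entry)

log_change("update_log.csv", {
    "date": date(2026, 2, 15).isoformat(),
    "module": "301.2",
    "change_type": "Content",
    "what_changed": "Rewrote GLAB 301.2.1 scenario to clarify steps 3-5",
    "why_evidence": "3 learners flagged unclear instructions; "
                    "SBA pass rate 62% (target 80%)",
    "approved_by": "J. Martinez",
    "version": "1.1",
})
```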

What to measure after delivery

These are the metrics that tell you whether the curriculum is working. Pull them from Canvas, ICP tracking, and simulation records.

| Metric | Source | Target | What it signals |
| --- | --- | --- | --- |
| SBA pass rate | Canvas gradebook | 80%+ first attempt | Whether learners can demonstrate the skill, not just recall it |
| KBA average | Canvas gradebook | 75%+ | Baseline content comprehension |
| ICP score distribution | Instructor tracking | No learner below 70% by week 3 | Preparation and engagement trends |
| Ticket completion rate | Simulation tracker | 90%+ of assigned tickets closed with documentation | Team output and handoff quality |
| Learner satisfaction | End-of-module survey | 4.0+ / 5.0 average | Perceived relevance and clarity (use alongside hard data) |
| Instructor facilitation score | IDQA Observation Tool | Meets expectations on all rubric domains | Whether the simulation is being facilitated as designed |
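
To make the numeric targets checkable, encode them once and compare each cohort against them. A minimal sketch: the cohort values are placeholders, and the two rubric-based rows (ICP distribution, facilitation score) need their own per-learner or per-domain checks, so they are omitted.

```python
# Minimal sketch: encode the numeric targets above once, then list which
# metrics a cohort misses. Cohort values here are placeholders.
TARGETS = {
    "sba_pass_rate": 0.80,        # 80%+ first attempt
    "kba_average": 75.0,          # 75%+
    "ticket_completion": 0.90,    # 90%+ closed with documentation
    "learner_satisfaction": 4.0,  # 4.0+ / 5.0 average
}

def misses(metrics):
    """Return the names of metrics that fall below target."""
    return [name for name, target in TARGETS.items()
            if metrics.get(name, 0.0) < target]

cohort = {"sba_pass_rate": 0.62, "kba_average": 78.0,
          "ticket_completion": 0.93, "learner_satisfaction": 4.2}
print(misses(cohort))  # ['sba_pass_rate'] -- investigate before the next run
```
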
When to escalate vs. iterate

Iterate (normal process): SBA pass rates between 70% and 80%, minor pacing issues, a few unclear instructions. Log the fix in the update log, revise, and version.

Escalate to PM / stakeholders: SBA pass rates below 60%, systemic learner complaints about content relevance, instructor unable to facilitate simulation as designed, or objectives no longer aligned with job market needs. These may require a planning-level revision, not just a content fix.
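
The SBA threshold part of this decision can be written down directly. A sketch using the numbers above; note that the text leaves 60-70% unspecified, so that band is flagged for review, and the qualitative triggers (relevance complaints, facilitation breakdowns, objective drift) still escalate regardless of this number.

```python
# Sketch of the SBA-pass-rate decision rule from the thresholds above.
# The 60-70% band is unspecified in the text, so it is flagged for review.
def sba_action(pass_rate):
    if pass_rate < 0.60:
        return "escalate"       # planning-level revision with PM / stakeholders
    if pass_rate < 0.70:
        return "judgment call"  # below the iterate band, above the hard floor
    if pass_rate < 0.80:
        return "iterate"        # log, revise, and version via the update log
    return "on target"          # no action needed on this signal

print(sba_action(0.75))  # iterate
print(sba_action(0.55))  # escalate
```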
