Evaluation
Collect feedback after delivery, log changes and approvals, and use the data to improve the curriculum for the next cohort.
What you need before starting
- Course delivered; access to Canvas analytics, survey results, instructor reports, and IDQA observation scores.
- Update log or change-tracking doc (Google Doc or Sheet — see format below).
What you need to produce
Feedback summary with actionable findings; update log with changes, approval status, and recommendations; any revised assets and version notes for the next run.
What to do
- Collect learner feedback (end-of-module survey in Canvas or Google Forms) and instructor feedback (debrief notes, implementation observations).
- Pull assessment data: SBA pass rates, KBA averages, ICP scores, ticket completion rates (see the pass-rate sketch after this list).
- Review IDQA observation scores for patterns in facilitation quality.
- Summarize findings: what worked, what didn’t, and what to change — with evidence.
- Log each change request with date, module, what changed, why, and who approved.
- Implement approved changes; update materials and version info in the update log.
- Store recommendations and dependencies for the next cohort cycle.
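A minimal sketch of the assessment-data pull referenced above, assuming you export the Canvas gradebook to CSV. The file name and column names (`sba_score`, `kba_score`, `attempt`) are hypothetical, so map them to whatever your export actually contains; the 80% and 75% targets come from the metrics table later in this section.

```python
import csv

SBA_PASS_CUTOFF = 80   # score needed to pass an SBA (assumes a 0-100 grading scale)
SBA_TARGET = 0.80      # 80%+ first-attempt pass rate (from the metrics table)
KBA_TARGET = 75        # 75%+ average (from the metrics table)

def summarize_module(gradebook_csv: str) -> dict:
    """Compute SBA first-attempt pass rate and KBA average from a gradebook export.

    Column names are hypothetical; adjust them to your actual Canvas export.
    """
    sba_first_attempts, kba_scores = [], []
    with open(gradebook_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("sba_score") and row.get("attempt") == "1":
                sba_first_attempts.append(float(row["sba_score"]) >= SBA_PASS_CUTOFF)
            if row.get("kba_score"):
                kba_scores.append(float(row["kba_score"]))

    pass_rate = sum(sba_first_attempts) / len(sba_first_attempts) if sba_first_attempts else 0.0
    kba_avg = sum(kba_scores) / len(kba_scores) if kba_scores else 0.0
    return {
        "sba_pass_rate": pass_rate,
        "sba_on_target": pass_rate >= SBA_TARGET,
        "kba_average": kba_avg,
        "kba_on_target": kba_avg >= KBA_TARGET,
    }

if __name__ == "__main__":
    # Hypothetical export file for one module.
    print(summarize_module("module_301_2_gradebook.csv"))
```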
Exit criteria
Evaluation cycle is complete when: feedback is collected and summarized; changes are logged and approved; revisions are applied and versioned; recommendations are documented for next time.
Common mistakes
- Feedback collected but not summarized or acted on.
- Changes made without logging (date, version, approval) so the next person can’t trace history.
- No link between feedback source and change (hard to improve systematically).
- Relying only on surveys — combine with assessment data and observation scores.
Where feedback comes from
| Source | Tool / method | Cadence | What it tells you |
|---|---|---|---|
| Learner surveys | Canvas survey or Google Form at end of each module | Per module | Content clarity, pacing, engagement, relevance |
| Assessment data | Canvas gradebook — SBA pass rates, KBA averages | Per module | Which objectives learners are meeting vs. struggling with |
| ICP scores | Instructor tracking (spreadsheet or Canvas) | Daily / weekly | Preparation, participation, professional behavior trends |
| Instructor debriefs | Debrief notes doc after each cohort run | Per cohort | What worked in facilitation, what didn’t, pacing issues |
| IDQA observations | Observation Tool scores | Per observation cycle | Simulation fidelity, facilitation quality, coaching effectiveness |
| Ticket completion | Simulation ticket tracker (if used) | Weekly | Team output, collaboration patterns, handoff quality |
Don’t use a single source. Cross-reference: if SBA pass rates are low and learners report unclear lab instructions and the instructor flagged pacing, that’s a pattern worth acting on. One low survey score on its own might be noise.
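If you already track these signals per module in a spreadsheet, a simple triangulation check like the sketch below can surface where multiple sources agree. The signal names, the sample values, and the two-of-three rule are illustrative assumptions, not a fixed policy.

```python
# Hypothetical per-module signals pulled from the sources in the table above.
signals = {
    "301.2": {
        "sba_pass_rate": 0.62,              # Canvas gradebook
        "survey_clarity": 2.8,              # end-of-module survey, 1-5 scale
        "instructor_flagged_pacing": True,  # debrief notes
    },
    "301.4": {
        "sba_pass_rate": 0.84,
        "survey_clarity": 4.3,
        "instructor_flagged_pacing": False,
    },
}

def needs_action(s: dict) -> bool:
    """Flag a module only when at least two independent sources point the same way."""
    flags = [
        s["sba_pass_rate"] < 0.80,          # below the 80% target
        s["survey_clarity"] < 4.0,          # below the 4.0 satisfaction target
        s["instructor_flagged_pacing"],     # instructor raised it in debrief
    ]
    return sum(flags) >= 2

for module, s in signals.items():
    print(module, "act" if needs_action(s) else "monitor")
```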
Update log format
Every change to the curriculum after initial delivery gets logged. Use a Google Doc or Sheet with these columns:
| Date | Module | Change type | What changed | Why (evidence) | Approved by | Version |
|---|---|---|---|---|---|---|
| 2026-02-15 | 301.2 | Content | Rewrote GLAB 301.2.1 scenario to clarify steps 3-5 | 3 learners flagged unclear instructions; SBA pass rate 62% (target 80%) | J. Martinez | 1.1 |
| 2026-02-20 | 301.4 | Assessment | Added rubric to SBA 301.4 | Instructor reported inconsistent grading across sections | J. Martinez | 1.1 |
| 2026-03-01 | 302.1 | Timing | Split lab into 2 sessions (45 min each) | Instructor debrief: learners couldn’t finish in 60 min; confirmed by Canvas submission timestamps | R. Patel | 1.2 |
Rules:
- Every entry links a change to evidence (survey data, assessment scores, observation notes, or instructor feedback).
- Version numbers follow UCI conventions and increment with each logged change (1.1, 1.2, etc., as in the examples above).
- “Why” column must be specific enough that someone reviewing the log 6 months later understands the decision.
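If the log lives in a Sheet exported to CSV, a small check like the one below can enforce these rules before revisions are applied. The column names mirror the table above; the file name is hypothetical.

```python
import csv

REQUIRED = ["Date", "Module", "Change type", "What changed",
            "Why (evidence)", "Approved by", "Version"]

def validate_update_log(path: str) -> list[str]:
    """Return a list of problems: missing columns, or entries with no evidence or approver."""
    problems = []
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        missing = [c for c in REQUIRED if c not in (reader.fieldnames or [])]
        if missing:
            return [f"missing columns: {missing}"]
        for i, row in enumerate(reader, start=2):  # row 1 is the header
            if not row["Why (evidence)"].strip():
                problems.append(f"row {i}: change logged without evidence")
            if not row["Approved by"].strip():
                problems.append(f"row {i}: change logged without approval")
    return problems

if __name__ == "__main__":
    print(validate_update_log("update_log.csv"))
```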
What to measure after delivery
These are the metrics that tell you whether the curriculum is working. Pull them from Canvas, ICP tracking, and simulation records.
| Metric | Source | Target | What it signals |
|---|---|---|---|
| SBA pass rate | Canvas gradebook | 80%+ first attempt | Whether learners can demonstrate the skill, not just recall it |
| KBA average | Canvas gradebook | 75%+ | Baseline content comprehension |
| ICP score distribution | Instructor tracking | No learner below 70% by week 3 | Preparation and engagement trends |
| Ticket completion rate | Simulation tracker | 90%+ of assigned tickets closed with documentation | Team output and handoff quality |
| Learner satisfaction | End-of-module survey | 4.0+ / 5.0 average | Perceived relevance and clarity (use alongside hard data) |
| Instructor facilitation score | IDQA Observation Tool | Meets expectations on all rubric domains | Whether the simulation is being facilitated as designed |
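One way to run these checks consistently each cohort is to encode the numeric targets as data and compare the cohort's numbers against them. The thresholds below come straight from the table; the cohort values are made up for illustration.

```python
# Targets from the table above; cohort numbers are illustrative only.
targets = {
    "sba_pass_rate": 0.80,
    "kba_average": 75,
    "min_icp_score": 70,
    "ticket_completion": 0.90,
    "learner_satisfaction": 4.0,
}

cohort = {
    "sba_pass_rate": 0.73,
    "kba_average": 78,
    "min_icp_score": 66,
    "ticket_completion": 0.92,
    "learner_satisfaction": 4.1,
}

# Report every metric that missed its target.
misses = {m: (cohort[m], t) for m, t in targets.items() if cohort[m] < t}
for metric, (actual, target) in misses.items():
    print(f"{metric}: {actual} (target {target})")
```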
When to escalate vs. iterate
Iterate (normal process): SBA pass rates between 70% and 80%, minor pacing issues, a few unclear instructions. Log the change in the update log, revise, and version.
Escalate to PM / stakeholders: SBA pass rates below 60%, systemic learner complaints about content relevance, instructor unable to facilitate simulation as designed, or objectives no longer aligned with job market needs. These may require a planning-level revision, not just a content fix.
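The thresholds above translate into a simple triage rule. This sketch assumes the SBA pass rate is the primary numeric driver and that the qualitative escalation triggers are recorded as booleans, which is a simplification of the judgment call described here.

```python
def triage(sba_pass_rate: float,
           systemic_relevance_complaints: bool = False,
           simulation_unfacilitatable: bool = False,
           objectives_misaligned: bool = False) -> str:
    """Map this section's thresholds to an action: escalate, iterate, or monitor."""
    if (sba_pass_rate < 0.60
            or systemic_relevance_complaints
            or simulation_unfacilitatable
            or objectives_misaligned):
        return "escalate to PM / stakeholders (planning-level revision)"
    if sba_pass_rate < 0.80:
        return "iterate: log the change, revise, and version"
    return "no action: monitor next cohort"

print(triage(0.58))   # -> escalate
print(triage(0.74))   # -> iterate
print(triage(0.86))   # -> no action
```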
Related tools/templates
- IDQA Observation Tool — facilitation quality scores
- Product Development Dashboard — tracking and status
- QA Worksheet & Rubric — review standards
- Reference — handoff and templates