Appendix H - Outcomes and Assessment Evaluation Rubric

Adapted from the Assessment Progress Template (APT) Evaluation Rubric

James Madison University. © 2013 Keston H. Fulcher, Donna L. Sundre & Javarro A. Russell

Full version: https://www.jmu.edu/assessment/_files/APT_Rubric_sp2015.pdf

Rating scale:
1 – Beginning
2 – Developing
3 – Good
4 – Exemplary
1. Student-centered learning outcomes
Clarity and Specificity
1 – Beginning: No outcomes stated.
2 – Developing: Outcomes are present, but the verbs are imprecise (e.g., know, understand) and the description of the content, skill, or attitudinal domain is vague.
3 – Good: Outcomes generally contain precise verbs and a rich description of the content, skill, or attitudinal domain.
4 – Exemplary: All outcomes are stated with clarity and specificity, including precise verbs and a rich description of the content, skill, or attitudinal domain.
2. Course/learning experiences that are mapped to outcomes
1 – Beginning: No activities/courses listed.
2 – Developing: Activities/courses are listed, but the link to outcomes is absent.
3 – Good: Most outcomes have classes and/or activities linked to them.
4 – Exemplary: All outcomes have classes and/or activities linked to them.
3. Systematic method for evaluating progress on outcomes
A. Relationship between measures and outcomes
1 – Beginning: Seemingly no relationship between outcomes and measures.
2 – Developing: At a superficial level, the content assessed by the measures appears to match the outcomes, but no explanation is provided.
3 – Good: General detail is provided about how outcomes relate to measures. For example, the faculty wrote items to match the outcomes, or the instrument was selected “because its general description appeared to match our outcomes.”
4 – Exemplary: Detail is provided regarding the outcome-to-measure match. Specific items on the test are linked to outcomes, and the match is affirmed by faculty subject experts (e.g., through a backwards translation).
B. Types of Measures
1 – Beginning: No measures indicated.
2 – Developing: Most outcomes are assessed primarily via indirect measures (e.g., surveys).
3 – Good: Most outcomes are assessed primarily via direct measures.
4 – Exemplary: All outcomes are assessed using at least one direct measure (e.g., tests, essays).
C. Specification of desired results for outcomes
1 – Beginning: No a priori desired results for outcomes.
2 – Developing: A desired result is stated (e.g., student growth, comparison to the previous year’s data, comparison to faculty standards, performance vs. a criterion), but without specificity (e.g., “students will perform better than last year”).
3 – Good: The desired result is specified (e.g., student performance will improve by at least 5 points next cycle; at least 80% of students will meet criteria). “Gathering baseline data” is acceptable for this rating.
4 – Exemplary: The desired result is specified and justified (e.g., last year the typical student scored 20 points on measure x; content coverage has been extended, which should improve the average score to at least 22 points).
D. Data collection and research design integrity
1 – Beginning: No information is provided about the data collection process, or data were not collected.
2 – Developing: Limited information is provided about data collection, such as who and how many took the assessment, but not enough to judge the veracity of the process (e.g., thirty-five seniors took the test).
3 – Good: Enough information is provided to understand the data collection process, such as a description of the sample, testing protocol, testing conditions, and student motivation. Nevertheless, several methodological flaws are evident, such as unrepresentative sampling, inappropriate testing conditions, a single rater for ratings, or a mismatch with the specification of desired results.
4 – Exemplary: The data collection process is clearly explained and is appropriate to the specification of desired results (e.g., representative sampling, adequate motivation, two or more trained raters for performance assessment, a pre-post design to measure gain, a defended cutoff for performance vs. a criterion).