
OSCQR is NOT a Scorecard!
Why the OSCQR Rubric Uses Time-Based Rating Criteria Rather than Scores
The OSCQR rubric is intentionally designed as a formative, diagnostic tool—not a summative scorecard.
Its design intentionally distinguishes it from other online quality rubrics in a number of ways: it can be used flexibly and adapted to any online course quality review model. That flexibility means it can support faculty development formatively (through reflection and self-assessment) and also be applied in summative contexts (such as peer or program reviews), always with the aim of guiding improvement. Its goal is to support faculty and online instructional designers as they identify, prioritize, and plan for continuous improvements in online course design. A course cannot “fail” OSCQR; instead, the review process highlights opportunities for enhancement that can be addressed over time.
The Problem with Summative/Scored Rubrics
Research on audit culture and performance metrics cautions that when quality frameworks rely on scoring, they often incentivize compliance rather than genuine improvement. Shore & Wright (2015) argue that “audit has become a central organizing principle of governance that is reconfiguring power relations and reshaping institutions and work practices” (p. 421), closely tied to the proliferation of rankings and indicators that now drive organizational priorities (p. 422). Similarly, Strathern’s adaptation of Goodhart’s Law warns: “When a measure becomes a target, it ceases to be a good measure” (Mattson et al., 2021, p. 2). In other words, summative scoring can shift attention from improving to metric-chasing.
Lucas (2014) documents faculty resistance to checklist-style quality assurance frameworks in higher education, concluding that they are often perceived as externally imposed mechanisms that emphasize accountability over authentic enhancement. As she notes, quality processes are frequently experienced by academics as “forms of surveillance and control rather than genuine enhancement” (p. 220) and as “bureaucratic, mechanistic and decoupled from real academic work” (p. 222). A rubric designed as a “pass/fail” instrument risks alienating faculty and undermining its developmental value. Such approaches can also be counterproductive to the faculty–instructional designer relationship, positioning online instructional designers as enforcers rather than collaborators. This adversarial dynamic erodes the trust, respect, and collegiality essential for authentic dialogue and joint problem-solving in online faculty development and course design.
In contrast, OSCQR is intentionally structured to reinforce a collaborative, improvement-oriented partnership between faculty and instructional designers. By emphasizing estimated effort, impact, and prioritization rather than scores, OSCQR frames review as a formative process, built on reflection and self-assessment, that fosters trust and mutual respect. Its focus on continuous improvement ensures that the review process is experienced as supportive rather than punitive, centering the shared goals of effective online course design, online teaching, and learner success.
Support for a Formative, Improvement-Oriented Rubric
The broader literature on rubrics highlights their strongest potential when used as formative tools that scaffold reflection and guide iterative improvement. For example, Panadero & Jonsson (2013) conclude that rubrics can play a key role in supporting self-regulation by helping individuals plan, monitor, and reflect on their work, functioning as a guide for approaching tasks and clarifying expectations (pp. 133–135). Panadero (2020) similarly found that rubrics make “expectations and criteria explicit, which was seen to facilitate assessment processes such as feedback and self-assessment” (p. 101). In the OSCQR context, these same principles apply to online faculty: the rubric serves as a developmental guide for planning, monitoring, and reflecting on online course design, as well as a framework for online course review, refresh and improvements. This aligns with OSCQR’s role in making online course design criteria explicit, providing structured feedback, and supporting faculty self-assessment and more formal course review processes in the spirit of continuous online course design improvement.
Research specific to online course quality evaluation also supports formative use. Lee et al. (2020) demonstrated that rubrics have high instructional value when employed to improve design, not only to certify it. Shattuck et al. (2014) describe the value of online quality rubrics undergoing cycles of continuous improvement, reflecting evolving best practices rather than fixed benchmarks. A recent scoping review of online course quality instruments confirms that while checklists and rubrics are widely used, their greatest value lies in guiding ongoing improvement rather than serving as summative scorecards (McInnes et al., 2024).
Taken together, these findings suggest that scored, summative rubrics can inadvertently reduce authentic engagement with quality principles by fostering compliance behaviors, while OSCQR, intentionally designed to be used as a formative and improvement-oriented rubric, aligns with evidence supporting transparency, feedback, and iterative course enhancement. Importantly, by framing review as a collaborative process rather than a judgment, OSCQR helps build trust and respect between faculty and instructional designers. This approach reinforces a shared commitment to continuous improvement, online teaching effectiveness, and learner success.
Beyond its role in new online course development, review, and refresh, OSCQR is also intended for summative use in a variety of contexts. These include formal or informal course review and refresh processes, carried out through reflective self-assessment by experienced online faculty or by instructional designers, peer reviewers, and multidisciplinary teams, most often focusing on mature online courses. OSCQR is also used in certification, program review, and accreditation self-studies. Even in these cases, however, OSCQR does not function as a scorecard. Instead, its flexibility allows it to adapt to any online course quality review model, framing results in ways that support prioritization and continuous course design improvements over time.
What the OSCQR Rating Criteria Are Based On
In OSCQR, the rating criteria (Sufficiently Present, Minor Revision, Moderate Revision, Major Revision, and Not Applicable) were created to provide a common, practical framework for prioritizing course design improvements. These categories support both formative uses (such as faculty self-assessment and reflection) and summative implementations (such as peer, instructional designer, or program-level reviews), ensuring that results guide action planning without functioning as a scorecard.
- Estimated Effort/Time to Revise
  The original anchor point was practical: how much time or effort a typical revision might require.
  - Minor Revision: ~30 minutes or less
  - Moderate Revision: ~30 minutes–2 hours
  - Major Revision: 2+ hours
- Impact on the Learner Experience
  Beyond time, the categories also help reviewers and faculty gauge the relative importance of the issue for learner engagement, success, and accessibility. For example, adding a missing “alt text” to one image might be a minor revision, but revising all course images and documents for accessibility is major in scope and impact.
- Support for Prioritization in Action Planning
  The rubric and its Action Plan feature also allow reviewers to tag revisions as essential or important, in addition to estimating the effort required. This makes it easier for faculty to identify “low effort/high impact” improvements when time is constrained, while planning for improvements that might require additional time, skills, or technologies.
- Culture of Continuous Improvement
  The time-based criteria were designed to reinforce OSCQR’s philosophy that quality online course design is an iterative and evolving process. By avoiding numerical scores, OSCQR prevents a “pass/fail” perception and keeps the focus on ongoing growth, not compliance.
OSCQR’s time-based rating criteria are:
- Non-evaluative → The categories do not grade or score a course.
- Non-judgmental → An online course can’t “fail” OSCQR.
- Flexible → They are not fixed metrics but practical estimates to support planning.
- Grounded in practice → They reflect the typical effort, scope, and impact of a revision.
- Forward-looking → They inform the online course refresh Action Plan and support continuous improvement and iteration, helping online courses evolve to meet learner needs, accessibility standards, and best practices in online teaching.
What do you think about this?
If you are interested in learning more about OSCQR, join one of our free webinars (OSCQR Fall 2025 webinar series registration) and check out the OSCQR website. OSCQR certifications (Self-Assessment, Reviewer & Trainer) are also available.
References
Lee, J. E., Recker, M., & Yuan, M. (2020). The validity and instructional value of a rubric for evaluating online course quality: An empirical study. Online Learning, 24(1), 245–263.
Lucas, L. (2014). Academic resistance to quality assurance processes in higher education. Policy and Society, 33(3), 215–224.
Mattson, C., Bushardt, R. L., & Artino, A. R., Jr. (2021). When a measure becomes a target, it ceases to be a good measure. Journal of Graduate Medical Education, 13(1), 2–5.
McInnes, R., Hobson, J. E., Johnson, K. L., Cramp, J., Aitchison, C., & Baldock, K. L. (2024). Online course quality evaluation instruments: A scoping review. Australasian Journal of Educational Technology, 40(2), 55–75.
Panadero, E. (2020). A critical review of the arguments against the use of rubrics. Educational Research Review, 30, 100329.
Panadero, E., & Jonsson, A. (2013). The use of scoring rubrics for formative assessment purposes revisited: A review. Educational Research Review, 9, 129–144.
Shattuck, K., Zimmerman, W., & Adair, D. (2014). Continuous improvement of the QM rubric and review processes: Scholarship of integration and application. Internet Learning, 3(1), 25–34.
Shore, C., & Wright, S. (2015). Audit culture revisited: Rankings, ratings, and the reassembling of society. Current Anthropology, 56(3), 421–438.