Literature review
To contextualize the research in this report, a review of the existing literature on the Quality Matters (QM) standards was conducted, with particular attention to their impacts on faculty and students; research evaluating the standards themselves was also included in the search. Most of the literature evaluating the QM standards focuses on course-level implementation of the rubric rather than on evaluating the standards themselves. The peer-reviewed literature (summarized in Appendix A. Overview of peer-reviewed research) consisted mainly of commentaries and case studies of single courses’ or institutions’ experiences of implementing and/or evaluating the QM rubric at the class level. One article presented a social network analysis of how a QM training course influenced users’ potential reviewer networks; another applied learning analytics to the QM review process to further improve course quality. Aside from those two outliers, most of the literature described a single course’s, program’s, or institution’s experience of implementing the QM rubric. Few peer-reviewed articles reported program evaluations or experimental studies comparing student outcomes in QM-reviewed courses with those in non-QM-reviewed courses. Major themes in the literature related to practice change, student satisfaction, faculty buy-in, and the continuous quality improvement of the QM standards.
Practice Change
Two articles touched on how the QM standards resulted in practice change. According to Adair (2017), evidence from course review exit surveys conducted since 2006 indicated a link between serving as a reviewer and reviewers’ voluntary application of the QM standards to their own courses (p. 156). Ali and Wright (2017) explored whether the use of QM as a quality control tool produced better teaching in the online classroom and whether using such a tool actually changed instructors’ practices. They determined that QM was valuable for identifying logistical errors within courses, for constructing and using course objectives, and for aligning course objectives with learning outcomes. Instructors changed how they developed their course objectives to ensure that they were specific, observable, and measurable.
Student Satisfaction
Two articles indicated that students were more satisfied with QM-reviewed courses than they had previously been with non-QM-reviewed versions of comparable courses (Crews, Bordonada, & Wilkinson, 2017; Shattuck, n.d.). One study found that students ranked teaching presence lower in the QM-revised version of a course than in the version taught the previous spring (Swan, Matthews, Bogle, Boles, & Day, 2012). The authors hypothesized that this finding could be due in part to the instructor focusing on meeting the QM standards and thus spending less time attending to teaching presence online during that course iteration.
One interesting finding regarding student satisfaction came from Ralston-Berg and Nath (2015), who surveyed students at a small US university about the degree to which they valued the course features proposed by QM. They found a discrepancy in both directions: some features that QM ranked as extremely important were ranked lower by students, while other features that students ranked as extremely important were weighted less heavily in QM’s view of course design. This mismatch between the rubric’s priorities and students’ priorities may influence student satisfaction in QM-reviewed courses.
Faculty Buy-In
Three articles addressed faculty buy-in to the QM rubric. Roehrs, Wang, and Kendrick (2013) found that faculty members with experience in online teaching, but no QM experience, found the QM rubric easy to use. Five of the six participants agreed with the certified peer reviewers about which standards were present or absent in their courses. Cowan, Richter, Miller, Rhode, Click, and Underwood (2017) looked at using QM to build a community of practice through social network analysis. They found that individuals sought out for advice are key to the success of QM implementation and influence the opinions of others (e.g., change agents, champions). They also found that 65% of respondents were not ready to move further with QM training or course reviews, and 33% wanted more information before deciding whether to adopt QM practices. The main reasons for not wanting to move forward with QM were lack of time and the absence of credit in promotion and tenure processes for work on online course development.
Budden and Budden (2013) provided a list of recommendations to increase faculty buy-in to QM. These include:
- Encourage faculty certification with rewards
- Require faculty teaching 100% or 50% online courses to obtain QM certification
- Use examples from all disciplines
- Provide additional examples of how standards can be applied to different courses
- Encourage all administrators to consider QM certification a strong faculty development activity
- Encourage administrators to consider QM a good service activity
- Encourage certification with non-financial rewards such as release time, dedicated parking, or assignment of a TA to help develop classes
- Provide funds to purchase or lease equipment to enhance the learning environment (p. 383)
Continuous Quality Improvement of the Standards
According to QM, the standards undergo a process of continuous quality improvement. To continually update and improve the rubric, data from every official course review conducted with a particular edition of the rubric are analyzed (Adair, 2017). As part of this analysis, each rubric standard is assessed by two reviewers to establish interrater reliability. The goal is to identify standards that need improvement, but the analysis also indicates how consistently reviewers apply the standards (Adair, 2017). QM conducted a review of 111 online courses from 29 institutions and provided an overview of the standards most commonly unmet. These include:
- 22% of courses lacked an instructor introduction
- 22% of courses lacked activities to foster student-to-student interaction
- 24% of courses did not state any pre-requisite knowledge or skills
- 24-27% of courses did not have links to academic support, campus tutoring services, or student support services
- 27% of courses did not state learning objectives
- 32% of courses lacked any netiquette standards
- 38% of courses did not provide students with any self-check or feedback on their progress
- 59% of courses did not provide adequate text alternatives to sound or video materials (Legon & Runyon, 2007, p. 2).
Quality Matters published several articles and reports that addressed the continuous quality improvement (CQI) of the standards. Much of the reported evaluation work appears to have been commissioned by QM, and while reported outcomes were generally favourable, clear links to the QM standards as an intervention were not made. Additional grey literature focused on measuring student satisfaction with the quality of online courses. One article outlined the use of a survey and a colour-based personality test (similar in concept to a Myers-Briggs test) to determine student satisfaction based on perceived learning needs. Another study used a survey to examine the degree to which students value the QM standards when considering the design of their online courses. More grey literature than peer-reviewed literature focused on evaluating the standards themselves, as opposed to evaluating the implementation of the QM rubric at the course level. Nevertheless, little research has focused on evaluating the QM standards or on student satisfaction with them. Appendix B. Overview of grey literature relating to the QM standards provides an overview of the grey literature on this subject.
Quality Matters at Ontario’s publicly funded colleges
No peer-reviewed articles were found that addressed the use of the QM framework in the Ontario public college context. Only 4 of the 24 publicly funded Ontario colleges advertise using the QM framework, or an adapted version thereof, on their websites. Since so little information was available online, each institution was contacted to inquire about its use of the QM framework. Nine colleges reported using some form of the QM framework, while 15 colleges had not yet responded and do not indicate use of the QM framework on their websites. Appendix C. Overview of QM standards use in Ontario’s public colleges provides an overview of which publicly funded Ontario colleges use the QM framework in their quality assurance processes.