18 Assessment in MOOCs

Darren Gowlett

JUNE 2021

Massive open online courses (MOOCs) are large, often free, asynchronous courses offered by post-secondary institutions around the world, including some of the most prestigious. They offer training and knowledge on a wide range of subjects to anyone with an Internet connection; hundreds of thousands of students may be enrolled in a single course. They promise a revolution in the accessibility and democratization of higher education worldwide (Patru & Balaji, 2016, p. 23). Since the first MOOC was offered in Manitoba in 2008 (Sandeen, 2013), they have diversified and become increasingly popular. Popular MOOC platforms include Coursera (the largest, boasting 77 million learners and over 200 partner universities), edX, FutureLearn, Udacity, Udemy, Peer 2 Peer University, and the French platform FUN. The courses themselves, whether free or not, are often provided by universities. Some are offered as a leisure activity (many UK seniors take courses with FutureLearn) and some are credit courses for university. MOOCs allow students to earn whole degrees online or, more innovatively, allow for the modularization of degree programs.

Assessment does not seem a natural fit for open courses that students often engage with casually and are known to take lightly. As the field matures, MOOCs will have to come to grips with the problems inherent in assessing learning that is online, open, and massive.

Problems with assessment

Many MOOCs require little or no assessment, simply delivering course material and hosting discussion forums. Many others provide assessment only as a formative or summative guide for participants, often through easily scalable multiple-choice, matching, and peer-assessment tasks. Problems arise as MOOCs try to move into the territory of established institutions. One of the features that separates free MOOCs at one end of the spectrum from expensive degree programs at the other is valid assessment. Assessment provides accountability for any certification of achievement, which in turn allows recognition by the outside world. To remain affordable, MOOCs cannot employ qualified assessors the way institutions offering residential (outside-the-home) education can, and students may have no contact with teaching staff at all (Huisman, Admiraal, Pilli, van de Ven & Saab, 2018). A student could complete the coursework for an entire graduate degree, but without quality assessment the work is not recognized, and unrecognized work has less value.

As with college and university online courses, there is the problem of proctoring assessments, which threatens test validity. This problem is not unique to MOOCs, however, especially since colleges and universities around the world moved online during the COVID-19 pandemic, and software solutions are available.

Peer assessment is commonly used to provide feedback on student writing or projects. The software automatically matches students to assess other students' work. The resulting assessment can be very uneven, as assessing students vary widely in ability, motivation, and effort. The quality of peer assessment can be very low, and the wide range of students means that many are not true peers (Meek, Blakemore & Marks, 2017).

The online nature of MOOCs gives rise to a tension between the technological limitations of assessment and deeper pedagogical needs (Hills & Hughes, 2016). The open nature of MOOCs means a wide variety of students and a wide range of prior learning.

A recent innovation in MOOCs is the introduction of badges. Badges are an intermediate step between MOOCs taken purely for interest and credentialed education such as college or university: they provide a structured set of courses with assessments and credentials (Hills & Hughes, 2016). They can be a stepping stone to higher education, but however they are used, they make quality assessment even more important in the MOOC space.

Assessment

There are a number of solutions to bridge the gap between courses for interest and courses for credit. With large course enrollments at low cost, an instructor or facilitator cannot provide adequate personalized assessment, particularly on written work. For MOOCs to be accredited and trusted, they need secure and reliable assessments.

MOOCs do informal learning very well. They can also do informal formative assessment very well. Simple multiple-choice or matching exercises are commonly used to test student mastery of concepts. The computer-mediated nature of MOOCs also allows for adaptive assessments based on Item Response Theory (IRT) (Bates, 2014): assessments can be constructed that adapt to the level of the learner, providing a precise and efficient measure of ability.
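To make this concrete, the following is a minimal Python sketch of an IRT-driven adaptive quiz using the two-parameter logistic (2PL) model. It is an illustration only, not any platform's actual implementation; the item identifiers, parameter values, and simulated responses are invented for the example.

```python
import math

# Hypothetical item bank: (item_id, discrimination a, difficulty b).
ITEM_BANK = [
    ("q1", 1.2, -1.0),
    ("q2", 0.8, 0.0),
    ("q3", 1.5, 0.5),
    ("q4", 1.0, 1.5),
]

def p_correct(theta, a, b):
    """2PL probability that a learner of ability theta answers the item correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, asked):
    """Choose the unasked item that is most informative at the current ability estimate."""
    candidates = [item for item in ITEM_BANK if item[0] not in asked]
    return max(candidates, key=lambda item: item_information(theta, item[1], item[2]))

def update_theta(theta, correct, step=0.5):
    """Crude ability update; a real system would use maximum-likelihood
    or Bayesian estimation rather than a fixed step."""
    return theta + step if correct else theta - step

# Adaptive loop with simulated (invented) responses.
theta, asked = 0.0, set()
for simulated_correct in [True, True, False, True]:
    item = next_item(theta, asked)
    asked.add(item[0])
    theta = update_theta(theta, simulated_correct)
print(f"Estimated ability after {len(asked)} items: {theta:.2f}")
```

At each step the sketch selects the item with the greatest information at the current estimate, which is the standard selection rule in computerized adaptive testing; the precision of the final estimate grows as more items are answered.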

If MOOCs are to approximate university classes, student writing needs to be assessed. One way to do this is peer assessment. Peer assessment has the added advantage of offering the peer assessor more exposure to a range of student work and a different perspective on the assignment (Huisman, Admiraal, Pilli, van de Ven & Saab, 2018). Peer assessment may be the only universally applicable assessment method for MOOCs, particularly for formative assessment (Xiong & Suen, 2018). "Quality" in terms of peer feedback can be defined as "the degree to which student assessments correspond to those of experts or the degree to which the content of student evaluation adheres to characteristics of good feedback recognized in the theory of the discipline" (Ashton & Davies, 2015). The quality of peer assessment can be improved by having multiple reviewers for each submission (Coursera uses four) (Meek, Blakemore & Marks, 2017).
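As a concrete illustration of how multiple peer reviews can be combined and then checked against expert judgement (the definition of quality quoted above), here is a minimal Python sketch. All scores are invented; the median over the assigned reviewers is just one simple, outlier-resistant choice of aggregate.

```python
from statistics import median

# Invented peer scores: each submission is rated by four peer reviewers,
# as in the Coursera arrangement mentioned above.
peer_scores = {
    "essay_1": [7, 8, 8, 6],
    "essay_2": [5, 9, 6, 6],
    "essay_3": [9, 9, 8, 10],
}
# Invented expert scores used as the benchmark for "quality".
expert_scores = {"essay_1": 8, "essay_2": 6, "essay_3": 9}

# Aggregate the peer ratings for each submission.
aggregated = {sub: median(scores) for sub, scores in peer_scores.items()}

# One crude quality indicator: mean absolute gap between aggregate and expert.
gaps = [abs(aggregated[sub] - expert_scores[sub]) for sub in aggregated]
print("Aggregated peer scores:", aggregated)
print("Mean absolute deviation from expert:", sum(gaps) / len(gaps))
```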

Meek, Blakemore and Marks' (2017) study of peer feedback in a biomedical science MOOC found that students who write higher quality papers tend to give higher quality reviews; Huisman, Admiraal, Pilli, van de Ven and Saab (2018) found a similar result. Thus, overall assessment quality can be increased by raising rater quality, for example by ensuring minimum standards of entry to a MOOC course. Peer rater quality can also be improved by counting the accuracy of the ratings a student gives toward that student's final grade (Xiong & Suen, 2018). Scaffolded rubrics, too, are important for improving the quality of peer feedback (Ashton & Davies, 2015).

Peer feedback could be conducted in small groups, with students grouped with others of different interests or abilities based on student profiles (Xiong & Suen, 2018). Students can then be evaluated by peers within a group or as a group (a group mark). Group assessment would require more oversight from teaching staff to ensure that groups are working well, and group leaders chosen from among the students could increase its efficacy. One simple way to form mixed-ability groups from profiles is sketched below.
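The following is a minimal Python sketch of one possible grouping heuristic, not a method prescribed by the cited studies: rank students by an assumed prior-ability field in their profiles, then deal them into groups round-robin so that each group spans the ability range. The names and scores are invented.

```python
# Invented student profiles with a prior-ability score in [0, 1].
students = [
    {"name": "A", "ability": 0.9}, {"name": "B", "ability": 0.4},
    {"name": "C", "ability": 0.7}, {"name": "D", "ability": 0.2},
    {"name": "E", "ability": 0.8}, {"name": "F", "ability": 0.5},
]
GROUP_COUNT = 2

# Sort from strongest to weakest, then deal round-robin into groups.
ranked = sorted(students, key=lambda s: s["ability"], reverse=True)
groups = [[] for _ in range(GROUP_COUNT)]
for i, student in enumerate(ranked):
    groups[i % GROUP_COUNT].append(student["name"])

print(groups)  # e.g. [['A', 'C', 'B'], ['E', 'F', 'D']]
```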

MOOCs may not be able to completely replicate the assessment capabilities of residential education, but they do provide a viable alternative. Advances in proctoring technology and refined methods of assessment are bringing the capability of MOOCs closer to that of traditional courses. The growth in online education during the COVID-19 pandemic has made online learning more palatable for students and institutions, and the pandemic may have brought the early promise of MOOCs as an alternative to traditional residential education closer to reality. In the future, MOOCs will improve the reliability of their assessments and become a more important part of credentialed education.

 

References

Ashton, S., & Davies, R. S. (2015). Using scaffolded rubrics to improve peer assessment in a MOOC writing course. Distance Education, 36(3), 312–334. https://doi-org.ezcentennial.ocls.ca/10.1080/01587919.2015.1081733

Bates, T. (2014, November 8). A review of MOOCs and their assessment tools. Online Learning and Distance Education Resources. Retrieved from https://www.tonybates.ca/2014/11/08/a-review-of-moocs-and-their-assessment-tools/

Hills, L., & Hughes, J. (2016). Assessment worlds colliding? Negotiating between discourses of assessment on an online open course. Open Learning, 31(2), 108–115. https://doi-org.ezcentennial.ocls.ca/10.1080/02680513.2016.1194747

Huisman, B., Admiraal, W., Pilli, O., van de Ven, M., & Saab, N. (2018). Peer assessment in MOOCs: The relationship between peer reviewers’ ability and authors’ essay performance. British Journal of Educational Technology, 49(1), 101–110. https://doi-org.ezcentennial.ocls.ca/10.1111/bjet.12520

Meek, S. E. M., Blakemore, L., & Marks, L. (2017). Is peer review an appropriate form of assessment in a MOOC? Student participation and performance in formative peer review. Assessment & Evaluation in Higher Education, 42(6), 1000–1013. https://doi-org.ezcentennial.ocls.ca/10.1080/02602938.2016.1221052

Patru, M., & Balaji, V. (2016). Making sense of MOOCs: A guide for policy-makers in developing countries. Paris: UNESCO. Retrieved from https://unesdoc.unesco.org/ark:/48223/pf0000245122

Sandeen, C. (2013). Integrating MOOCs into traditional higher education: The emerging “MOOC 3.0” era. Change, 45(6), 34–39. https://doi-org.ezcentennial.ocls.ca/10.1080/00091383.2013.842103

Xiong, Y., & Suen, H. K. (2018). Assessment approaches in massive open online courses: Possibilities, challenges and future directions. International Review of Education, 64(2), 241–263. https://doi-org.ezcentennial.ocls.ca/10.1007/s11159-018-9710-5

 

License


On Assessment Copyright © 2021 by Students of TLHE 720 at Centennial College is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.
