Most of us remember a school project that was an epic fail. One that snuck up on us, we didn’t understand, or simply didn’t have the interest to complete. As a humble example, in grade 6, I was supposed to make a 3D map of South America. I completely forgot about it until the night before it was due and thus brought a very wet model made from homemade play-dough on a plank of plywood to school on the fateful due date. It looked terrible, got squished, the wet paint bled everywhere, and my grade was most undesirable. While I learned not to leave assignments to the last minute, I could have learned a whole lot more about geography.
Mercifully, the online environment does not accept submissions still wet with paint. However, as more courses, programs, or lessons are delivered online, educators and students have had to rethink how online, synchronous, and asynchronous learning occurs and how it can be demonstrated. The most effective online and blended teaching and learning environments employ meaningful technology tools that help students understand what they know so far and what they still need to understand or practice (Gikandi et al., 2011). Educators can also use web-based assessment tools to gauge and maximize their teaching efficacy through unique affordances from different platforms.
Traditional assessment ideas include bleary-eyed students woefully answering exam questions after staying up all night cramming for a test. But when assessments are most meaningful, they are not a series of hoops through which students need to jump. While diverse in delivery, type, and implementation, assessments generally refer to a process of fostering and monitoring topical awareness over a defined time (Andrade & Brookhart, 2020; Walvoord, 2010). When we assess student learning using technology, we gather information on what students know or understand based on their online learning experiences. The results can indicate where students need additional direction and whether the content delivered by the educator meets learning objectives (Gikandi et al., 2011). Thus, the goals of assessment are to both evaluate and improve student learning (Dixson & Worrell, 2016).
When constructing online assessments, the best place to begin is not with how do I assess students using online tools? or how do I make my online assessments equitable and accessible? Instead, Morris (2021) suggests that we ask ourselves, what am I teaching? Only then can we consider the people in our classes, how they learn, and how they can best express what they have learned. In most cases, educators outline learning objectives and outcomes at the start of any course or semester, and this is a sound place to begin when constructing online assessments.
Learning objectives and outcomes are otherwise known as learning goals. Research on student assessment suggests that articulating clear and explicit goals for student learning in any assignment, unit, topic, or course is essential before constructing any assessment or giving feedback to students (Ambrose et al., 2010). A stated goal may relate to a specific task or performance, be relative to a standard expected of students, or be comparable to prior performance (Gikandi et al., 2011). For example, a goal may be to understand and solve specific mathematics problems, it could be to build a free-standing structure, or it might be to write a persuasive essay. The online or blended environment, like all learning environments, demands clarity so that students understand what they will learn and will be able to know when they have met learning objectives (Hattie & Timperley, 2007). In other words, at the outset of each new course, module or activity, students must know where they are going (Hattie & Timperley, 2007).
Many educators suggest constructing and using rubrics to articulate and measure progress toward learning goals. Ambrose et al. (2010) recommend using a rubric to specify and communicate students’ performance criteria into unique components. The rubric can describe precise characteristics of high, medium, and low quality work in each component, guiding a student’s understanding of the expectations and relative outcomes (Andrade, 2000). To that end, educators must carefully consider what they want students to know and how students can express this knowledge. In online environments, educators must also design an assessment that can be constructed and submitted digitally.
One potential caveat regarding the use of rubrics in any learning environment is the potential risk of assessing only one-dimensional tasks such as spelling or organization rather than subjective processes such as critical thinking (Kohn, 2006). Rubrics should guide the student, articulate goals, facilitate communication between and amongst faculty and students, and not be a defensible position that reflects only a grade or standard (Andrade, 2005). Many free online rubric makers (see resources below) can help create and communicate learning goals and criteria to students.
However, sound online assessments do not begin and end with a rubric or checklist — rubrics are simply tools to guide the assessment process (Andrade, 2005). Further, not all activities require a rubric. Participation in online games, puzzles, or collaborative activities can be straightforward enough that simply completing or participating in them demonstrates success or indicates where a student may have gaps in learning. Further, some online tools have built-in algorithms that provide students and educators with results, helping to remove subjectivity or bias from an assessment based solely on our input.
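The rubric structure described above — distinct components, each with descriptors of high-, medium-, and low-quality work — can be sketched as a simple data structure. The sketch below is purely illustrative: the component names, descriptors, and point values are invented examples and do not reflect any particular rubric tool's format.

```python
# Illustrative sketch of a rubric: components, each with descriptors
# of high/medium/low quality work. Component names, descriptors, and
# point values are invented examples, not any specific tool's format.

RUBRIC = {
    "thesis": {
        "high": "Clear, arguable thesis that frames the whole essay.",
        "medium": "Thesis present but broad or only partially argued.",
        "low": "No identifiable thesis.",
    },
    "evidence": {
        "high": "Claims consistently supported with cited sources.",
        "medium": "Some claims supported; citations uneven.",
        "low": "Claims largely unsupported.",
    },
}

LEVEL_POINTS = {"high": 3, "medium": 2, "low": 1}

def score(ratings: dict) -> float:
    """Average the per-component level ratings into a 0-3 score."""
    points = [LEVEL_POINTS[ratings[component]] for component in RUBRIC]
    return sum(points) / len(points)

print(score({"thesis": "high", "evidence": "medium"}))  # 2.5
```

Keeping the descriptors alongside the scores reflects Andrade's (2000) point that a rubric should communicate expectations, not merely produce a number.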
Purpose of Assessments
A benefit of assessment in online environments is that technology can often provide instant results to both students and educators. Online quizzes, tests, or even synchronous peer review can be quick and, in many cases, immediate. Communicating real-time results to individual students helps guide their relative understanding of learning goals in a timely manner (Gikandi et al., 2011). At the group level, immediate tech-enabled assessment results offer educators the opportunity to observe general trends in the course and where they may need to provide additional guidance or support. Instant tech-enabled results give an advantage over traditional teaching venues in which students complete a written test or quiz and then wait days (or even weeks) for the results. This immediate indication of students' progress toward goals or learning objectives is a significant benefit of online, tech-hosted assessment (Gikandi et al., 2011).
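The instant-feedback loop described above can be sketched in a few lines: grade each response as it is compared to the key and return per-question feedback immediately rather than after a marking delay. This is a minimal illustration only; the questions, options, and answers are invented examples, not content from any quiz platform.

```python
# Minimal sketch of an auto-graded quiz with immediate per-question
# feedback. Questions and answers are invented examples.

QUIZ = [
    {"q": "Which assessment type is typically low-stakes?",
     "options": ["formative", "summative"], "answer": "formative"},
    {"q": "Which type contributes most to a final grade?",
     "options": ["formative", "summative"], "answer": "summative"},
]

def grade(responses):
    """Return immediate per-question feedback plus a running total."""
    feedback = []
    correct = 0
    for item, given in zip(QUIZ, responses):
        ok = (given == item["answer"])
        correct += ok
        feedback.append((item["q"], "correct" if ok
                         else f"review this: the answer is {item['answer']}"))
    return feedback, correct

fb, total = grade(["formative", "formative"])
print(total)  # 1
```

At the group level, the same per-question results can be aggregated across students to surface the trends the paragraph above describes.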
The objective of an assessment will differ based upon its purpose. There are two general types of assessment:
- Formative assessments are delivered in any online course, unit, or lesson when the objective is to determine how well a student is learning smaller or incremental sections of course material. For example, a formative assessment may indicate how thoroughly a student understands content from an online video they watch outside of class time, information covered in synchronous online classes, or readings covered in online sources that they consult asynchronously. Formative assessments are often incremental and ongoing, providing feedback to students to support tangible improvement (Bennett, 2011). These developmental assessments can give us a sense of where students are in their learning and where we need to focus on subsequent or additional learning. Formative assessments are often low-stakes or no-stakes in the context of grades, which means they contribute minimally to a student’s final mark in a course or program.
- Summative assessments measure cumulative student learning; typical forms include a final exam, unit test, or cumulative project. Often, these assessments quantitatively measure learning over a full term, entire course, or more extended program (Gikandi et al., 2011). Summative assessments are usually high-stakes and contribute significantly to a student’s final mark in a course or program. From an educator’s perspective, this kind of assessment can provide insights for developing subsequent iterations of a course, unit, program, or significant assignment.
Rubrics and coordinating assessments are important for articulating performance criteria. However, when considering and constructing both rubrics and a variety of assessments, online educators should be mindful of potential variables that impact equitable inclusion practices. Consider the clarity of expectations, access to technology, communal support, and each student’s individual growth and development (Ambrose et al., 2010; Bond, 2020; Fornauf & Erickson, 2020).
- Avoid academic or educational terms that do not relate to or enhance understanding of the assessment focus (Gikandi et al., 2011).
- Be consistent with expectations and your feedback. Diverging from the norm without cause or suitable warning can distract from the learning at hand and clutter student understanding (Tierney & Simon, 2004).
- While we may believe that we have clearly outlined expectations and learning outcomes, mistakes can occur. If we thought that we described expectations clearly, but multiple students expressed confusion or interpreted our directions differently, we need to think further about the clarity of our guidelines (Andrade, 2005).
- Although students might have access to video streaming services, complications may still arise in how they can use them; the capabilities of different technologies are not the same for all students (Wattal et al., 2011).
- Instructors should be mindful that cultural and individual differences regarding the role of family and community can impact students’ learning communities and the role of formal education in life (Eliason & Turalba, 2019). Notably, family and community responsibilities may impact how and when students engage online.
- Not all students begin and end their learning in the same intellectual space; some students start with a more advanced understanding of course concepts, while others require support to move beyond their previous knowledge.
- It can be challenging for instructors to ensure that all students are challenged and encouraged, given the varied preparation students bring to the course.
- When considering the affordances of cohorts and individual students, instructors do not need to limit student performance and progress indicators to one specific type of assessment. We can combine results from both formative and summative assessments to gauge how well students are learning, where gaps in learning exist, and to calculate students’ grades (Dixson & Worrell, 2016).
- Contemporary research on grading suggests that formative and summative assessment should not be static and that arriving at a final grade requires coherent and equitable grading practices, including a teacher’s professional judgment (Feldman, 2019).
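Combining formative and summative results into a final grade, as described above, amounts to a weighted average under a teacher-chosen weighting. The sketch below illustrates the arithmetic only; the 20/80 split is an invented example, not a recommendation from the chapter, and in practice the weighting reflects the teacher's professional judgment (Feldman, 2019).

```python
# Sketch: combining mean formative and mean summative scores into a
# final grade. The 20/80 weighting is an invented example.

def final_grade(formative_scores, summative_scores,
                formative_weight=0.2, summative_weight=0.8):
    """Weighted average of the mean formative and mean summative scores."""
    f = sum(formative_scores) / len(formative_scores)
    s = sum(summative_scores) / len(summative_scores)
    return formative_weight * f + summative_weight * s

# Three formative checks (mean 90) and two summative pieces (mean 80):
print(final_grade([80, 90, 100], [85, 75]))  # 82.0
```

Keeping the weights explicit makes the grading scheme easy to communicate to students and easy to revise between iterations of a course.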
Example 1: The Summative UnEssay
An UnEssay invites students to submit a summative assignment in a format of their choosing. This practice is inspired by the literature on authentic assessment (Svinicki, 2004), which emphasizes creating assessments relevant to students’ goals, intended professional environments, and unique strengths and talents. The UnEssay is particularly welcome in digital environments because it mirrors broader revisions in pedagogy to suit those environments.
Rather than completing a traditional essay or lab report, the UnEssay invites students to plan and create a summative project that addresses course concepts in their own interpretive way. This creative opportunity requires a clear explanation to students and a sincere invitation to challenge themselves, play to their strengths, or try something new while embracing the freedom to do so. Students can take on their learning objectives, express them in any way they like, and present their work digitally.
The UnEssay significantly departs from the traditional end-of-term research paper or final exam. It requires concise communication to support students’ understanding of the parameters and possibilities of this creative assignment.
- I introduce the UnEssay in the first week of class so that students can begin to generate ideas and, quite frankly, get accustomed to the idea that they have freedom of expression in their final assessment.
- Provide examples of previous projects that students have completed. If this is the first time you’ve incorporated an UnEssay into the course, it may be helpful to see examples from students in courses taught by Cate Denial and Christopher Jones [Twitter post]. Students have submitted paintings and drawings, documentary-style videos, websites, blogs, podcasts, and social media accounts.
- Scaffold the assignment.
- Within the first few weeks of the course, my students discuss potential ideas in small online breakout groups. I have found that discussions amongst peers help refine ideas, provide direction, and help students make their project plan.
- Within the first month of the course, ask students to submit a proposal for their UnEssay project. Provide feedback on potential directions, revisions, challenges, or guidance.
- At mid-semester, students complete a check-in (either personally or via a Google Form). Formative discussions help students stay on track and be aware of problems that could negatively impact completion.
- Discuss with students how their projects will be assessed/graded. Unconventional assignments often demand unconventional assessment methods. What counts in this assignment? What if a student attempts a project that involves a steep learning curve in using technology, for example? How will you and the student account for failures or experiments? Create a rubric or guideline with students that clearly outlines the project’s requirements but still permits freedom of expression and calculated risk-taking. These clear expectations are essential in recognizing Feldman’s (2019) requirement that fair assessments be structured accurately.
- If students prefer to write a traditional essay, this is entirely acceptable! Some students find the security in this helpful, or they genuinely enjoy academic writing.
- Ask students to submit an artist statement alongside their creative work. I ask them to indicate the project’s connection to course content and how it meets the assignment’s requirements. Requirements for the artist statement can be outlined in the rubric as collectively established above.
- Build in time at the end of the term for students to share their projects, a process that I find invites increased connection and rapport in class (which is especially welcome in online environments). Potential presentation opportunities include small group discussions, an online recording of digital projects, or large group presentations, depending upon the size of the course.
- Be prepared for discussion, as students may lack experience with such a creative assignment. I find that asking them about their learning goals, professional goals, or personal interests helps shape a meaningful project.
- In a semester-long project, it is crucial to ensure incremental progress. Scaffold the assignment such that students brainstorm ideas independently and with one another, submit a proposal to the instructor, check on progress at mid-semester, and make a final submission.
- Creating a grading scheme for a variety of creative projects is sometimes challenging. Each project will be different, so creating a rubric that is both fair and flexible can take some time and some iterations. Be prepared to revise or reconstruct the rubric as the variety of UnEssay projects becomes more apparent.
- Cara Ockobock’s Fundamentals of Biological Anthropology UnEssay Instructions [PDF]
- Mark Kissel’s UnEssay Guidelines
- Examples of UnEssay projects in Christopher Jones’s US History course [Twitter post]
- Cate Denial’s guidelines for grading UnEssays
Example 2: Assessment with Blooket, a Not-So-Serious Game
Blooket is a digital quiz game platform that individual students can use on their own time or that groups of students can play together simultaneously. Games are helpful because they can raise motivation during learning (Iten & Petko, 2016) and increase learning, since games generally require fewer cognitive resources than other, static learning contexts (Robson et al., 2015).
Online games are suitable for reviewing academic concepts before a high-stakes assessment, determining student knowledge before a lesson, or measuring student recall at the end of a class period. Blooket is helpful because, unlike other tools, it does not require that the game award points for how quickly students respond to questions. The lack of speed-based scoring can reduce unhelpful pressure for students who need additional time. The platform offers a variety of visual and play themes, including car races, a cafe, gold mining, and even holiday themes (such as Candy Quest at Halloween). The thematic presentation helps transform quizzes into competitive games similar to those students might play on their phones.
- Instructors can sign up for a free account on Blooket.com
- The game can be played using pre-established sets of questions, or instructors can create sets relevant to their specific needs or topics.
- To assess student knowledge before a lesson or unit, an instructor can create a game to see how many questions a student can already answer. This information can be helpful for the instructor in creating subsequent lessons and can outline to students what they may or may not already know. Students can repeat the game at the end of the class to guide personal insights into what they learned through the session.
- You can use Blooket as a review tool: course terms, definitions, and concepts can be expressed in multiple-choice questions so that students can practice in advance of a test or other high-stakes assessment.
- Many of the game themes on Blooket offer the opportunity for students to steal points or assets from other students, which can help limit how many times a single student wins.
- Most Blooket games can end after all students answer all questions or expire after a set time. The timed mode allows questions to repeat, so students can revisit questions that they did not answer correctly the first time.
- The instructor can see how many questions each student answered correctly, providing insights into which students may be struggling with course concepts or retention.
- Online game platforms require that students access the requisite technology to participate. Not all students will have a mobile phone, laptop, or tablet, but teams can use Blooket if one student has technology that the group can use.
- Games can help students identify which questions or concepts they are less familiar with, but there is no way to record answers they may wish to revisit. Instructors should remind students to take note of words, concepts, or information that they need to review.
- Rubric-Maker.com is a rubric authoring tool.
- Quick Rubric is an easy-to-use tool for creating rubrics.
- 14 Ways to Turn Your Classroom into a Game Show includes good ideas for creating interaction in your online class.
Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M., & Norman, M. K. (2010). How learning works: Seven research-based principles for smart teaching. Jossey-Bass.
Andrade, H. G. (2000). Using rubrics to promote thinking and learning. Educational Leadership, 57(5), 13-19. https://www.ascd.org/el/articles/using-rubrics-to-promote-thinking-and-learning
Andrade, H. G. (2005). Teaching with rubrics: The good, the bad, and the ugly. College Teaching, 53(1), 27–31. https://doi.org/10.3200/CTCH.53.1.27-31
Andrade, H. L., & Brookhart, S. M. (2020). Classroom assessment as the co-regulation of learning. Assessment in Education: Principles, Policy & Practice, 27(4), 350-372. https://doi.org/10.1080/0969594X.2019.1571992
Bennett, R. E. (2011). Formative assessment: A critical review. Assessment in Education: Principles, Policy & Practice, 18(1), 5–25. https://doi.org/10.1080/0969594X.2010.513678
Bond, M. (2020). Schools and emergency remote education during the COVID-19 pandemic: A living rapid systematic review. Asian Journal of Distance Education, 15(2), 191-247. https://doi.org/10.5281/zenodo.4425683
Dixson, D. D., & Worrell, F. C. (2016). Formative and summative assessment in the classroom. Theory into Practice, 55(2), 153-159. https://doi.org/10.1080/00405841.2016.1148989
Eliason, M. J., & Turalba, R. (2019). Recognizing oppression: College students’ perceptions of identity and its impact on class participation. The Review of Higher Education, 42(3), 1257-1281. https://doi.org/10.1353/rhe.2019.0036
Fornauf, B. S., & Erickson, J. D. (2020). Toward an inclusive pedagogy through universal design for learning in higher education: A review of the literature. Journal of Postsecondary Education and Disability, 33(2), 183-199. https://files.eric.ed.gov/fulltext/EJ1273677.pdf
Gikandi, J. W., Morrow, D., & Davis, N. E. (2011). Online formative assessment in higher education: A review of the literature. Computers & Education, 57(4), 2333-2351. https://doi.org/10.1016/j.compedu.2011.06.004
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487
Iten, N., & Petko, D. (2016). Learning with serious games: Is fun playing the game a predictor of learning success? British Journal of Educational Technology, 47(1), 151–163. https://doi.org/10.1111/bjet.12226
Kohn, A. (2006). Speaking my mind: The trouble with rubrics. English Journal, 95(4), 12–15. https://doi.org/10.2307/30047080
Morris, S. M. (2021, June 09). When we talk about grades, we are talking about people. SeanMichaelMorris. https://www.seanmichaelmorris.com/when-we-talk-about-grading-we-are-talking-about-people/
Robson, K., Plangger, K., Kietzmann, J.H., McCarthy, I., & Pitt, L. (2015). Is it all a game? Understanding the principles of gamification. Business Horizons, 58(4), 411-420. https://doi.org/10.1016/j.bushor.2015.03.006
Tierney, R., & Simon, M. (2004). What’s still wrong with rubrics: Focusing on the consistency of performance criteria across scale levels. Practical Assessment, Research, and Evaluation, 9(1), 2. https://doi.org/10.7275/jtvt-wg68
Walvoord, B. E. (2010). Assessment clear and simple: A practical guide for institutions, departments, and general education (2nd ed.). John Wiley & Sons.
Wattal, S., Hong, Y., Mandviwalla, M., & Jain, A. (2011, January). Technology diffusion in the society: Analyzing digital divide in the context of social class. 2011 44th Hawaii International Conference on System Sciences (pp. 1-10). IEEE. https://doi.ieeecomputersociety.org/10.1109/HICSS.2011.398