Assessment has become a thorn in the side of many educators over the past three years. First, the rapid shift to remote teaching during the pandemic forced many educators to adopt assessment approaches that they may not have been comfortable with or that they recognized were not ideal for student learning. Then – just as many of us were returning to the more familiar assessment circumstances of in-person classes – OpenAI released ChatGPT. Any assessment with a non-invigilated written component, including the writing of computer code, now raises questions about whether – and to what extent – students are making use of generative AI.
At the same time, the events of the past three years have highlighted existing troubles with our assessment practices and prompted us to reflect on the purpose of assessment in teaching. The language of care in teaching that became more prevalent during the pandemic helped to reframe the conversation about academic integrity into a deeper consideration of why students cheat. One culprit is poorly designed assessments, which may:
- require students only to recall what they have already learned,
- be mismatched with what students expect to do and learn in the course,
- unfairly disadvantage some students and not others, and/or
- have unnecessarily high stakes.
Assessment (re)design thus offers educators the opportunity to have a meaningful impact on issues of academic integrity.
We recognize that you may not currently be able to significantly redesign your course assessments; doing so thoughtfully takes time and effort. Trying anything new in the classroom also carries a degree of risk, particularly for educators who are already in precarious roles, such as sessional instructors, those in contractually limited appointments, and pre-tenure faculty. Even if you do have the capacity to redesign your assessments, we suggest starting small: address the assessment that concerns you the most or will have the greatest impact, and then build on your experience.
We have divided the chapter into two parts:
- a series of shorter-term, “quick fix” strategies to help counteract or embrace easy access to generative AI, and
- a workbook to guide you through the redesign of an assessment, based on our intensive Assessment Development Workshop.
We hope there is a path through the resource for all educators, acknowledging that you will each be teaching in different contexts, be at different points of your career, and be working under different conditions. We also encourage you not to work through the resource in isolation but rather to reach out to your Faculty’s key contact at the MacPherson Institute and discuss your assessment further.
Throughout the chapter, we will foreground the following assessment design principles:
- Authentic assessments
- Assessments are authentic when they “replicate real world performances as closely as possible” (Svinicki 2004), “foster[ing] disciplinary behaviours and ways of thinking and problem solving used by professionals in the field” (via Queen’s U module); we will elaborate on what defines “authentic” assessments later in the chapter.
- Assessments reflect the goals, interests and lived experiences of the learners; learners can see themselves in the assessment and are intrinsically motivated to complete it well.
- Universal Design for Learning (UDL)
- Assessments are proactively designed with accessibility in mind, with the aim of eliminating barriers to give all students an equal opportunity to succeed.
- Constructive alignment
- Assessments are aligned with course and program learning outcomes; that is, completing the assessment should demonstrate that the learner has met the course learning outcomes and show evidence of their grasp of essential course skills and knowledge.
- Assessment for learning
- Assessments are opportunities for students to enrich and extend what they have learned by applying it in novel contexts; the assessment itself is a site of learning.