The invitation to participate in developing an OER on special topics in assessments had me reflecting upon — and taking a virtual walk back through — three of the courses I’ve taken in the Centennial College Teaching and Learning in Higher Education (TLHE) program this (pandemic) year: Open Educational Resources (OERs), Assessing and Evaluating Learning and Special Topics in Assessments.
I’m a life-long lover of learning with six years of post-secondary education under my belt, but it’s my industry experience that opened the door to teaching for me. As such, I have been teaching part-time at Centennial College in Toronto, Ontario, Canada, for the better part of five years. My post-secondary studies were not centred around education (I have a general arts degree from the University of Waterloo and a diploma in print journalism from Centennial College), so in order to learn more about how to teach from my lived experience, I turned to Centennial College’s TLHE program, signing up for my first (in-person) course in January of 2017. Because I was blending my part-time teaching (which comes with a schedule that changes each semester) with up to two other streams of work (work that often consumed all seven days of the week), I struggled to find time for in-person learning, and, as a result, I had to hit the pause button on my progress as a student in the TLHE program. It took a pandemic, and the resulting shift to working and learning from home, to open up the opportunity for me to resume my studies in higher education this year. I share this with you because I’m often amazed that, after five years of being in a classroom, and after completing four courses in the TLHE program, I’m still struggling with assessments (perhaps even more than before, and most certainly more than with any other part of the teaching and learning process). I struggle with appreciating their value as authentic metrics of the teaching and learning experience, a struggle that is only exacerbated by any evidence of cheating that occurs along the way.
Though my experience as a teacher grows with each passing semester, I find myself no more accustomed to the many and varied ways in which we uncover compromises to academic integrity; just when I think I’ve managed to get out in front of one potential avenue for a breach, I happen upon another, and I have to start the reflection-and-modification process all over again. It seems relentless, to say the least.
To my way of thinking, a large part of the challenge with assessments is rooted in managing the process of evaluating their effectiveness. When you factor in the time it takes to research potential solutions to the challenges that we see, plus more time to modify or develop new assessments and/or instructional strategies so they are aligned with the course content, and then a little more time after that to evaluate and get out in front of any potential “cracks” that open up opportunities for cheating, you’ve got a whole semester of work wrapped up in … well … evaluating evaluations! TLHE professors Paula Demacio and Mindy Lee sum it up best when they suggest, in the Assessing and Evaluating Learning course, that it can take semesters, if not years, to fully assess and appropriately modify an evaluation. Amen.
That said, we all know that assessments remain an essential element of our teaching and learning process. Graham Gibbs (2010), a noted professor and expert in teaching and learning in higher education, has said:
“Assessment makes more difference to the way that students spend their time, focus their effort, and perform, than any other aspect of the courses they study, including the teaching. If teachers want to make their course work better, then there is more leverage through changing aspects of the assessment than anywhere else.”
Somewhere in my TLHE journey, I read a blog post that talked about exam time being the “season of dead grandmothers.” At first blush, the headline had me nodding along in agreement at the implied “truth” in it: the not-so-uncommon tendency for students to use the death of a loved one to explain a missed assessment, especially at the end of a term. That was, of course, until I drilled down on the link to the original paper, “The Dead Grandmother/Exam Syndrome” (Adams, 1999), which presents research suggesting that this is no joke: that there is, in fact, an “epidemic of grandparents dying in the last two months of the semester,” and therefore a genuine spike in reported deaths in students’ families at the end of the academic year (Adams, 1999). The author, Mike Adams, goes on to suggest (admittedly anecdotally) that these deaths are connected to the worry grandparents feel over their grandkids’ success at school. It reminded me that each time a student offers such an explanation for a missed assessment, educators face yet another conundrum in the assessment process. To borrow (and modify, or remix) a line from Shakespeare’s Hamlet: “to believe or not believe, that is the question.” I took great comfort in David Gooblar’s (2016) response to this dilemma:
“Students need to be responsible for their actions, certainly, but instructors need to be responsible for creating an environment that encourages students to learn. We should strive to create courses in which students want to do the work on time — because we’ve successfully made the case that doing the work on time will benefit them. We should also look to make students trust us enough that if tragedy does strike — sometimes family members do die you know — [that] they feel comfortable coming to us and explaining why they need some extra time.”
I found Gooblar’s challenge to educators to flip the mindset on, or to apply a growth mindset (Dweck, 2007) to, the culture around assessments intriguing. So intriguing, in fact, that I haven’t stopped thinking about it since. Yes, educators can and should consider how to minimize the opportunity for cheating on assessments. And yes, educators can and should enforce the appropriate consequences for this type of behaviour. But in addition to considering the obvious, educators can and should consider what it is about our culture that leaves students feeling like they have to resort to making up excuses to get out of assessments in the first place. When one of the main goals of post-secondary education is for learners to find meaningful work in a chosen field, it seems prudent for us to consider what life lessons we teach if we create a classroom culture that doesn’t encourage and support learners to be honest about the challenges they face in their own learning experiences. As someone who has employed hundreds of people in the hospitality industry in two different countries over the course of 20 years, I can tell you that honesty, integrity and hard work rank among the most attractive qualities I seek in candidates for my teams. “As is often pointed out, few students end up with jobs where they get paid to fill out multiple-choice test bubble sheets” (Frey et al., 2012).
Despite my frustrations, when it comes to assessments, I have always had (at least) one eye on their potential to be meaningful. As a writer, I genuinely enjoy providing written feedback to my students, and I have found a way to consider each submission (whether a low-stakes engagement activity such as an independent reflection, or a higher-stakes, scaffolded group project) as a teachable moment in and of itself; I also consider this feedback a supplemental opportunity for connecting with learners (especially in this age of virtual learning). But despite my love of writing, and my efforts to do what is best for students, there is no doubt that striving to provide this kind of thoughtful, detailed written feedback (delivered on a timely basis, no less) remains a huge challenge. Even the writer in me wouldn’t be telling you the truth if I didn’t admit that having to write out 48 “stories” for one low-stakes assessment is not necessarily my idea of a good time. And so, even in the face of finding some added value in the process, the assessment conundrum, for me anyway, has persisted, despite efforts to the contrary.
But as I worked through the content in my courses on assessments this year, I found myself thinking back to Gooblar’s post time and again, and in doing so, I recognized that, although I didn’t know it at the time, Gooblar’s post, with its inherent challenge to educators to flip the mindset on grading, signalled the beginning of my “falling in love” with what I was seeing as the “problem of assessments” (Wiley, 2020). My studies of assessments during this year of the pandemic introduced me to some new and intriguing topics: disposable and renewable assessments, ungrading, and approaching the role of teacher as coach. These are concepts that I now see may be engaged to support my new (and evolving) approach to assessments in the semesters to come. In this way, and through the work I’ve done as both a teacher and a student in this year of COVID-19, I’ve been reminded that learning can be quite magical, and that, ultimately, effective metrics (assessments) are those that serve as a gateway for learners to seek and do more on a subject.
And so, as the vaccines roll out, and the talk of a return to in-person learning begins in earnest, I’m hoping to embrace this genuine (perhaps insatiable) interest I’ve acquired in improving the meaning and value of assessments, and to apply what I’ve learned to my practice as a teacher in the semesters to come.
Adams, M. (1999). The Dead Grandmother/Exam Syndrome. Annals of Improbable Research. https://www.improbable.com/airchives/paperair/volume5/v5i6/GrandmotherEffect%205-6.pdf
Dweck, C. S. (2007). Mindset: The New Psychology of Success. Ballantine Books.
Educause Review. (2015, August 17). Why Is Measuring Learning So Difficult? [Video]. YouTube. https://er.educause.edu/multimedia/2015/8/why-is-measuring-learning-so-difficult-v
Frey, B. B., Schmitt, V. L., & Allen, J. P. (2012). Defining Authentic Classroom Assessment. Practical Assessment, Research & Evaluation, 17(2). http://pareonline.net/getvn.asp?v=17&n=2
Gibbs, G. (2010). Using Assessment to Support Student Learning. Leeds Met Press. http://eprints.leedsbeckett.ac.uk/id/eprint/2835/1/100317_36641_Formative_Assessment3Blue_WEB.pdf
Gooblar, D. (2016, November 2). ’Tis the Season of Dead Grandmothers. The Chronicle of Higher Education Community. https://community.chronicle.com/news/1598-tis-the-season-of-dead-grandmothers#sthash.1lBUXdrp.dpuf
Wiley, D. (2020). Fall in Love with the Problem, Not the Solution. Improving Learning. https://opencontent.org/blog/archives/6625