Assessing Critical Reading Assessments at Huron University College
Geoff Read, Tom Peace, and Tim Compeau
As the most recent professors in Huron University College’s signature first-year course, History 1801E, “Controversies in Global History,” we have struggled for several years with an issue that appears to plague university instructors far and wide: many of our students are not doing the readings for their weekly tutorials. This poses quite a problem since the premise of the tutorials is that through discussion of the readings, students will learn how to identify and assess arguments, particularly through the critical evaluation of the historical evidence upon which they are based. Students who do not do the readings for the tutorials, therefore, not only cannot participate in, or contribute to, the discussion, but actually cannot even follow the course of the conversation. They essentially learn nothing in the process.
So what to do? We increased the participation grade to 15% of the final mark to emphasize that we valued this component of the course. This had no apparent effect. We incorporated student-led discussions, hoping that class members would feel obliged to help each other out by doing the readings, thereby enabling them to answer each other’s questions. Again, this had at most a negligible impact on students’ reading and participation. For a few years we instituted content-based quizzes at the start of each tutorial. This made some difference, but it was labour-intensive for the professors and encouraged the kind of rote learning that was at odds with our desire to have students think of History as more than just the memorization of facts.
Then in 2016-17, following the Historians Teaching History Conference at Mount Royal University, we tried a new approach, requiring the students to fill out a critical reading assessment form available below for every tutorial where a reading was discussed. This assessment would then count for half the participation grade each applicable week. We hoped to convey several messages with this mechanism.
First, we wanted our expectations to be clear – we require students to come to class prepared, having done and reflected upon the assigned reading in a rigorous way.
Second, we hoped that by encouraging students to prepare properly, we would not only ensure that a critical mass of them would do the reading, but that they would be ready to discuss it at a relatively sophisticated level.
Third, we designed the forms to reinforce our in-class teachings. The form asks students to identify the thesis, the sources on which the argument is based, the author(s)’ position in the historiography, connections to other class materials, and three strengths and three weaknesses of the argument. Further, the form requires students to explain why, or why not, they found the argument convincing.
Fourth, we hypothesized that part of the culture of not preparing properly for classes was a general sense students had of disengagement from the course. Accordingly, we hoped that the continual evaluation and feedback provided on the assessments would be one means of keeping students engaged in the class material.
A fifth benefit of the assessment forms was not part of our initial motivation but is worth mentioning: completed and graded assessments provide excellent study materials for students as they prepare for the course’s tests and final exam.
So have the critical reading assessments been effective? Have they met our four main objectives?
We should be clear that there has been no rigorous test applied to measure this. We didn’t run a study to test student-learning, for example, before and after implementing the assessments. That said, the anecdotal evidence we see in our classrooms suggests the measure has been at least a partial success. Certainly, it seems a given that these assessments help to make our expectations as clear as possible with regard to tutorial participation. If students overlook the blurb in the syllabus that outlines these, and also miss the instruction given in both the introductory lecture and introductory tutorials of the year to this effect, then surely these assessments send a message about what they need to do to prepare for class. Moreover, the results have been encouraging. More students do the reading; class discussions are more substantial; and student engagement in the class does seem better. Tests and exams, additionally, seem to confirm that the students are having greater success at mastering the material and at developing their critical thinking and reading skills.
So pleased are we with how the experiment has gone that we have begun to implement these assessments in other classes. In the summer of 2017, for example, Tim adapted the critical reading assessments for his second-year online American survey course, History 2301E at Western University. In previous years, his attempt at replicating the tutorial experience online using the forum feature of the university’s course platform proved disappointing. With summer jobs and other distractions, students routinely skipped the discussion component where they were challenged to post and answer questions about the articles much as the students do in class. The introduction of the critical assessment sheet, weighted as a separate weekly assignment (10 sheets at 2% each), was accompanied by a significant improvement in the forum discussions over the twelve weeks of the course.
The impact of this weekly drill, carried out within such a short time frame, was also evident in student essays, especially for non-history students taking the course as an elective or for an essay-course requirement. Students with little or no experience with the demands of history essays received a crash course through these sheets and seemed to gain a clearer idea of how to interrogate and write about the books and articles they encountered in their own research. As with the in-class assessments, the online version caused a significant increase in the professor’s workload, and with only a single year in place it is too soon to make any concrete claims as to their effectiveness. Nonetheless, the early evidence is promising.
However, this modest success story comes with a proviso. The positive effects of the assessments in tutorials are most obvious in the first halves of our courses, when we would estimate that between 80 and 90% of the students complete them and come to class better prepared accordingly. This is indeed a marked improvement on earlier years, and it has positive effects in all four areas outlined above. But in 1801E, a full-year course, two discouraging trends emerge in the second half.
The first is an exodus of weaker or less-engaged students from the class. One possible explanation is that the burden of doing the assessments helps put them to flight. Another, more troubling, possibility is that once students fall behind on their assessments, a feeling of hopelessness sets in: they conclude that they cannot possibly catch up in the class, and they give up.
The second negative trend is that by February and March, when essay-writing season hits, there is a dramatic drop-off in students doing the assessments, and therefore presumably the readings, as well as in the quality of class discussions. This is entirely consonant with patterns that existed before we implemented the critical reading assessments, and it suggests that the positive effects of the assessments are real but limited in both time and scope.
A third downside to the critical reading assessments is of course that like the content quizzes we experimented with previously, they create quite a bit of work for the instructors. Instead of heading back to our offices and quickly recording participation grades for the day for each member of the tutorials, for example, we must now spend roughly 3-5 minutes per assessment to go over them, ensure they are substantive, provide some constructive feedback, and record the grade.
The critical question for us as instructors, then, is whether the assessments are worth the effort and cost once we weigh the positives against the negatives and factor in the extra work they create. We are united in believing they are. As with so many assignments and pedagogical strategies, the payoffs of the critical reading assessments are admittedly greatest for those students who are fully engaged with the class. The best students, in short, remain the best students and take maximum advantage of the instruction we provide, including these assessments. But the improvements we see in class discussion, student engagement, and performance, combined with the fact that most students do the assessments most of the time, albeit with the drop-off towards the end of the year, suggest to us that this is a strategy and exercise worth continuing.
So for next year we will be keeping the assessments and also incorporating some new strategies to try to encourage student learning in the lectures. Perhaps in March 2019 we can update activehistory.ca readers on how it all turns out.
History 1801E: Critical Reading Assessment
Author ______________________________________________________________
Title________________________________________________________________
Title of Publication (eg: William and Mary Quarterly) ____________________________________________________________________
Date Published _______________________________________________________
1. What is the central argument of this reading?

2. What evidence is provided to support the argument?

3. What other historians are discussed? How are their arguments or positions on the topic different?

4. How does this reading connect with the textbook and lectures? Does it complicate or challenge the other narratives we have examined?

5. List here the article’s strengths and weaknesses:
| Strengths | Weaknesses |
|---|---|
| 1. | 1. |
| 2. | 2. |
| 3. | 3. |
6. Are you convinced by the argument? Why or why not?