Student Evaluation of Teaching – A Trojan Horse or a Powerful Material?

September 28, 2015

Torgny Roxå

So, my job is to help teachers develop better teaching skills. Today I met two academic teachers from Botswana National University. They are in Lund as part of a project concerning research on water management. But they are dedicated teachers and therefore wanted to meet with me and my colleague, Thomas Olsson.

Before anything else, they started to talk about student evaluation of teaching (SET). It turned out that at the university where they work, teachers get salary raises depending on their SET results. The SET includes questions like “Did you like the lectures?” and “Did the teacher provide you with hand-outs?” They were both concerned and frustrated by this. They said that some teachers manipulate the system just to get good scores on the SET and thereby earn more money. SET, they said, becomes a popularity contest and has very little to do with student learning.

Botswana is in the south of Africa. It has about two million inhabitants and is probably best known from the books and TV series about “The No. 1 Ladies’ Detective Agency.” But the frustration these teachers voiced is experienced around the world. I know of universities in the US where SET results are the major grounds for promotion and for salary raises.

It is idiotic. There are no other words for it. To use SET like that is irresponsible, ignorant, and unworthy of academic institutions. Yes, it is also true that evaluating teaching without asking students is just as bad. But to use SET as a measurement of quality is complete insanity.

First of all, students approach learning in different ways. Some strive for understanding (a deep approach to learning); some instrumentalize learning just to pass the exam (a surface approach). All teachers have intuitively seen these two categories; they know them. So a teacher who teaches for understanding refuses to simply hand over the best answer, sometimes makes things more complicated, sometimes offers a choice, and demands engagement from students. All of this is hard to get right, but let us suppose that this teacher has succeeded (and teachers who can are around). Now, if a majority of the students enter the course with a surface-approach intention, the teacher has to turn them around. This will cause frustration. Some students will think the teacher is withholding information, is not doing his or her job, and is unprepared or unhelpful. The teacher will score low on the SET and may end up not being promoted or offered a raise. Instead, it is the teacher who goes with the flow, presents information for the test, and offers quick answers who will have the greater chance of promotion and reward.

In reality, though, it is much more complicated. But we will never know. Take the numbers alone: for a course that scores 80% (or 4 out of 5) on the SET, we have no real information about the quality of the teaching, since we do not know whether the students filling out the form took a deep or a surface approach. The same average could come from deep learners rewarding a challenging course or from surface learners rewarding an easy one.

Furthermore, a fantastic article by Sprague and Massoni (2005, “Student Evaluations and Gendered Expectations: What We Can’t Count Can Hurt Us”) shows that students place different expectations on female teachers than on male teachers. Good male teachers are “provocative,” “inspiring,” and “ambitious goal-setters,” while good female teachers are “caring listeners” who create “comfort.” These findings reveal how gender stereotypes in society invade our classrooms and, through ignorant use of SET, influence whom we promote and reward. SET becomes a Trojan horse through which gender politics, consumer mentality, and economic discourses undermine whatever academic values we have. It is sad that this happens.

The literature is filled with these kinds of outcomes. Over and over again, SET produces interesting results, which eagerly call out for interpretation but can never be taken at face value. In my institution we collect SET four times per academic year: courses end either in October, around Christmas, in March, or at the end of May. We have collected the same type of forms since 2003 and have 200,000 completed questionnaires in our database. Consistently, the system reports that courses ending in October are of a certain quality and that courses ending around Christmas are better. In March, courses are back at the same level as in October, but in May they are worse. These are significant results, and we have no idea what is behind the pattern. Perhaps students and teachers are ambitious when they start the academic year, pick up pace and peak around Christmas, but are tired and want to leave by May. This is pure and imprecise speculation. We do not know. Should we then, based on these impressions, promote teachers who end their courses around Christmas while laying off those who teach in May?

Despite this, there is research showing that managers and administrators trust SET more than teachers do. This is understandable, since managers and administrators have no personal experience of the courses in question. They are also under pressure to adapt to values in society, and this pressure threatens to override their academic judgement. It is understandable, but it is not good.

So, what can we say? 1) Results from SET must be interpreted. They do mean something, but like research results, they need to be explored and made sense of. 2) To do this, SET forms and questionnaires must be linked to some kind of research paradigm in education, psychology, or a similar field; otherwise the results are extremely hard to interpret. We use Paul Ramsden’s Course Experience Questionnaire, since it is linked to deep and surface approaches to learning: a high score on a course evaluation then means that the course had more students taking a deep approach. And 3) train teachers in the chosen paradigm so that they have a chance to make sense of the results.

Botswana may seem far away, but the teachers I met with today share our concern about how SETs are being implemented and used. Most of us realise that we need to involve student voices when assessing the quality of teaching. But, just as they do, we should all be aware of the pitfalls and the misuse. As I said, using SET scores straight off as a measurement of quality is nothing other than stupid. They must be constructed and interpreted wisely. If they are, they can indeed help us become better at supporting positive student engagement.

 

License


DisruptED: Education Interrupted Copyright © by Paul R. MacPherson Institute for Leadership, Innovation and Excellence, McMaster University is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
