Context and Critique
The following section provides a kind of background to our approach to assessment as learning specialists. Some of this is speculative and certainly debatable, but if we are to meaningfully consider what assessment means in our work, then we should include an articulation of the foundations and context upon which our approach rests. A bit of explanation about what we mean by the term assessment is a good place to start.
I pick up on some of the peculiarities of language in all our talk about assessment in the next section, but, given that it’s such a roomy and often muddy term, a basic definition for our purposes will be helpful. In their 1996 text Assessment in Student Affairs: A Guide for Practitioners, John Schuh and Lee Upcraft define it this way:
Assessment is any effort to gather, analyze, and interpret evidence which describes institutional, divisional, or agency effectiveness.
There are things to quibble with there, but it’s a workable start. The emphasis on “effectiveness” does highlight an important distinction – assessment for accountability versus assessment for improvement – and raises the important question: effectiveness for whom? There are two stakeholder groups in the logic of assessment in our work, and they can (though not inevitably) introduce conflicting incentives into the mix. If we are conducting assessment to be accountable to the institutions that sponsor our work, effectiveness might mean being more cost-effective, serving greater numbers of students, creating a positive “user experience”, operating efficiently, meeting predefined “performance indicators”, and so on. And if we are focused on assessment as a way to improve our work, effectiveness may mean generating better indicators for all of the above: “Intervention X is better than intervention Y because it had higher participation rates, or showed greater participant satisfaction”. Determining when our work is “effective” can be an endless game of question-begging: Effective in what way? Effective for whom? What does effective mean? Who decides?
Again, I dive a bit deeper into these conundrums in subsequent sections, but I will add one more variable – in my opinion the most relevant and unifying. Assessment of learning. Again, this is complex and begs further questions (learning what?), but a comprehensive approach to assessment in our work must begin with this. Are students learning something valuable from us, something that might be otherwise more difficult to learn without us? And then we can imagine an approach to assessment that accommodates our commitment to accountability to both our sponsors and our students, and to conducting assessment so that we may improve in our efforts.
Ok – even that tiniest of introductions to “assessment” has exposed how complicated a thing it can be. There is a kind of uncontested but often superficial fervour for assessment in a “Student Affairs” context, which is often where we find ourselves situated. Make no mistake, I think assessment is good. Assessment is essential for us to do our work well, to serve students better, to help them learn things that will serve them well in their lives, and to improve in those efforts. But this does not mean that we simply accept, uncritically, the orthodoxy of assessment practices as it gets codified in our profession. Indeed, I believe strongly that we should resist codification in this way, and demand a kind of critical approach of ourselves so that we can embrace what we deem to be good in the practices of our work but look closely at what lies behind and underneath those practices – examine where they come from, what they imply, what values they signify. Assessment, like many other features of higher education in general and Student Affairs specifically, is not just a simple, neutral, unblemished code of practice handed down from on high. The righteousness of assessment as codified in increasingly prevalent competency-driven manifestos and how-to guides should not simply be taken for granted as an unassailable truth. There is context here, a history behind the current doctrine of assessment in Student Affairs that should be considered so we can be vigilant and thoughtful and avoid the indulgent self-importance that can come from such doctrines.
In short, we should assess our approach to assessment, and be rooted in context, history, and critique, lest our thinking get overwhelmed by fads and groupthink.
The “Culture of Assessment”
It’s a bit hard to imagine a time when “assessment” was not an integral part of educational institutions. Naturally, the assessment of individual students and their mastery of subject matter has been around forever in some form. But assessment means more now – not just individual student learning, but institutional effectiveness at providing effective public education – whatever that may mean. By the 1980s, a growing scientific, data-based culture, an ethos of managerialism and efficiency, growing costs of higher education, and greater cries for accountability from the public paying for that education all contributed to the so-called “assessment movement” taking hold, at least in the United States, with Canada, in many ways, following suit.
Mostly, this came from a legitimate concern – high costs and low trust in education. Prospective students really only had an institution’s reputation as a guide to choice – a situation that was becoming increasingly intolerable (and one that largely persists). So, the demand for accountability, evidence of value, and increased transparency grew to a fever pitch, and a frenzy of so-called “learning outcomes assessment” took hold – assessments of individual students, of classes, of curricula, of departments, of divisions, of retention rates, of acquired thinking skills, etc. Institutions have been collecting this kind of data for decades, most prominently through large-scale instruments like the National Survey of Student Engagement (NSSE) and others. But still, there has been a disconnect between that collected data and a more informed public, often referred to as the “black box of education”. Data is collected; it’s just not very well shared, and it does not often find its way meaningfully into the glossy marketing material so prevalent in the higher-education marketplace.
And this has only raised the temperature on assessment. It would now be considered almost ludicrous to suggest that we do less assessment in education – who could possibly object to more “evidence-based decision making”? So now we have a deep and growing culture of assessment, accountability, evidence, data, transparency, quality control permeating all of higher education. And, to varying degrees across the country, the funding models of higher education are tied to outcomes that get measured through assessment.
Historically, when higher education was the traditional route for a privileged few, the value of that education was taken for granted. There were no cries for accountability. Privileged folks came in, privileged folks came out. In the still ongoing transition to greater access, the question is asked – what particular value is added by a post-secondary experience? It’s an expensive thing, and the benefits are not always obvious. So, it’s a reasonable question to ask, and one way to answer is to collect evidence about the various elements of the post-secondary experience and make a rational judgement about its worth. In other words, to assess the situation.
So how does our work, specifically in the context of Student Support/Affairs, fit into this picture? Actually, Student Affairs is a field that has played a prominent role in the evolution of this culture of assessment, mostly with the publication of a few key volumes and a history of having to demonstrate its value. A truncated bullet list of some of those developments follows.
Assessment in the Student Affairs Context: a brief bulleted synopsis
- Early – in loco parentis, not much formal assessment to speak of.
- Wartime – The Student Personnel Point of View (ACE, 1937). A commitment to educating the “whole student”, wherein the assessment of student satisfaction becomes important.
- 1970s – Student Development in Tomorrow’s Higher Education: A Return to the Academy (Brown, 1972); The Future of Student Affairs (Miller and Prince, 1976). The focus shifts to the idea of “student development” integrated into programming, so assessment begins to focus on competencies and professional standards.
- 1980s/90s – A Perspective on Student Affairs (NASPA, 1987). Co-curricular support becomes focused on supporting the academic mission, so assessment shifts its focus to that dimension – retention, first-year experience, etc.
- 1990s/2000s – The Student Learning Imperative (ACPA, 1996); Principles of Good Practice for Student Affairs (ACPA & NASPA, 1997); Learning Reconsidered (ACPA & NASPA, 2004) and Learning Reconsidered 2 (ACPA & NASPA, 2006). The focus becomes improving student and institutional outcomes, and assessment becomes more systematic and committed to creating “assessment cultures” in Student Affairs. Out of this era we get stuck with these good, but sometimes rigid and restrictive, models of assessment (the orthodoxy of the “Assessment Cycle”), used mainly for accountability rather than inquiry or program evaluation and improvement.
- 2010s and onward – In more recent years, we have moved towards the language of “best practices”, high-impact practices, and data analytics. From Patrick Love and Sandra Estanek: If practitioners are to call a specific program a “best practice” (i.e., of high quality and worthy of the adjective “best”), then they must be able to point to some evidence that demonstrates a significant level of effectiveness based on clearly stated outcomes. Ultimately, assessment provides the common language and standard by which practitioners can compare programs and practices and identify what is truly a best practice in higher education. Assessment is the tool to help professionals consciously gather, analyze, and interpret the necessary evidence first to ascertain effectiveness and then to improve a program’s effectiveness (2004, p. 20).
So – lots of assessment going on, which on its face is a good thing. But like most seemingly common-sensical ideas, there can lurk a hidden agenda, a body of critique that should not be ignored in the conversation. Assessment is not a neutral idea. The fetishizing of learning outcomes is not neutral. All of it needs to be considered historically, in context. It needs to be examined for how it can be costly or ineffective or, worse, harmful.
So, before we breathlessly accept assessment and learning outcomes measurement as beyond reproach, let’s consider a few critiques. Note that this is very much scratching the surface and does not convey the depth of debate that persists on this topic – so know that you can dig very deep into this rabbit hole if you are so inclined.
Early and Ongoing Critiques
First: The basic tension between assessment for improvement and assessment for accountability alluded to earlier.
From Peter Ewell: Accountability requires the entity held accountable to demonstrate, with evidence, conformity with an established standard of process or outcome. The associated incentive for that entity is to look as good as possible, regardless of the underlying performance. Improvement, in turn, entails an opposite set of incentives. Deficiencies in performance must be faithfully detected and reported so they can be acted upon. Indeed, discovering deficiencies is one of the major objectives of assessment for improvement (2009, p. 7).
A seemingly contradictory set of incentives. We feel this intuitively when we consider the idea of assessment. Why do we do it? And for whose benefit?
Second: the fear that assessment measures set up a competitive, consumerist context of higher education in which institutions are pitted against one another while students engage in comparison shopping.
Third: the fear that an emphasis on assessment and outcomes opens the door to too much external control or influence over what we do, which is especially problematic if we reduce our assessment measures solely to crude and blunt instruments like standardized tests, superficial counting and averaging, or mindless submission to other agendas.
Fourth: a lack of clarity on what is being assessed, what meaningfully constitutes student learning and success, and how best to measure those things across a multitude of diverse institution types and student populations.
These are not trivial concerns, and any comprehensive approach to assessment in Student Affairs should understand and respond effectively to them. Assessment may simply be an administrative reality for people in this profession, but that does not mean it is always to be regarded with suspicion. Yes, it may be externally mandated, but it can also come from a genuine desire to know whether our work has value for students. So, we should do it, but do it well.
Deeper Critiques
Beyond, but related to, the ongoing battlefronts of competing incentives and external control lie the deeper critiques of assessment culture in education – those having to do with structures of power, and the imbalances and hegemonies sometimes inherent in traditional assessment practices.
Assessment can be presented to us as a kind of wolf in sheep’s clothing – presented ahistorically. This can be an effective weapon of ideology: present a thing as so commonsensical that any critique of it can be dismissed as quaint.
But the assessment movement can be viewed as part of a broader neoliberal narrative – an heir to Taylorism and the cult of efficiency, a tool of social regulation and indoctrination, concerned only with the creation of human capital for the labour market. And it is rooted in a very fashionable but deeply problematic technology we call Learning Outcomes.
From Michael Bennett & Jacqueline Brady (2014, p. 146): Because engaged learning is so complex, the level at which the Learning Outcomes Assessment movement most often is focused renders meaningful assessment impossible… however “democratic” recent assessment practices may claim to be, the [Learning Outcomes Movement] has been and will be used for profoundly undemocratic ends—as a disciplinary mechanism for college administrations, government entities, and accrediting agencies that seek to “objectively” measure the practices of institutions with vastly different resources serving dramatically stratified student bodies.
Or, from Trevor Hussey and Patrick Smith (2008, p. 113): It would be a very poor teacher who went into a classroom or seminar determined to produce, in his or her students, a certain number of specific learning outcomes, and who stuck rigidly to a programme whatever happened in the session itself.
Inherent in assessment culture and learning outcomes, indeed in many structures of higher education now in vogue, like curriculum models, is a basic need to bring order to chaos, and a privileging of certain ways of knowing rooted in Euro-centric, positivist, scientific models. It’s a phenomenon that some argue is a remnant of colonial history that sought to bring order and “civilization” to what was deemed “barbarous”. Education is deeply rooted in a fear of messiness and embraces the tools of tidiness to combat this. In their influential 2008 article about assessment practice in Student Affairs, Green, Jones, and Aloi (p. 136) list the steps of what they see as effective assessment practice. Although they are quick to add that the process is “…not linear nor prescriptive…”, it is hard to see it any other way, especially when they go on to say “…for assessment to result in improvements to student learning, these key elements must be in place” (emphasis mine) – seems pretty prescriptive to me. This rootedness in western rationalism, while undoubtedly useful and productive, should, at the very least, be acknowledged not as the required way of thinking, but as one way of thinking among alternatives.
From Riyad Ahmed Shahjahan (2011, p. 184): By naming and representing education as a field in chaos, evidence-based education proponents, with good intentions, are justifying actions and measures to make education systems more evidence-based and in turn standardize and rationalize complex educational processes.
The assessment culture, and all that the evidence-based ideology entails as described so well by Shahjahan, can be a kind of erasure, an objectification of experience, a form of oppression, and a flattening of individual experience, curiosity, and wonder.
So, the idea of assessment, especially that version of assessment rooted in learning outcomes and curriculum, can be a real affront to some people’s idea of what education is all about, what it’s for, what its deeper purpose is. Good educators know that not all learning outcomes are intended, and that outcomes, considered in individual, discrete moments of teaching, should not be treated as performance indicators. Good educators know that too much managerialism causes a legitimate fear of performativity. Good educators know that knowledge is not a commodity. Good educators know that chaos and disorder are not a bug in the system of education; they are a feature. Good educators know that “systems” of management in education can be deeply offensive and inhospitable to other approaches to teaching and learning. Good educators know that we should never simply embrace fashionable ideas without considering their history, their context, and their potential for perpetuating systems of harm and singular forms of thinking.
I prefer to dwell in middle-ground zones and be guided by common sense. To me, the common sense of it all is to make room for these alternative conceptions of education generally, and assessment specifically. A “best practice” or “evidence-based” approach to understanding our work is not repugnant to me. I do not subscribe to a deconstructionist, burn-it-all-down approach here, but rather an expansive one. I see no need for hostility and believe that the method should simply be chosen according to its fitness for purpose, and that the folks doing the assessment or research projects should offer some open acknowledgement of their inclinations and biases. This can go a long way towards loosening some of the oppressiveness that comes from a dogmatic adherence to a certain methodological orientation. I believe, not in the righteousness of only one true method, but in a continuum, and that the contours of human experience are most usefully illuminated by seeing them from a variety of perspectives.
Most research/assessment in Student Support/Affairs focuses on either a simple quantitative analysis of some dubiously claimed causal relationship between a program and an outcome, or a traditional qualitative account of student feedback about some aspect of their experience with a program. (Naturally, there is some range here.) This is, of course, understandable and good, but it also shows a hesitancy to explore other forms of inquiry, whether that be large-scale statistical analyses investigating real causal connections at one end, or arts-based narrative inquiry at the other. I believe we should be hospitable to all forms of inquiry as long as they serve us well and ethically.
So here we are. So much to grapple with. History, context, critique, accountability. Where do we go? We still need to keep going, despite all the legitimate fear and criticism. Why? Because we want to improve. Period.
Figuring out how we can improve student experiences, contribute meaningfully to their learning while also attending to performance indicators that matter, is central to what we do. How could we not take that seriously? Any thoughtful professional in the field must be dedicated to feedback about their work that will enable them to do it better. It’s about cultivating an ethos of “positive restlessness”, a drive for continued improvement. The central challenge, as always, is arriving at some consensus or at least coherent agreement across our sector, about what we deem to be the salient measures of student success and how to ethically account for the ways we contribute to them, while never losing sight of the structural inequities that exist in the system of higher education that can be reproduced through an ahistorical “outcomes” and “managerial” approach to assessment.
So go slowly.
Again from Riyad Ahmed Shahjahan (2011, p. 201): Educators, teachers, students, and policy-makers need time, not to be given more information for decision-making or learning, but more importantly to assess what we are overlooking in educating future generations…
It bears repeating.
…to assess what we are overlooking in educating future generations.
So these are the questions we need to wrestle with:
- What do we think that we, as learning specialists, can uniquely contribute to students’ overall learning and experience?
- Why is this important, not to us, but to students?
- How do we come together in some sort of consensus about this?
- How do we best assess the extent to which we contribute to those goals for students?
- How do we then make use of that feedback to improve what we do?
- How do we remain ever-vigilant about the broader inequities present in the system of higher education?
- How do we do meaningful assessment without being complicit in these inequities?