Introduction
Standardized testing can be understood, generally, as testing “designed to assess the knowledge and understanding a student has acquired of a school subject” and, more specifically, as testing that must be administered and “scored in the same way, whenever and wherever it is used” (Traub 1994, 5). In this part of the book, Carol Ann Giancarlo-Gittens, Leo Groarke, Ralph H. Johnson, and Robert H. Ennis explore the question of how different standardized and non-standardized tests of critical thinking can be valid, that is, how accurately they can measure an adequate range of critical thinking dispositions or skills.
In the first chapter, Giancarlo-Gittens introduces the problems standardized testing creates for teachers and students of critical thinking. She then discusses the testing of critical thinking dispositions as one way to address some common problems. In Chapter Two, Groarke argues in favour of accountability and supports the attempt to design and administer adequate tests of critical thinking skills. Groarke believes that such tests are required to judge the effectiveness of the many competing approaches to critical thinking education. But he argues that one of the early and popular critical thinking tests devised by philosophers (in the “Delphi Project”) — the California Critical Thinking Skills Test — does not validly measure critical thinking skills. In criticizing the test, he enumerates a range of skills belonging to the exercise of critical thinking, a set of skills that would need to be incorporated for adequate testing (and so teaching) of critical thinking to occur.
In Chapter Three, Johnson builds upon the work he has done elsewhere on “the dialectical tier” (Johnson 2000). The “dialectical tier” comprises the notion that arguments must be judged not only in terms of their logical cogency but also in terms of how well arguers recognize and anticipate objections to their views. In keeping with his broader point that studies of argument have not paid enough attention to the dialectical tier, Johnson contends that the same can be said of critical thinking tests.
Despite their concerns about existing tests, Groarke and Johnson — like Giancarlo-Gittens — remain optimistic about the possibility of developing valid critical thinking tests that will work toward improving critical thinking education. These authors stress the importance of thinking about ways to improve current tests and to create new instruments that more adequately cover the different facets of critical thinking. In Chapter Four, Ennis discusses the series of tests that he thinks may be the best available measures of critical thinking — the Cornell Critical Thinking Tests (Levels X and Z). He explains not only how this series of tests has been consistently evaluated for validity but also, more generally, provides a methodology for testing the validity of any critical thinking measure of this sort. Ennis thus outlines a methodology that should be applied if and when others attempt to develop new standardized tests, especially alternative kinds of assessment that purport to support critical thinking as the main goal of instruction.
References
Johnson, R. H. 2000. Manifest rationality: A pragmatic theory of argument. Mahwah, NJ: Lawrence Erlbaum Associates.
Traub, R. 1994. Standardized testing in Canada: A survey of standardized achievement testing by ministries of education and school boards. Toronto: Canadian Education Association.