1 Assessing Critical Thinking Dispositions in an Era of High-Stakes Standardized Testing
Carol Ann Giancarlo-Gittens
On the first day of school, fifth-grade teacher Erica Bradley waits with anxious anticipation to greet her students and to begin what she has dreamed of for years — a career of helping children to learn about amazing new subjects while becoming skilled and knowledgeable about the world around them. At the secondary school down the street, Jerome Harris, a mathematics teacher fresh from his teacher preparation program, enthusiastically describes to his students how they will be experiencing a technique called problem-based learning this semester (Duch 2001). Trained in social constructivist teaching methods, Mr. Harris is eager to guide his students through a collaborative process of meaning-making regarding real-world problems as they master the standards-based mathematics content.
It is not long into the school year, though, before Ms. Bradley is told by her principal to spend more time on reading and math because those are the subjects on the state-mandated standardized test. At the high school, Mr. Harris is approached in the break room by his mentor teacher, who conveys her concern that Mr. Harris's teaching, while admirable, needs to change. In her view, Mr. Harris does not focus enough on the basic skills the students will need to pass their state-mandated high school exit exam. This is the stressful reality: how teachers do their jobs is directly related to the performance expectations that have become part and parcel of the high-stakes standardized testing and accountability systems pervasive in K-12 education in the United States and, perhaps to a lesser extent, in Canada.
Numerous articles can be found in the educational literature that describe the history and current impact of high-stakes standardized testing on educational practice (Darling-Hammond 1985; Goertz 2003). The widespread adoption of accountability systems that rely on standardized tests to drive educational reform gained momentum in the 1980s and 1990s and has become accepted practice today. The assumptions behind the high-stakes testing movement are that testing will increase student performance outcomes, positively influence educational policy reform efforts, motivate student achievement, and increase teacher effectiveness (Stecher 2002). However, the research does not unambiguously support the validity of these assumptions.
A wide range of outcomes has resulted from the current accountability movement, many of them representing dire consequences for students and teachers alike. Documented behaviours, reported in research or in the media, include the narrowing of the curriculum to focus exclusively on the subjects covered by a state-adopted assessment instrument; increased class time spent on test-related activities to improve students' test-wiseness; increased incidences of academic dishonesty, including direct coaching, divulging of test items, and other forms of cheating; student apathy and disengagement; teacher attrition; and the encouragement of widespread testing-exemption practices for low-performing students (Darling-Hammond 1985; Jones 1997; Hoffman 2001; Stecher 2002; Neill 2003; Goldberg 2004).
Nevertheless, the practice of administering standardized assessments should not, in itself, be portrayed as the destructive agent behind these undesirable changes. A wholesale condemnation of the accountability movement denies the genuine benefits of having valid and reliable data on student performance. Test results are useful for determining whether students are meeting curricular standards. Furthermore, true progress in educational reform efforts can be accomplished only through rigorous evaluation of the efficacy of curricular change. With this said, there are clear practices in the current iteration of high-stakes standardized testing that continue to cause alarm. This chapter addresses how the use of basic-skills, factual-knowledge-oriented, state-mandated tests results in the systematic neglect of higher-order thinking skills and dispositions in the assessment process and, consequently, in classroom-based curricular design and delivery. The chapter highlights a rarely mentioned but worrisome concern: that critical thinking (CT) as an educational outcome, particularly the assessment of CT dispositions, may be an unintended casualty of high-stakes state-mandated testing programs.
Critical Thinking as an Educational Outcome
The expression “critical thinking” can be traced back to the work of John Dewey and Max Black in philosophy. It is also sometimes associated with the work of W.G. Perry and other developmentalists in cognitive psychology, where it has associations with reflective judgment, intelligence, logical thinking, and problem-solving. To some people the term is coextensive with informal logic, while others see it as an alternative way of talking about the scientific method.
There is broad consensus among critical thinking theoreticians that a central goal of education is to prepare persons who willingly and skillfully engage in CT. In short, the educational system should produce graduates who are willing and able to use their cognitive powers of analysis, interpretation, inference, evaluation, explanation, and self-monitoring meta-cognition to make purposeful judgments about what to believe or what to do (Paul 1984; Ennis 1985; Facione 1990; Carter-Wells 1992; Winn 2004). Goals 2000: Educate America Act called for all students to leave grades 4, 8, and 12 "having demonstrated competency over challenging subject matter" and for every school in America to "ensure that all students learn to use their minds well, so that they may be prepared for responsible citizenship, further learning, and productive employment in our Nation's modern economy" (United States Department of Education 1990). A national survey of employers, policy-makers, and educators found consensus that the dispositional dimension, as well as the skills dimension, of critical thinking should be considered an essential outcome of a college education (Jones 1995).
In 1990, under the sponsorship of the American Philosophical Association, a cross-disciplinary panel completed a two-year Delphi Project that yielded a robust conceptualization of critical thinking understood as an outcome of college-level education (Facione 1990). Before the Delphi Project, no clear consensus definition of critical thinking existed (Kurfiss 1988). Broadly conceived by the Delphi panelists, critical thinking was characterized as the process of purposeful, self-regulatory judgment. Throughout this cognitive, non-linear, recursive process a person gathers and evaluates evidence in order to form a judgment about what to believe or what to do in any given context. In so doing, a person engaged in critical thinking uses his or her cognitive skills to form a judgment and to monitor and improve the quality of that judgment (Facione 1990). This robust definition of critical thinking provided the conceptual framework to address the Goals 2000: Educate America Act mandate and was the focus of a replication study of the definition and valuation of critical thinking that resulted in a consensus among educators, employers, and policy-makers alike (Jones 1994). The Delphi Report’s consensus expression of critical thinking was vital to advancing the national conversation beyond semantic disputations and into the more important realm of measurement.
The Disposition Toward Critical Thinking
Contemporary critical thinking scholars acknowledge that any discussion of critical thinking must include both thinking skills and thinking attitudes, or dispositions. The phrase critical thinking disposition refers to a person's internal motivation to think critically when faced with problems to solve, ideas to evaluate, or decisions to make (Facione 1997; Giancarlo 2004). These attitudes, values, and inclinations are dimensions of one's personality and motivational style that relate to how likely a person is to approach decision-making contexts or problem-solving situations using his or her reasoning skills. Honing one's critical thinking skills, and developing the disposition to use those skills, is vital for success both in school and throughout a person's life. It is not sufficient for educators to nurture students' cognitive skills if, when faced with a decision about what to do or what to believe, the students fail to exercise what they have learned. When making decisions, students must apply sound reasoning rather than other strategies such as the passive and unquestioning acceptance of popular or consensus opinion. Valuing the disposition toward critical thinking as an educational outcome is a declaration of the centrality of this characterological dimension of the critical thinking process. It is only through the combined effort to teach thinking skills while nurturing the desire to be a confident and capable thinker that we will produce future generations of leaders capable of solving the significant global challenges of the modern world (e.g., global warming, poverty, HIV/AIDS).
The dispositional portrait of the ideal critical thinker was described by the Delphi experts as follows:
The ideal critical thinker is habitually inquisitive, well-informed, honest in facing personal biases, prudent in making judgments, willing to reconsider, clear about issues, orderly in complex matters, diligent in seeking relevant information, reasonable in the selection of criteria, focused in inquiry, and persistent in seeking results which are as precise as the subject and the circumstances of inquiry permit. (Facione 1990, 2)
Until recently, the assessment of a student's critical thinking focused nearly exclusively on CT skills. It was not until the publication of the California Critical Thinking Disposition Inventory (CCTDI) in 1992 that researchers and educators had an instrument with which to assess a person's disposition toward critical thinking (Facione 1992; 2006). The CCTDI captures the Delphi description of the ideal critical thinker in terms of seven non-orthogonal subscales: truth-seeking, open-mindedness, analyticity, systematicity, CT self-confidence, inquisitiveness, and cognitive maturity. The introduction of the CCTDI led to investigations demonstrating a connection between critical thinking skills and dispositions, and the value of CT dispositions for the prediction of educational success (Colucciello 1997; Walsh 1999; Kakai 2000; Zoller 2000; Giancarlo and Facione 2001; Giancarlo 2004; Nokes 2005; Lampart 2006).
The Impact of High-Stakes Testing on Educating for Critical Thinking Dispositions
Critical thinking is widely recognized as a liberating force in education and a powerful resource in one's personal and civic life. Many educators and researchers would concur that critical thinking instruction is vital in the K-12 curriculum (Lipman 1987; Kuhn 1990). Educators and scholars recommend that critical thinking instruction in the K-12 curricula develop CT skills and foster the disposition to use those skills as preparation for both college and later life. The challenge is to reconcile this educational goal with the goals of high-stakes standardized testing (Chudowsky 2003). Tests that require only limited, lower-level thinking activities, such as the memorization and recall of basic facts and skills, are not sufficient to meet the goal of educating students to become thinking members of society.
High-stakes testing and accountability programs have a direct impact on curriculum and instruction at the elementary and secondary levels (DiMartino 2007). Abrams and Madaus (2003) outline seven principles to describe consistent ways in which high-stakes testing affects teaching and learning. Most relevant to this discussion are principles 4 and 5. Principle 4 states, "In every setting where high-stakes tests operate, the exam content eventually defines the curriculum" (33). Closely related to this phenomenon is the practice captured in Principle 5: "Teachers pay attention to the form of the questions of high-stakes tests (short-answer, essay, multiple-choice, and so on) and adjust their instruction accordingly" (33). Through these principles the authors draw attention to influences such as the symbolic and perceptual importance of high-stakes testing, and the power high-stakes testing practices have to compromise the validity of test scores through an over-emphasis on test preparation behaviours. The power to corrupt educational practice stems from the fact that the more likely a test result is to be used for major educational decisions, the more likely a teacher is to "teach to the test." Ample research suggests that teachers shift the emphasis placed on the core content areas taught in the classroom until it becomes nearly synonymous with the content included on state tests (Stecher 2002; Goldberg 2004).
It is clear that high-stakes testing affects the K-12 curriculum. This impact is not limited, however, to the content being addressed. The thinking skills required by the assessment instruments also influence the instructional strategies teachers employ in their classrooms (DiMartino 2007). When state-mandated tests demand limited, lower-level thinking activities, such as the memorization and recall of basic facts and skills, they evoke the epistemological view of learning consistent with the tenets underlying direct instruction: learning is best accomplished when subject-matter skills and knowledge are broken into their component parts and taught to students in a carefully planned, sequenced, and structured manner that is teacher-centred (Palincsar 1998). For the acquisition of knowledge structures such as facts, rules, and action sequences, direct instruction is the preferred teaching method (Borich 2004). This contrasts with the instructional techniques that serve to teach students broad concepts and abstractions, and to nurture critical thinking skills and dispositions. Indirect instructional strategies that emphasize inquiry, discovery, and engaging students in the construction of meaning, such as problem-based learning, are viewed as optimal when the cognitive activities associated with higher-order thinking are the educational aim (Palincsar 1998; Borich 2004). Results from national surveys of teachers provide undeniable evidence of a disconcerting shift toward direct instructional techniques that emphasize basic skills. This emphasis is now common practice, a move away from more innovative teaching approaches such as team-teaching, creative and divergent thinking projects, long-term integrative units, and collaborative problem-solving (Costigan 2002; Pedulla 2003a; Pedulla 2003b; Taylor 2003).
The centrality of testing programs as a force to be reckoned with for new and experienced teachers alike, and the ramifications of the pressure to teach in prescribed, restricted ways, have been identified as potential threats to teacher retention. This issue was raised by Costigan (2002), who has written about the effects of the "Culture of High-Stakes Testing" on new teachers. Based on his work with beginning teachers in New York City, he describes how new teachers cope with the realization that mandated testing quickly becomes a primary focus in everyday classroom practice. Teachers in Costigan's study are quoted as saying that the pressure they experience from their principals to teach in a prescribed, direct-instruction fashion has made them frustrated and emotionally distraught to the point of questioning their vocational decision. The frustration and stress these teachers convey stem from the pressure to focus their teaching on only those activities that will help their students pass the tests. For these teachers, it meant they could not implement creative activities that they felt would motivate students and engage them in meaningful learning (Costigan 2002).
In this era of high-stakes testing, one might wonder what exactly new teachers are being taught about best practices for instruction. In teacher education methods courses, which are geared toward the teaching of the content areas, increased attention is being paid to instructional practices that encourage thinking and the active engagement of students in their own learning. Topics such as student-centred instruction, collaborative problem-solving, problem-based or project-based learning, and constructivist pedagogy are commonplace. Instructional practices such as these have been shown to enhance students' critical thinking, including engaging students in critical thinking, modeling critical thinking behaviour, and creating a climate of inquiry in the classroom (Facione 1998; 2008). Furthermore, these instructional strategies represent what is known about how to maximize student motivation, engagement, and, ultimately, deeper understanding (Costa 1989; Johnson 2008). As was outlined above, ample research evidence suggests a close connection between critical thinking and educational success (Baron 1987; Giancarlo 1994; Facione 1995; Williams 2006; McCall 2007). In a well-designed study, Williams et al. (2006) found that critical thinking skills scores explained a significant proportion of the variance in dental hygiene students' national board examination scores, over and above all other measured variables.
Assessing Critical Thinking Dispositions among K-12 Learners
The majority of studies examining CT dispositions in relation to the academic experience have concentrated on post-secondary learners; to date, little is known about the critical thinking dispositions of elementary and secondary learners. This gap in the literature persisted until a dispositional assessment tool suitable for use among adolescent and younger learners was developed. In 2000, the California Measure of Mental Motivation (CM3) was introduced as a valid measure of the disposition toward critical thinking among adolescent students (Giancarlo 2004). Since the initial publication of the validation work underlying the CM3 (henceforth known as the CM3 Level II, for secondary students), three additional levels of the instrument have been developed: Level Ia for kindergarten through Grade 2 (primary), Level Ib for Grades 3-5 (upper elementary), and Level III for post-secondary students and adults (Giancarlo 2006). Students who complete the CM3 Level Ia are asked to circle, directly on the survey booklet, the face that shows whether each sentence is true or false about them. CM3 Levels Ib, II, and III utilize separate answer sheets or can be administered in an online environment.
The CM3 is designed to measure the degree to which an individual is cognitively engaged and mentally motivated toward intellectual activities that involve reasoning. The dispositional domains measured by the CM3 are not linked to any particular curricular area. All forms of the CM3 (Levels Ia, Ib, II, and III) target four main dispositional aspects of critical thinking: learning orientation, mental focus, cognitive integrity, and creative problem-solving. These four domains of mental motivation can be identified in the writings of many researchers who have investigated how students differ in their problem-solving and decision-making (Ames 1984; Fisher 1990; Graham 1991). The four scales of the CM3 can be defined as follows:[1]
Learning Orientation: High scores in learning orientation indicate a motivation or desire to increase one's knowledge and skill base. These individuals value learning for learning's sake and express an eagerness to engage in the learning process. They express an interest in engaging in challenging activities and endorse information-seeking as a personal strategy when problem-solving. Low scores indicate a muted desire to learn about new or challenging topics. These individuals express a lack of willingness to explore or research an issue and may even purposefully avoid opportunities to learn and understand. They will attempt to answer questions with the information they have at hand rather than seeking out new information.
Mental Focus: High scores in mental focus indicate self-reported diligence, focus, systematicity, task-orientation, organization, and clear-headedness. While engaged in a mental activity, these individuals tend to be focused, persistent, and comfortable with the problem-solving process. Low scores indicate a compromised ability to regulate attention and a tendency toward disorganization and procrastination. These individuals may also express frustration with their ability to approach solving problems.
Cognitive Integrity: High scores in cognitive integrity indicate motivation to use one's thinking skills in a fair-minded fashion. These individuals are positively disposed toward seeking the truth and being open-minded, and are comfortable with complexity; they enjoy thinking about and interacting with others of potentially varying viewpoints in the search for the truth or the best decision. Low scores indicate the expression of a viewpoint best characterized as cognitive resistance. These individuals are hasty, indecisive, uncomfortable with complexity and change, and likely to be anxious and closed-minded.
Creative Problem Solving: High scores in creative problem solving indicate a tendency to approach problem solving with innovative or original ideas and solutions. These individuals pride themselves on their creative nature, and this creativity is likely to manifest itself in a desire to engage in challenging activities such as puzzles, games of strategy, and understanding the underlying function of objects. For these individuals, there is a stronger sense of personal satisfaction in engaging in complex or challenging activities than in participating in activities perceived to be easy. Low scores reflect the absence of feelings of personal imaginativeness or originality, which manifests itself in a tendency to avoid challenging activities; these individuals will choose easier activities over challenging ones.
The following sample items and response formats are from the CM3 family of instruments:[2]
Level Ia (25 items; grades K-2; true/false): "Sometimes I stop listening even when I know I should be paying attention."
Level Ib (25 items; grades 3-5; true/false): "I like learning things that are hard for me when I first try them."
Level II (72 items; grades 6-12; 1-4 scale, strongly disagree to strongly agree): "No matter what the subject, I am eager to know more about it."
Level III (72 items; post-secondary; 1-4 scale, strongly disagree to strongly agree): "I like trying to figure out how something works."
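For readers who work with instrument data, the following minimal sketch (in Python) illustrates how responses to a CM3-style Likert inventory could be aggregated into the four scale scores described above. Every item number, scale assignment, and reverse-keyed item here is a hypothetical illustration; the actual CM3 item content and scoring key belong to the publisher.

```python
# Hypothetical sketch of Likert-scale scoring for a CM3-style inventory.
# The item numbers, scale assignments, and reverse-keyed items below are
# invented for illustration; the actual CM3 scoring key is proprietary.

# Map each (hypothetical) item number to one of the four dispositional scales.
SCALE_ITEMS = {
    "learning_orientation":     [1, 5, 9, 13],
    "mental_focus":             [2, 6, 10, 14],
    "cognitive_integrity":      [3, 7, 11, 15],
    "creative_problem_solving": [4, 8, 12, 16],
}

# Items whose wording runs opposite to the scale (e.g., "Sometimes I stop
# listening...") would be reverse-keyed before aggregation.
REVERSE_KEYED = {6, 10}

MIN_RESPONSE, MAX_RESPONSE = 1, 4  # "strongly disagree" .. "strongly agree"

def score_scales(responses: dict[int, int]) -> dict[str, float]:
    """Average each scale's item responses, reverse-keying where needed."""
    scores = {}
    for scale, items in SCALE_ITEMS.items():
        values = []
        for item in items:
            r = responses[item]
            if item in REVERSE_KEYED:
                r = MIN_RESPONSE + MAX_RESPONSE - r  # flip 1<->4, 2<->3
            values.append(r)
        scores[scale] = sum(values) / len(values)
    return scores

if __name__ == "__main__":
    # One respondent's answers to the 16 hypothetical items (dummy data).
    responses = {i: ((i * 3) % 4) + 1 for i in range(1, 17)}
    for scale, score in score_scales(responses).items():
        print(f"{scale}: {score:.2f}")
```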
Reliability and validity studies have been conducted with the CM3 Level II instrument. Among secondary students, the scales of the CM3, as measures of the disposition toward critical thinking, have been shown to have strong positive correlations with academic motivation goals, academic self-efficacy, and self-regulation (Urdan 2001; Giancarlo 2004). Findings also demonstrate significant negative correlations between the CM3 and measures of self-handicapping and fear of failure. In relation to indicators of academic achievement and critical thinking skills, Giancarlo, Blohm, and Urdan (2004) report that the scales of the CM3 were positively correlated with all five content-area tests of the Stanford 9 (Stanford University 1996). Other validity studies, conducted by the publisher as part of the instrument research and development process (https://www.insightassessment.com/article/quality-validity-and-reliability), have revealed positive correlations with the Naglieri Nonverbal Ability Test (Naglieri 1988) and The Test of Everyday Reasoning (Facione 2000). In summary, the assessment literature on critical thinking dispositions at the K-12 level, and on their relationship to critical thinking skills and academic achievement indicators, can be expected to grow at a rapid pace now that the CM3 is available to educators and researchers alike.
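To make the convergent-validity analyses described above concrete, here is a minimal sketch of the underlying computation: a Pearson correlation between a dispositional scale score and an external achievement indicator. The data are fabricated stand-ins, not CM3 results, and the function is a generic implementation rather than the publisher's procedure.

```python
import math

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson product-moment correlation between two paired samples."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Fabricated example: five students' mental-focus scale scores paired with a
# hypothetical achievement indicator (e.g., a content-area test score).
mental_focus = [2.8, 3.4, 1.9, 3.1, 2.5]
achievement  = [61.0, 78.0, 48.0, 70.0, 59.0]

print(f"r = {pearson_r(mental_focus, achievement):.2f}")  # a positive correlation
```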
Authentic Assessments: Are They a Solution?
There is growing acknowledgment in the educational assessment "best practices" literature that the evaluation of authentic student work products is the preferred method for measuring student learning outcomes (Allen 2006). There is reason to be hopeful that the trend in high-stakes testing is expanding to include not only the basic, core-content proficiencies but also assessment tools that are more authentic and curriculum-based. Authentic assessments, particularly when they are tied to real-world problems, require students to demonstrate not only content knowledge but also the applied skills they have acquired through instruction (DiMartino 2007). Students must recognize the appropriate skills to apply to the problem context and be inclined to engage in these cognitive endeavours, whether that means exercising creative problem-solving in anticipating consequences and envisioning alternatives, or giving open-minded consideration to competing viewpoints and diverse perspectives on the topic at hand. In the classroom, this can include assessments based on live performances, such as speeches, debates, presentations, talk-aloud processes during problem-solving, and dramatic performances. Lest one think that the assessment of authentic student performances precludes a paper-and-pencil or large-group administration modality, the concept of authentic assessment can be applied to standardized testing because it encompasses the evaluation of outcomes or products of student work, such as essays, poems, short stories, and works of art (Taylor 2005).
Several states are exploring more innovative testing programs that permit students to respond to open-ended and free-response test item formats. For example, reporting on a study of 257 Grade 10 English, math, and science teachers in the state of Massachusetts, Vogler (2002) found that teachers were making observable changes in their instruction to give greater emphasis to creative and critical thinking, inquiry-based learning, and problem-solving activities. Teachers in this study attributed these instructional changes to the desire to help students perform well on the Massachusetts Comprehensive Assessment System (MCAS), a performance-based assessment tool that has been used in the state of Massachusetts since 1998 (Vogler 2002).
Other investigations into the effects of performance-based assessments on teaching practice have shown promising results: instructional emphasis on higher-order thinking and problem-solving has remained intact and in fact increased (Koretz 1996; Vogler 2002). The benefits of an instructional focus on higher-order thinking are not restricted to improved cognitive skills. Tiwari, Lai, So, and Yuen (2006) have demonstrated that problem-based learning strategies in the classroom can lead to gains in critical thinking dispositions.
A recent entrant into the large-scale assessment arena is the Collegiate Learning Assessment (CLA 2007), available from the Council for Aid to Education for use at the post-secondary level. Used to ascertain "added value" in terms of student learning gains at the level of the institution rather than the individual student, the CLA uses an open-ended question format that requires respondents to provide narrative responses, which are then scored with a focus on the student's ability to make and critique an argument in the context of a performance task. The value of the CLA as a measure of critical thinking at the college level is untested and will, no doubt, be the focus of numerous research investigations. It remains to be seen what impact tools emphasizing performance-based testing formats will have on the widely accepted standardized testing strategies that characterize the contemporary K-12 educational environment. Any assessment plan for measuring learning outcomes can take the approach of measuring only a representative sample of students; the developers of the CLA suggest this approach, providing only institutional indicators rather than individual student results. This approach to assessment should be watched for its impact on maintaining classroom instruction that is grounded in inquiry and inclusive of both critical thinking skills and dispositions.
Conclusion
Care must be taken not to let accountability systems lead to the egregious neglect of breadth of content coverage, inquiry-based pedagogical techniques, and assessment strategies. Many standardized tests continue to rely on question formats that tap factual content knowledge, or in other words, questions that demand thinking at the lowest levels of Bloom's taxonomy (1956), Knowledge and Comprehension. Furthermore, it is inadequate to assess critical thinking skills alone and disregard the dispositional dimension of critical thinking, given the demonstrated relationship between dispositions and conventional indicators of student academic achievement. It is imperative to require students to demonstrate not only higher-order thinking and problem-solving skills but also critical thinking dispositions. State-mandated standardized testing programs must likewise be held accountable for effectively assessing not only basic knowledge and content standards but also those curriculum standards that ensure students are both willing and able to engage in higher-order thinking.
The power wielded by the architects of accountability systems and mandated high-stakes testing programs must be directed toward positively affecting and maintaining our dedication to critical thinking as a central student learning outcome. We are committed at this time to the administration of standardized tests, and to the high-stakes decisions that are often linked to test results. At the highest levels there is faith in testing as the piston that can provide the driving force for the reform of the American educational system. "Buy-in" on the part of the general public and the educational community is commanding, and therefore testing compels pedagogical and curricular changes in the classroom. When there is faith in the goals and a presumptive validity of the testing program, teachers modify their practice in order to boost scores on the tests. If state-mandated tests require critical and creative higher-order thinking, then student-centred teaching methods that promote critical thinking skills and dispositions and active learning will be implemented. The end result is high-quality teaching and the achievement of higher-level learning outcomes.
Negative trends related to high-stakes testing are changing the educational landscape of today’s classrooms. These effects must be reversed if students are to receive a complete education that will prepare them for the complexities of the world we live in. If real improvement of schools is the goal, then we must recognize that the path to success is through teaching for deeper learning and understanding, not through teaching to a domain-restricted test. Only then will the goals of the accountability movement be actualized.
References
Abrams, L., and G. Madaus. 2003. The lessons of high-stakes testing. Educational Leadership 61(3): 31-5.
Allen, M. 2006. Assessing general education programs. San Francisco: Jossey-Bass.
Ames, C. 1984. Competitive, cooperative, and individualistic goal structures: A motivational analysis. In Research on motivation in education, ed. R.A. Ames, 177-207. New York: Academic Press.
Baron, J. 1987. Evaluating thinking skills in the classroom. In Teaching thinking skills: Theory and practice, ed. J.B. Baron and R.J. Sternberg, 221-47. New York: W.H. Freeman.
Bloom, B., and D. Krathwohl. 1956. Taxonomy of educational objectives: The classification of educational goals, by a committee of college and university examiners. New York: Longman & Green.
Borich, G. 2004. Effective teaching methods. Upper Saddle River, NJ: Prentice Hall.
Carter-Wells, J. 1992. Defining, teaching, and assessing critical thinking in a multicultural context. Washington, DC: Association of American Colleges.
Chudowsky, N., and J. Pellegrino. 2003. Large-scale assessments that support learning: What will it take? Theory Into Practice 42(1): 75-83.
Colucciello, M. 1997. Critical thinking skills and dispositions of baccalaureate nursing students—A conceptual model for evaluation. Journal of Professional Nursing 13(4): 236-45.
Costa, A., and L. Lowery. 1989. Techniques for teaching thinking. Pacific Grove, CA: Critical Thinking Books and Software.
Costigan, A. 2002. Teaching the culture of high stakes testing: Listening to new teachers. Action in Teacher Education 23(4): 35-42.
Darling-Hammond, L., and A. Wise. 1985. Beyond standardization: State standards and school improvement. The Elementary School Journal 85(3): 315-6.
Duch, B., S. Groh, and D. Allen. 2001. The power of problem-based learning: A practical "how to" for teaching undergraduate courses in any discipline. Sterling, VA: Stylus Publishing.
Ennis, R. 1985. A logical basis for measuring critical thinking skills. Educational Leadership 43(2): 44-8.
Facione, P. 2000. The test of everyday reasoning. Millbrae, CA: Academic Press.
———. 1990. Critical thinking: A statement of expert consensus for purposes of educational assessment and instruction. The Delphi Report: Research findings and recommendations prepared for the committee on pre-college philosophy. Washington, DC: American Philosophical Association.
Facione, P., N. Facione, and C. Giancarlo. 2006 [1992]. Test manual: The California critical thinking disposition inventory. Millbrae, CA: Academic Press.
———. 1997. The motivation to think in working and learning. In Preparing competent college graduates: Setting new and higher expectations for student learning, ed. A. Jones, 67-79. San Francisco: Jossey-Bass.
Facione, P., N. Facione, S. Blohm, and C. Giancarlo. 2008 [1998]. Test manual: The California critical thinking skills test, Revised edition. Millbrae, CA: Academic Press.
Facione, P., C. Giancarlo, N. Facione, and J. Gainen. 1995. The disposition toward critical thinking. Journal of General Education 44(1): 1-25.
Fisher, R. 1990. Teaching children to think. Oxford: Basil Blackwell.
Giancarlo, C. 2006. California Measure of Mental Motivation (CM3): An inventory of critical thinking dispositions. User Manual Supporting Levels IA, IB, and III Grades K-2, 3-5, 6-12, and Adults. Millbrae, CA: Academic Press.
Giancarlo, C., S. Blohm, and T. Urdan. 2004. Assessing secondary students’ disposition toward critical thinking: Development of the California measure of mental motivation. Educational and Psychological Measurement 64 (2): 347-64.
Giancarlo, C., and N. Facione. 1994. A study of the critical thinking disposition and skill of Spanish and English speaking students at Camelback High School. Millbrae, CA: Phoenix Union High School District.
Giancarlo, C., and P. Facione. 2001. A look across four years at the disposition toward critical thinking among undergraduate students. Journal of General Education 50(1): 29-55.
Goertz, M., and M. Duffy. 2003. Mapping the landscape of high-stakes testing and accountability programs. Theory Into Practice 42(1): 4-11.
Goldberg, M. 2004. The high-stakes testing mess. The Educational Digest 69(8): 8-15.
Graham, S., and S. Golan. 1991. Motivational influences on cognition: Task involvement, ego involvement and depth of information processing. Journal of Educational Psychology 83: 187-94.
Hoffman, J., L. Assaf, and S. Paris. 2001. High-stakes testing in reading: Today in Texas, tomorrow? The Reading Teacher 54(5): 482-92.
Johnson, L.S. 2008. Relationship of instructional methods to student engagement in two public schools. American Secondary Education 36(2): 69.
Jones, E., S. Corrallo, P. Facione, and G. Ratcliff. 1994. Developing consensus for critical thinking. Washington, DC: American Association of Higher Education.
Jones, E., S. Hoffman, L. Moore, G. Ratcliff, S. Tibbetts, and B. Click. 1995. National assessment of college student learning: Identifying the college graduate’s essential skills in writing, speech and listening, and critical thinking. Washington, DC: National Center for Educational Statistics.
Jones, K., and B. Whitford. 1997. Kentucky’s conflicting reform principles: High-stakes school accountability and student performance assessment. Phi Delta Kappan 79(4): 276-81.
Kakai, H. 2000. The use of cross-cultural studies and experiences as a way of fostering critical thinking dispositions among college students. Journal of General Education 49(2): 110-31.
Koretz, D., S. Barron, K. Mitchell, and B. Stecher. 1996. The perceived effects of the Kentucky Instructional Results Information System (KIRIS). Santa Monica, CA: RAND.
Kuhn, D. 1990. Education for thinking: What can psychology contribute? In Promoting cognitive growth over the lifespan, ed. M. Schwebel, C.A. Mahler, and N.S. Fagley, 35-45. Hillsdale, NJ: Lawrence Erlbaum.
Kurfiss, J. 1988. Critical thinking: Theory, research, practice and possibilities. In ASHE-ERIC Higher Education Report. Washington, DC: Association for the Study of Higher Education.
Lampart, N. 2006. Critical thinking dispositions as an outcome of art education. Studies in Art Education 47(3): 215-28.
Lipman, M. 1987. Some thoughts on the foundations of reflective education. In Teaching thinking skills: Theory and practice, ed. J.B. Baron and R.J. Sternberg, 151-61. New York: W.H. Freeman.
McCall, K., E. MacLaughlin, D. Fike, and B. Ruiz. 2007. Preadmission predictors of PharmD graduates' performance on the NAPLEX. American Journal of Pharmaceutical Education 71(1): 5.
Naglieri, J. 1988. Naglieri nonverbal ability test: Individual administration kit. San Antonio: Harcourt Assessment.
Neill, M. 2003. The dangers of testing. Educational Leadership 60(5): 43-6.
Nokes, K., D. Nickitas, and R. Keida. 2005. Does service-learning increase cultural competency, critical thinking and civic engagement? Journal of Nursing Education 44(2): 65-70.
Palincsar, A. 1998. Social constructivist perspectives on teaching and learning. Annual Review of Psychology 49: 345-75.
Paul, R. 1984. Critical thinking: Fundamental to education for a free society. Educational Leadership 42(1): 4-14.
Pedulla, J. 2003a. State-mandated testing: What do teachers think? Educational Leadership 61(3): 42-9.
Pedulla, J., L. Abrams, G. Madaus, M. Russell, M. Ramos, and J. Miao. 2003b. Perceived effects of state-mandated testing programs on teaching and learning: Findings from a national survey of teachers. Boston: National Board on Educational Testing and Public Policy, Boston College.
Stanford University. 1996. Stanford Achievement Test Series, Ninth Edition. Complete Battery. Online. Available at http://harcourtassessment.com/haiweb/cultures/enus/productdetail.htm?pid=E132C. Last retrieved October 12, 2007.
Stecher, B. 2002. Consequences of large-scale, high-stakes testing on school and classroom practice. In Making sense of test-based accountability in education, ed. L.S. Hamilton, B.M. Stecher, and S.P. Klein, 79-100. Santa Monica, CA: RAND.
Taylor, C., and S. Nolen. 2005. Classroom assessment: Supporting teaching and learning in real classrooms. Upper Saddle River, NJ: Pearson Education.
Taylor, G., L. Shepard, F. Kinner, and J. Rosenthal. 2003. A survey of teachers’ perspectives on high-stakes testing in Colorado: What gets taught, what gets lost. Los Angeles: Center for Research on Evaluation, Standards, and Student Testing.
The Collegiate Learning Assessment. 2007. Online. Available at http://www.cae.org/content/pro_collegiate.htm#. Last retrieved October 10, 2007.
Tiwari, A., P. Lai, M. So, and K. Yuen. 2006. A comparison of the effects of problem-based learning and lecturing on the development of students’ critical thinking. Medical Education 40(6): 547-54.
United States Department of Education. 1990. National goals for education. Washington, DC: U.S. Government Printing Office.
Urdan, T., and C. Giancarlo. 2001. A comparison of motivational and critical thinking orientations across ethnic groups. In Research on sociocultural influences on motivation and learning, ed. D.M. McInerney and S. Van Etten. Greenwich: Information Age.
Vogler, K. 2002. The impact of high-stakes, state-mandated student performance assessment on teachers’ instructional practices. Education 123(1): 39-55.
Walsh, C., and R. Hardy. 1999. Dispositional differences in critical thinking related to gender and academic major. Journal of Nursing Education 38(4): 149-55.
Williams, K., C. Schmidt, T. Tilliss, K. Wilkins, and D. Glasnapp. 2006. Predictive validity of critical thinking skills and disposition for the national board dental hygiene examination: A preliminary investigation. Journal of Dental Education 70(5): 536-44.
Winn, I. 2004. The high cost of uncritical thinking. Phi Delta Kappan 85(7): 496.
Zoller, U., D. Ben-Chaim, and S. Ron. 2000. The disposition toward critical thinking of high school and university science students: An interintra Israeli-Italian study. International Journal of Science Education 22(6): 571-82.
[1] Reprinted, with permission, from the test manual for the California Measure of Mental Motivation: C.A. Giancarlo, California Measure of Mental Motivation (CM3): An inventory of critical thinking dispositions. User Manual Supporting Levels IA, IB, II, and III, Grades K-2, 3-5, 6-12, and Adults (Millbrae, CA: The California Academic Press, 2006).
[2] Reprinted, with permission, from the test manual for the California Measure of Mental Motivation, ibid.