3 Conducting Psychology Research in the Real World
Original chapter by Matthias R. Mehl adapted by the Queen’s University Psychology Department
This Open Access chapter was originally written for the NOBA project. Information on the NOBA project can be found below.
Because of its ability to determine cause-and-effect relationships, the laboratory experiment is traditionally considered the method of choice for psychological science. One downside, however, is that in carefully controlling conditions and their effects, it can yield findings that are out of touch with reality and have limited use when trying to understand real-world behavior. This module highlights the importance of also conducting research outside the psychology laboratory, within participants’ natural, everyday environments, and reviews existing methodologies for studying daily life.
Learning Objectives
- Identify limitations of the traditional laboratory experiment.
- Explain ways in which daily life research can further psychological science.
- Know what methods exist for conducting psychological research in the real world.
Introduction
The laboratory experiment is traditionally considered the “gold standard” in psychology research. This is because only laboratory experiments can clearly separate cause from effect and therefore establish causality. Despite this unique strength, it is also clear that a scientific field that is mainly based on controlled laboratory studies ends up lopsided. Specifically, it accumulates a lot of knowledge on what can happen—under carefully isolated and controlled circumstances—but it has little to say about what actually does happen under the circumstances that people actually encounter in their daily lives.
For example, imagine you are a participant in an experiment that looks at the effect of being in a good mood on generosity, a topic that may have a good deal of practical application. Researchers create an internally-valid, carefully-controlled experiment where they randomly assign you to watch either a happy movie or a neutral movie, and then you are given the opportunity to help the researcher out by staying longer and participating in another study. If people in a good mood are more willing to stay and help out, the researchers can feel confident that – since everything else was held constant – your positive mood led you to be more helpful. However, what does this tell us about helping behaviors in the real world? Does it generalize to other kinds of helping, such as donating money to a charitable cause? Would all kinds of happy movies produce this behavior, or only this one? What about other positive experiences that might boost mood, like receiving a compliment or a good grade? And what if you were watching the movie with friends, in a crowded theatre, rather than in a sterile research lab? Taking research out into the real world can help answer some of these sorts of important questions.
As one of the founding fathers of social psychology remarked, “Experimentation in the laboratory occurs, socially speaking, on an island quite isolated from the life of society” (Lewin, 1944, p. 286). This module highlights the importance of going beyond experimentation and also conducting research outside the laboratory (Reis & Gosling, 2010), directly within participants’ natural environments, and reviews existing methodologies for studying daily life.
Rationale for Conducting Psychology Research in the Real World
One important challenge researchers face when designing a study is to find the right balance between ensuring internal validity, or the degree to which a study allows unambiguous causal inferences, and external validity, or the degree to which a study ensures that potential findings apply to settings and samples other than the ones being studied (Brewer, 2000). Unfortunately, these two kinds of validity tend to be difficult to achieve at the same time, in one study. This is because creating a controlled setting, in which all potentially influential factors (other than the experimentally manipulated variable) are controlled, is bound to create an environment that is quite different from what people naturally encounter (e.g., using a happy movie clip to promote helpful behavior). However, it is the degree to which an experimental situation is comparable to the corresponding real-world situation of interest that determines how generalizable potential findings will be. In other words, if an experiment is very far off from what a person might normally experience in everyday life, you might reasonably question just how useful its findings are.
Because these two types of validity are so difficult to achieve simultaneously, one is often—by design—prioritized over the other. Due to the importance of identifying true causal relationships, psychology has traditionally emphasized internal over external validity. However, in order to make claims about human behavior that apply across populations and environments, researchers complement traditional laboratory research, where participants are brought into the lab, with field research where, in essence, the psychological laboratory is brought to participants. Field studies allow for the important test of how psychological variables and processes of interest “behave” under real-world circumstances (i.e., what actually does happen rather than what can happen). They can also facilitate “downstream” operationalizations of constructs that measure life outcomes of interest directly rather than indirectly.
Take, for example, the fascinating field of psychoneuroimmunology, where the goal is to understand the interplay of psychological factors – such as personality traits or one’s stress level – and the immune system. Highly sophisticated and carefully controlled experiments offer ways to isolate the variety of neural, hormonal, and cellular mechanisms that link psychological variables such as chronic stress to biological outcomes such as immunosuppression (a state of impaired immune functioning; Sapolsky, 2004). Although these studies demonstrate impressively how psychological factors can affect health-relevant biological processes, they—because of their research design—remain mute about the degree to which these factors actually do undermine people’s everyday health in real life. It is certainly important to show that laboratory stress can alter the number of natural killer cells in the blood. But it is equally important to test to what extent the levels of stress that people experience on a day-to-day basis result in them catching a cold more often or taking longer to recover from one. The goal for researchers, therefore, must be to complement traditional laboratory experiments with less controlled studies under real-world circumstances. The term ecological validity is used to refer to the degree to which an effect has been obtained under conditions that are typical for what happens in everyday life (Brewer, 2000). In this example, then, people might keep a careful daily log of how much stress they are under as well as noting physical symptoms such as headaches or nausea. Although many factors beyond stress level may be responsible for these symptoms, this more correlational approach can shed light on how the relationship between stress and health plays out outside of the laboratory.
An Overview of Research Methods for Studying Daily Life
Capturing “life as it is lived” has long been a goal for some researchers. Wilhelm and his colleagues recently published a comprehensive review of early attempts to systematically document daily life (Wilhelm, Perrez, & Pawlik, 2012). Building on these original methods, researchers have, over the past decades, developed a broad toolbox for measuring experiences, behavior, and physiology directly in participants’ daily lives (Mehl & Conner, 2012). Figure 1 provides a schematic overview of the methodologies described below.
Studying Daily Experiences
Starting in the mid-1970s, motivated by a growing skepticism toward highly controlled laboratory studies, a few groups of researchers developed a set of new methods that are now commonly known as the experience-sampling method (Hektner, Schmidt, & Csikszentmihalyi, 2007), ecological momentary assessment (Stone & Shiffman, 1994), or the diary method (Bolger, Davis, & Rafaeli, 2003). Although variations within this set of methods exist, the basic idea behind all of them is to collect in-the-moment (or close-to-the-moment) self-report data directly from people as they go about their daily lives. This is typically accomplished by asking participants repeatedly (e.g., five times per day) over a period of time (e.g., a week) to report on their current thoughts and feelings. The momentary questionnaires often ask about their location (e.g., “Where are you now?”), social environment (e.g., “With whom are you now?”), activity (e.g., “What are you currently doing?”), and experiences (e.g., “How are you feeling?”). That way, researchers get a snapshot of what was going on in participants’ lives at the time at which they were asked to report.
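To give a rough sense of how such a prompting schedule might be generated, here is a minimal sketch, assuming five prompts per day for a week within a 9:00–21:00 waking window. It is a hypothetical illustration, not the actual protocol of any study cited in this module:

```python
# Minimal sketch of a signal-contingent experience-sampling schedule:
# five prompts per day for a week, each at a random time within an
# assumed 9:00-21:00 waking window. Hypothetical illustration only.
import random
from datetime import datetime, timedelta

WAKING_START_HOUR = 9
WAKING_END_HOUR = 21
PROMPTS_PER_DAY = 5
N_DAYS = 7

def daily_prompt_times(day_start):
    """Draw PROMPTS_PER_DAY random prompt times within the waking window."""
    window_minutes = (WAKING_END_HOUR - WAKING_START_HOUR) * 60
    offsets = sorted(random.sample(range(window_minutes), PROMPTS_PER_DAY))
    return [day_start + timedelta(hours=WAKING_START_HOUR, minutes=m)
            for m in offsets]

study_start = datetime(2024, 1, 8)  # arbitrary example start date
for day in range(N_DAYS):
    prompts = daily_prompt_times(study_start + timedelta(days=day))
    print(f"Day {day + 1}:", ", ".join(t.strftime("%H:%M") for t in prompts))
```

The key design choice this sketch illustrates is that prompt times are random within each day, so participants cannot anticipate (and prepare for) the moment of assessment.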
Technology has made this sort of research possible, and recent technological advances have expanded the tools researchers can easily use. Initially, participants wore electronic wristwatches that beeped at preprogrammed but seemingly random times, at which they completed one of a stack of provided paper questionnaires. With the mobile computing revolution, both the prompting and the questionnaire completion were gradually replaced by handheld devices such as smartphones. Being able to collect the momentary questionnaires digitally and time-stamped (i.e., having a record of exactly when participants responded) had major methodological and practical advantages and contributed to experience sampling going mainstream (Conner, Tennen, Fleeson, & Barrett, 2009).
Over time, experience sampling and related momentary self-report methods have become very popular, and, by now, they are effectively the gold standard for studying daily life. They have helped make progress in almost all areas of psychology (Mehl & Conner, 2012). These methods yield many measurements from many participants, which has further inspired the development of novel statistical methods (Bolger & Laurenceau, 2013). Finally, and maybe most importantly, they accomplished what they set out to accomplish: to bring attention to what psychology ultimately wants and needs to know about, namely “what people actually do, think, and feel in the various contexts of their lives” (Funder, 2001, p. 213). In short, these approaches have allowed researchers to do research that is more externally valid, or more generalizable to real life, than the traditional laboratory experiment.
To illustrate these techniques, consider a classic study by Stone, Reed, and Neale (1987), who tracked positive and negative experiences surrounding a respiratory infection using daily experience sampling. They found that undesirable experiences peaked and desirable ones dipped about four to five days prior to participants coming down with the cold. More recently, Killingsworth and Gilbert (2010) collected momentary self-reports from more than 2,000 participants via a smartphone app. They found that participants were less happy when their mind was in an idling, mind-wandering state, such as surfing the Internet or multitasking at work, than when it was in an engaged, task-focused one, such as working diligently on a paper. These are just two examples that illustrate how experience-sampling studies have yielded findings that could not be obtained with traditional laboratory methods.
Recently, the day reconstruction method (DRM) (Kahneman, Krueger, Schkade, Schwarz, & Stone, 2004) has been developed to obtain information about a person’s daily experiences without going through the burden of collecting momentary experience-sampling data. In the DRM, participants report their experiences of a given day retrospectively after engaging in a systematic, experiential reconstruction of the day on the following day. As a participant in this type of study, you might look back on yesterday, divide it up into a series of episodes such as “made breakfast,” “drove to work,” “had a meeting,” etc. You might then report who you were with in each episode and how you felt in each. This approach has shed light on what situations lead to moments of positive and negative mood throughout the course of a normal day.
Studying Daily Behavior
Experience sampling is often used to study everyday behavior (i.e., daily social interactions and activities). In the laboratory, behavior is best studied using direct behavioral observation (e.g., video recordings). In the real world, this is, of course, much more difficult. As Funder put it, it seems it would require a “detective’s report [that] would specify in exact detail everything the participant said and did, and with whom, in all of the contexts of the participant’s life” (Funder, 2007, p. 41).
As difficult as this may seem, Mehl and colleagues have developed a naturalistic observation methodology that is similar in spirit. Rather than following participants—like a detective—with a video camera (see Craik, 2000), they equip participants with a portable audio recorder that is programmed to periodically record brief snippets of ambient sounds (e.g., 30 seconds every 12 minutes). Participants carry the recorder (originally a microcassette recorder, now a smartphone app) on them as they go about their days and return it at the end of the study. The recorder provides researchers with a series of sound bites that, together, amount to an acoustic diary of participants’ days as they naturally unfold—and that constitute a representative sample of their daily activities and social encounters. Because it is somewhat similar to having the researcher’s ear at the participant’s lapel, they called their method the electronically activated recorder, or EAR (Mehl, Pennebaker, Crow, Dabbs, & Price, 2001). The ambient sound recordings can be coded for many things, including participants’ locations (e.g., at school, in a coffee shop), activities (e.g., watching TV, eating), interactions (e.g., in a group, on the phone), and emotional expressions (e.g., laughing, sighing). As unnatural or intrusive as it might seem, participants report that they quickly grow accustomed to the EAR and say they soon find themselves behaving as they normally would.
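A quick back-of-the-envelope calculation shows how light-touch this sampling scheme is. Only the 30-seconds-every-12-minutes figure comes from the description above; the 16-hour waking day is an illustrative assumption:

```python
# Back-of-the-envelope arithmetic for the EAR sampling scheme described
# above (30-second snippets every 12 minutes). The 16-hour waking day is
# an illustrative assumption, not a figure from the chapter.
SNIPPET_SECONDS = 30
INTERVAL_MINUTES = 12
WAKING_HOURS = 16

snippets_per_day = WAKING_HOURS * 60 // INTERVAL_MINUTES      # 80 snippets
audio_minutes = snippets_per_day * SNIPPET_SECONDS / 60       # 40 minutes
coverage = SNIPPET_SECONDS / (INTERVAL_MINUTES * 60)          # ~4 percent

print(f"{snippets_per_day} snippets/day -> {audio_minutes:.0f} min of audio "
      f"({coverage:.1%} of waking time)")
```

Under these assumptions, the EAR captures only about 4% of a participant’s waking day, which helps explain why participants report quickly forgetting about the device.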
In a cross-cultural study, Ramírez-Esparza and her colleagues used the EAR method to study sociability in the United States and Mexico. Interestingly, they found that although American participants rated themselves significantly higher than Mexicans on the question, “I see myself as a person who is talkative,” they actually spent almost 10 percent less time talking than Mexicans did (Ramírez-Esparza, Mehl, Álvarez Bermúdez, & Pennebaker, 2009). In a similar way, Mehl and his colleagues used the EAR method to debunk the long-standing myth that women are considerably more talkative than men. Using data from six different studies, they showed that both sexes use on average about 16,000 words per day. The estimated sex difference of 546 words was trivial compared to the immense range of more than 46,000 words between the least and most talkative individual (695 versus 47,016 words; Mehl, Vazire, Ramírez-Esparza, Slatcher, & Pennebaker, 2007). Together, these studies demonstrate how naturalistic observation can be used to study objective aspects of daily behavior and how it can yield findings quite different from what other methods yield (Mehl, Robbins, & Deters, 2012).
A series of other methods and creative ways for assessing behavior directly and unobtrusively in the real world are described in a seminal book on real-world, subtle measures (Webb, Campbell, Schwartz, Sechrest, & Grove, 1981). For example, researchers have used time-lapse photography to study the flow of people and the use of space in urban public places (Whyte, 1980). More recently, they have observed people’s personal (e.g., dorm rooms) and professional (e.g., offices) spaces to understand how personality is expressed and detected in everyday environments (Gosling, Ko, Mannarelli, & Morris, 2002). They have even systematically collected and analyzed people’s garbage to measure what people actually consume (e.g., empty alcohol bottles or cigarette boxes) rather than what they say they consume (Rathje & Murphy, 2001). Because people often cannot and sometimes may not want to accurately report what they do, the direct—and ideally nonreactive—assessment of real-world behavior is of high importance for psychological research (Baumeister, Vohs, & Funder, 2007).
Studying Daily Physiology
In addition to studying how people think, feel, and behave in the real world, researchers are also interested in how our bodies respond to the fluctuating demands of our lives. What are the daily experiences that make our “blood boil”? How do our neurotransmitters and hormones respond to the stressors we encounter in our lives? What physiological reactions do we show to being loved—or getting ostracized? You can see how studying these powerful experiences in real life, as they actually happen, may provide richer and more informative data than one might obtain in an artificial laboratory setting that merely mimics these experiences.
Also, in pursuing these questions, it is important to keep in mind that what is stressful, engaging, or boring for one person might not be so for another. It is, in part, for this reason that researchers have found only limited correspondence between how people respond physiologically to a standardized laboratory stressor (e.g., giving a speech) and how they respond to stressful experiences in their lives. To give an example, Wilhelm and Grossman (2010) describe a participant who showed rather minimal heart rate increases in response to a laboratory stressor (about five to 10 beats per minute) but quite dramatic increases (almost 50 beats per minute) later in the afternoon while watching a soccer game. Of course, the reverse pattern can happen as well, such as when patients have high blood pressure in the doctor’s office but not in their home environment—the so-called white coat hypertension (White, Schulman, McCabe, & Dey, 1989).
Ambulatory physiological monitoring – that is, monitoring physiological reactions as people go about their daily lives – has a long history in biomedical research and an array of monitoring devices exist (Fahrenberg & Myrtek, 1996). Among the biological signals that can now be measured in daily life with portable signal recording devices are the electrocardiogram (ECG), blood pressure, electrodermal activity (or “sweat response”), body temperature, and even the electroencephalogram (EEG) (Wilhelm & Grossman, 2010). Most recently, researchers have added ambulatory assessment of hormones (e.g., cortisol) and other biomarkers (e.g., immune markers) to the list (Schlotz, 2012). The development of ever more sophisticated ways to track what goes on underneath our skins as we go about our lives is a fascinating and rapidly advancing field.
In a recent study, Lane, Zareba, Reis, Peterson, and Moss (2011) used experience sampling combined with ambulatory electrocardiography (a so-called Holter monitor) to study how emotional experiences can alter cardiac function in patients with a congenital heart abnormality (e.g., long QT syndrome). Consistent with the idea that emotions may, in some cases, be able to trigger a cardiac event, they found that typical—in most cases even relatively low-intensity—daily emotions had a measurable effect on ventricular repolarization, an important cardiac indicator that, in these patients, is linked to risk of a cardiac event. In another study, Smyth and colleagues (1998) combined experience sampling with momentary assessment of cortisol, a stress hormone. They found that momentary reports of current or even anticipated stress predicted increased cortisol secretion 20 minutes later. Further, and independent of that, the experience of other kinds of negative affect (e.g., anger, frustration) also predicted higher levels of cortisol and the experience of positive affect (e.g., happy, joyful) predicted lower levels of this important stress hormone. Taken together, these studies illustrate how researchers can use ambulatory physiological monitoring to study how the little—and seemingly trivial or inconsequential—experiences in our lives leave objective, measurable traces in our bodily systems.
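To illustrate the kind of time-lagged analysis such designs involve, the sketch below pairs each momentary stress report with the cortisol sample taken closest to 20 minutes later. The toy data and column names are invented, and this is not the analysis code of Smyth et al.; it simply shows the lagging logic, assuming the pandas library:

```python
# Hypothetical sketch of a lagged ambulatory analysis: pair each momentary
# stress report with the cortisol sample taken closest to 20 minutes later.
# Toy data and column names are invented for illustration.
import pandas as pd

reports = pd.DataFrame({
    "time": pd.to_datetime(["09:00", "12:00", "15:00", "18:00"]),
    "stress": [2, 5, 3, 4],                # momentary stress ratings (0-6)
})
samples = pd.DataFrame({
    "time": pd.to_datetime(["09:25", "12:18", "15:22", "18:19"]),
    "cortisol": [8.1, 14.6, 9.3, 11.0],    # nmol/L, illustrative values
})

# Shift each report 20 minutes forward, then match it to the nearest sample.
reports["target_time"] = reports["time"] + pd.Timedelta(minutes=20)
lagged = pd.merge_asof(
    reports.sort_values("target_time"), samples.sort_values("time"),
    left_on="target_time", right_on="time",
    direction="nearest", suffixes=("_report", "_sample"),
)
print(lagged[["stress", "cortisol"]].corr())
```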
Studying Online Behavior
Another domain of daily life that has only recently emerged is virtual daily behavior, or how people act and interact with others on the Internet. Irrespective of whether social media will turn out to be humanity’s blessing or curse (both scientists and laypeople are currently divided over this question), the fact is that people are spending an ever increasing amount of time online. In light of that, researchers are beginning to take virtual behavior as seriously as “actual” behavior and to make it a legitimate target of their investigations (Gosling & Johnson, 2010).
One way to study virtual behavior is to make use of the fact that most of what people do on the Web—emailing, chatting, tweeting, blogging, posting—leaves direct (and permanent) verbal traces. For example, differences in the ways in which people use words (e.g., subtle preferences in word choice) have been found to carry a lot of psychological information (Pennebaker, Mehl, & Niederhoffer, 2003). Therefore, a good way to study virtual social behavior is to study virtual language behavior. Researchers can download people’s—often public—verbal expressions and communications and analyze them using modern text analysis programs (e.g., Pennebaker, Booth, & Francis, 2007).
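At their core, dictionary-based text analysis programs of this kind count words against category word lists. The sketch below is a minimal illustration in that spirit; the categories and word lists are invented for the example, and this is not the LIWC program or its actual dictionary:

```python
# Minimal sketch of dictionary-based word counting in the spirit of text
# analysis programs such as LIWC. Categories and word lists are invented
# for illustration; this is not LIWC or its actual dictionary.
import re
from collections import Counter

CATEGORIES = {
    "positive_emotion": {"happy", "good", "love", "nice", "joy"},
    "negative_emotion": {"sad", "angry", "hate", "hurt", "worried"},
    "cognitive": {"think", "know", "question", "because", "why"},
}

def category_rates(text):
    """Return each category's share of the total word count."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for word in words:
        for category, vocabulary in CATEGORIES.items():
            if word in vocabulary:
                counts[category] += 1
    total = len(words) or 1
    return {category: counts[category] / total for category in CATEGORIES}

print(category_rates("I think I know why I was sad, but today I feel happy."))
```

Tracking how such category rates rise and fall over time is essentially what the 9/11 blog study described next did, on a much larger scale and with a validated dictionary.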
For example, Cohn, Mehl, and Pennebaker (2004) downloaded blogs of more than a thousand users of livejournal.com, one of the first Internet blogging sites, to study how people responded socially and emotionally to the attacks of September 11, 2001. In going “the online route,” they could bypass a critical limitation of coping research, the inability to obtain baseline information; that is, how people were doing before the traumatic event occurred. Through access to the database of public blogs, they downloaded entries from two months prior to two months after the attacks. Their linguistic analyses revealed that in the first days after the attacks, participants, as expected, expressed more negative emotions and were more cognitively and socially engaged, asking questions and sending messages of support. After only two weeks, though, their moods and social engagement returned to baseline, and, interestingly, their use of cognitive-analytic words (e.g., “think,” “question”) even dropped below their normal level. Over the next six weeks, their mood hovered around their pre-9/11 baseline, but both their social engagement and cognitive-analytic processing stayed remarkably low. This suggests a social and cognitive weariness in the aftermath of the attacks. In using virtual verbal behavior as a marker of psychological functioning, this study was able to draw a fine timeline of how humans cope with disasters.
Reflecting the rapidly growing real-world importance of social networking sites such as Facebook, researchers are now beginning to investigate behavior on them (Wilson, Gosling, & Graham, 2012). Most research looks at psychological correlates of online behavior such as personality traits and the quality of one’s social life but, importantly, there are also first attempts to export traditional experimental research designs into an online setting. In a pioneering study of online social influence, Bond and colleagues (2012) experimentally tested the effects that peer feedback has on voting behavior. Remarkably, their sample consisted of 61 million (!) Facebook users. They found that online political-mobilization messages (e.g., “I voted” accompanied by selected pictures of their Facebook friends) influenced real-world voting behavior. This was true not just for users who saw the messages but also for their friends and friends of their friends. Although the intervention effect on a single user was very small, through the enormous number of users and indirect social contagion effects, it resulted cumulatively in an estimated 340,000 additional votes—enough to tilt a close election. In short, although still in its infancy, research on virtual daily behavior is bound to change social science, and it has already helped us better understand both virtual and “actual” behavior.
“Smartphone Psychology”?
A review of research methods for studying daily life would not be complete without a vision of “what’s next.” Given how common they have become, it is safe to predict that smartphones will not just remain devices for everyday online communication but will also become devices for scientific data collection and intervention (Kaplan & Stone, 2013; Yarkoni, 2012). These devices automatically store vast amounts of real-world user interaction data, and, in addition, they are equipped with sensors to track the physical (e.g., location, position) and social (e.g., wireless connections around the phone) context of these interactions. Miller (2012, p. 234) states, “The question is not whether smartphones will revolutionize psychology but how, when, and where the revolution will happen.” Obviously, their immense potential for data collection also brings with it big new challenges for researchers (e.g., privacy protection, data analysis, and synthesis). Yet it is clear that many of the methods described in this module—and many yet-to-be-developed ways of collecting real-world data—will, in the future, become integrated into the devices that people naturally and happily carry with them from the moment they get up in the morning to the moment they go to bed.
Conclusion
This module sought to make a case for psychology research conducted outside the lab. If the ultimate goal of the social and behavioral sciences is to explain human behavior, then researchers must also—in addition to conducting carefully controlled lab studies—deal with the “messy” real world and find ways to capture life as it naturally happens.
Mortensen and Cialdini (2010) refer to the dynamic give-and-take between laboratory and field research as “full-cycle psychology”. Going full cycle, they suggest, means that “researchers use naturalistic observation to determine an effect’s presence in the real world, theory to determine what processes underlie the effect, experimentation to verify the effect and its underlying processes, and a return to the natural environment to corroborate the experimental findings” (Mortensen & Cialdini, 2010, p. 53). To accomplish this, researchers have access to a toolbox of research methods for studying daily life that is now more diverse and more versatile than it has ever been before. So, all it takes is to go ahead and—literally—bring science to life.
Check Your Knowledge
To help you with your studying, we’ve included some practice questions for this module. These questions do not necessarily address all content in this module. They are intended as practice, and you are responsible for all of the content in this module even if there is no associated practice question. To promote deeper engagement with the material, we encourage you to create some questions of your own for your practice. You can then also return to these self-generated questions later in the course to test yourself.
Vocabulary
Ambulatory assessment
An overarching term to describe methodologies that assess the behavior, physiology, experience, and environments of humans in naturalistic settings.
Daily diary method
A methodology where participants complete a questionnaire about their thoughts, feelings, and behavior of the day at the end of the day.
Day reconstruction method (DRM)
A methodology where participants describe their experiences and behavior of a given day retrospectively, through a systematic reconstruction of that day on the following day.
Ecological momentary assessment
An overarching term to describe methodologies that repeatedly sample participants’ real-world experiences, behavior, and physiology in real time.
Ecological validity
The degree to which a study finding has been obtained under conditions that are typical for what happens in everyday life.
Electronically activated recorder, or EAR
A methodology where participants wear a small, portable audio recorder that intermittently records snippets of ambient sounds around them.
Experience-sampling method
A methodology where participants report on their momentary thoughts, feelings, and behaviors at different points in time over the course of a day.
External validity
The degree to which a finding generalizes from the specific sample and context of a study to some larger population and broader settings.
Full-cycle psychology
A scientific approach whereby researchers start with an observational field study to identify an effect in the real world, follow up with laboratory experimentation to verify the effect and isolate the causal mechanisms, and return to field research to corroborate their experimental findings.
Generalize
Generalizing, in science, refers to the ability to arrive at broad conclusions based on a smaller sample of observations. For these conclusions to be true the sample should accurately represent the larger population from which it is drawn.
Internal validity
The degree to which a cause-effect relationship between two variables has been unambiguously established.
Linguistic Inquiry and Word Count
A quantitative text analysis methodology that automatically extracts grammatical and psychological information from a text by counting word frequencies.
Lived day analysis
A methodology where a research team follows an individual around with a video camera to objectively document a person’s daily life as it is lived.
White coat hypertension
A phenomenon in which patients exhibit elevated blood pressure in the hospital or doctor’s office but not in their everyday lives.
References
- Baumeister, R. F., Vohs, K. D., & Funder, D. C. (2007). Psychology as the science of self-reports and finger movements: Whatever happened to actual behavior? Perspectives on Psychological Science, 2, 396–403.
- Bolger, N., & Laurenceau, J-P. (2013). Intensive longitudinal methods: An introduction to diary and experience sampling research. New York, NY: Guilford Press.
- Bolger, N., Davis, A., & Rafaeli, E. (2003). Diary methods: Capturing life as it is lived. Annual Review of Psychology, 54, 579–616.
- Bond, R. M., Jones, J. J., Kramer, A. D., Marlow, C., Settle, J. E., & Fowler, J. H. (2012). A 61 million-person experiment in social influence and political mobilization. Nature, 489, 295–298.
- Brewer, M. B. (2000). Research design and issues of validity. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social psychology (pp. 3–16). New York, NY: Cambridge University Press.
- Cohn, M. A., Mehl, M. R., & Pennebaker, J. W. (2004). Linguistic indicators of psychological change after September 11, 2001. Psychological Science, 15, 687–693.
- Conner, T. S., Tennen, H., Fleeson, W., & Barrett, L. F. (2009). Experience sampling methods: A modern idiographic approach to personality research. Social and Personality Psychology Compass, 3, 292–313.
- Craik, K. H. (2000). The lived day of an individual: A person-environment perspective. In W. B. Walsh, K. H. Craik, & R. H. Price (Eds.), Person-environment psychology: New directions and perspectives (pp. 233–266). Mahwah, NJ: Lawrence Erlbaum Associates.
- Fahrenberg, J., & Myrtek, M. (Eds.) (1996). Ambulatory assessment: Computer-assisted psychological and psychophysiological methods in monitoring and field studies. Seattle, WA: Hogrefe & Huber.
- Funder, D. C. (2007). The personality puzzle. New York, NY: W. W. Norton & Co.
- Funder, D. C. (2001). Personality. Annual Review of Psychology, 52, 197–221.
- Gosling, S. D., & Johnson, J. A. (2010). Advanced methods for conducting online behavioral research. Washington, DC: American Psychological Association.
- Gosling, S. D., Ko, S. J., Mannarelli, T., & Morris, M. E. (2002). A room with a cue: Personality judgments based on offices and bedrooms. Journal of Personality and Social Psychology, 82, 379–398.
- Hektner, J. M., Schmidt, J. A., & Csikszentmihalyi, M. (2007). Experience sampling method: Measuring the quality of everyday life. Thousand Oaks, CA: Sage.
- Kahneman, D., Krueger, A., Schkade, D., Schwarz, N., & Stone, A. (2004). A survey method for characterizing daily life experience: The Day Reconstruction Method. Science, 306, 1776–1780.
- Kaplan, R. M., & Stone A. A. (2013). Bringing the laboratory and clinic to the community: Mobile technologies for health promotion and disease prevention. Annual Review of Psychology, 64, 471-498.
- Killingsworth, M. A., & Gilbert, D. T. (2010). A wandering mind is an unhappy mind. Science, 330, 932.
- Lane, R. D., Zareba, W., Reis, H., Peterson, D., & Moss, A. (2011). Changes in ventricular repolarization duration during typical daily emotion in patients with Long QT Syndrome. Psychosomatic Medicine, 73, 98–105.
- Lewin, K. (1944). Constructs in psychology and psychological ecology. University of Iowa Studies in Child Welfare, 20, 23–27.
- Mehl, M. R., & Conner, T. S. (Eds.) (2012). Handbook of research methods for studying daily life. New York, NY: Guilford Press.
- Mehl, M. R., Pennebaker, J. W., Crow, M., Dabbs, J., & Price, J. (2001). The electronically activated recorder (EAR): A device for sampling naturalistic daily activities and conversations. Behavior Research Methods, Instruments, and Computers, 33, 517–523.
- Mehl, M. R., Robbins, M. L., & Deters, G. F. (2012). Naturalistic observation of health-relevant social processes: The electronically activated recorder (EAR) methodology in psychosomatics. Psychosomatic Medicine, 74, 410–417.
- Mehl, M. R., Vazire, S., Ramírez-Esparza, N., Slatcher, R. B., & Pennebaker, J. W. (2007). Are women really more talkative than men? Science, 317, 82.
- Miller, G. (2012). The smartphone psychology manifesto. Perspectives on Psychological Science, 7, 221–237.
- Mortensen, C. R., & Cialdini, R. B. (2010). Full-cycle social psychology for theory and application. Social and Personality Psychology Compass, 4, 53–63.
- Pennebaker, J. W., Mehl, M. R., & Niederhoffer, K. (2003). Psychological aspects of natural language use: Our words, our selves. Annual Review of Psychology, 54, 547–577.
- Ramírez-Esparza, N., Mehl, M. R., Álvarez Bermúdez, J., & Pennebaker, J. W. (2009). Are Mexicans more or less sociable than Americans? Insights from a naturalistic observation study. Journal of Research in Personality, 43, 1–7.
- Rathje, W., & Murphy, C. (2001). Rubbish! The archaeology of garbage. New York, NY: Harper Collins.
- Reis, H. T., & Gosling, S. D. (2010). Social psychological methods outside the laboratory. In S. T. Fiske, D. T. Gilbert, & G. Lindzey, (Eds.), Handbook of social psychology (5th ed., Vol. 1, pp. 82–114). New York, NY: Wiley.
- Sapolsky, R. (2004). Why zebras don’t get ulcers: A guide to stress, stress-related diseases and coping. New York, NY: Henry Holt and Co.
- Schlotz, W. (2012). Ambulatory psychoneuroendocrinology: Assessing salivary cortisol and other hormones in daily life. In M. R. Mehl & T. S. Conner (Eds.), Handbook of research methods for studying daily life (pp. 193–209). New York, NY: Guilford Press.
- Smyth, J., Ockenfels, M. C., Porter, L., Kirschbaum, C., Hellhammer, D. H., & Stone, A. A. (1998). Stressors and mood measured on a momentary basis are associated with salivary cortisol secretion. Psychoneuroendocrinology, 23, 353–370.
- Stone, A. A., & Shiffman, S. (1994). Ecological momentary assessment (EMA) in behavioral medicine. Annals of Behavioral Medicine, 16, 199–202.
- Stone, A. A., Reed, B. R., & Neale, J. M. (1987). Changes in daily event frequency precede episodes of physical symptoms. Journal of Human Stress, 13, 70–74.
- Webb, E. J., Campbell, D. T., Schwartz, R. D., Sechrest, L., & Grove, J. B. (1981). Nonreactive measures in the social sciences. Boston, MA: Houghton Mifflin Co.
- White, W. B., Schulman, P., McCabe, E. J., & Dey, H. M. (1989). Average daily blood pressure, not office blood pressure, determines cardiac function in patients with hypertension. Journal of the American Medical Association, 261, 873–877.
- Whyte, W. H. (1980). The social life of small urban spaces. Washington, DC: The Conservation Foundation.
- Wilhelm, F. H., & Grossman, P. (2010). Emotions beyond the laboratory: Theoretical fundaments, study design, and analytic strategies for advanced ambulatory assessment. Biological Psychology, 84, 552–569.
- Wilhelm, P., Perrez, M., & Pawlik, K. (2012). Conducting research in daily life: A historical review. In M. R. Mehl & T. S. Conner (Eds.), Handbook of research methods for studying daily life. New York, NY: Guilford Press.
- Wilson, R. E., Gosling, S. D., & Graham, L. T. (2012). A review of Facebook research in the social sciences. Perspectives on Psychological Science, 7, 203–220.
- Yarkoni, T. (2012). Psychoinformatics: New horizons at the interface of the psychological and computing sciences. Current Directions in Psychological Science, 21, 391–397.
How to cite this Chapter using APA Style:
Mehl, M. R. (2019). Conducting psychology research in the real world. Adapted for use by Queen’s University. Original chapter in R. Biswas-Diener & E. Diener (Eds), Noba textbook series: Psychology. Champaign, IL: DEF publishers. Retrieved from http://noba.to/hsfe5k3d
Copyright and Acknowledgment:
This material is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit: http://creativecommons.org/licenses/by-nc-sa/4.0/deed.en_US.
This material is attributed to the Diener Education Fund (copyright © 2018) and can be accessed via this link: http://noba.to/hsfe5k3d.
Additional information about the Diener Education Fund (DEF) can be accessed here.
4 The Nature–Nurture Question
Original chapter by Eric Turkheimer adapted by the Queen’s University Psychology Department
This Open Access chapter was originally written for the NOBA project. Information on the NOBA project can be found below.
People have a deep intuition about what has been called the “nature–nurture question.” Some aspects of our behavior feel as though they originate in our genetic makeup, while others feel like the result of our upbringing or our own hard work. The scientific field of behavior genetics attempts to study these differences empirically, either by examining similarities among family members with different degrees of genetic relatedness, or, more recently, by studying differences in the DNA of people with different behavioral traits. The scientific methods that have been developed are ingenious, but often inconclusive. Many of the difficulties encountered in the empirical science of behavior genetics turn out to be conceptual, and our intuitions about nature and nurture get more complicated the harder we think about them. In the end, it is an oversimplification to ask how “genetic” some particular behavior is. Genes and environments always combine to produce behavior, and the real science is in the discovery of how they combine for a given behavior.
Learning Objectives
- Understand what the nature–nurture debate is and why the problem fascinates us.
- Understand why nature–nurture questions are difficult to study empirically.
- Know the major research designs that can be used to study nature–nurture questions.
- Appreciate the complexities of nature–nurture and why questions that seem simple turn out not to have simple answers.
Introduction
There are three related problems at the intersection of philosophy and science that are fundamental to our understanding of our relationship to the natural world: the mind–body problem, the free will problem, and the nature–nurture problem. These great questions have a lot in common. Everyone, even those without much knowledge of science or philosophy, has opinions about the answers to these questions that come simply from observing the world we live in. Our feelings about our relationship with the physical and biological world often seem incomplete. We are in control of our actions in some ways, but at the mercy of our bodies in others; it feels obvious that our consciousness is some kind of creation of our physical brains, at the same time we sense that our awareness must go beyond just the physical. This incomplete knowledge of our relationship with nature leaves us fascinated and a little obsessed, like a cat that climbs into a paper bag and then out again, over and over, mystified every time by a relationship between inner and outer that it can see but can’t quite understand.
It may seem obvious that we are born with certain characteristics while others are acquired, and yet of the three great questions about humans’ relationship with the natural world, only nature–nurture gets referred to as a “debate.” In the history of psychology, no other question has caused so much controversy and offense: We are so concerned with nature–nurture because our very sense of moral character seems to depend on it. While we may admire the athletic skills of a great basketball player, we think of his height as simply a gift, a payoff in the “genetic lottery.” For the same reason, no one blames a short person for his height, or blames someone’s congenital disability on poor decisions: To state the obvious, it’s “not their fault.” But we do praise the concert violinist (and perhaps her parents and teachers as well) for her dedication, just as we condemn cheaters, slackers, and bullies for their bad behavior.
The problem is, most human characteristics aren’t usually as clear-cut as height or instrument mastery, affirming our nature–nurture expectations strongly one way or the other. In fact, even the great violinist might have some inborn qualities—perfect pitch, or long, nimble fingers—that support and reward her hard work. And the basketball player might have eaten a diet while growing up that promoted his genetic tendency for being tall. When we think about our own qualities, they seem under our control in some respects, yet beyond our control in others. And often the traits that don’t seem to have an obvious cause are the ones that concern us the most and are far more personally significant. What about how much we drink or worry? What about our honesty, or religiosity, or sexual orientation? They all come from that uncertain zone, neither fixed by nature nor totally under our own control.
One major problem with answering nature-nurture questions about people is, how do you set up an experiment? In nonhuman animals, there are relatively straightforward experiments for tackling nature–nurture questions. Say, for example, you are interested in aggressiveness in dogs. You want to test for the more important determinant of aggression: being born to aggressive dogs or being raised by them. You could mate two aggressive dogs—angry Chihuahuas—together, and mate two nonaggressive dogs—happy beagles—together, then switch half the puppies from each litter between the different sets of parents to raise. You would then have puppies born to aggressive parents (the Chihuahuas) but raised by nonaggressive parents (the beagles), and vice versa, in litters that mirror each other in puppy distribution. The big questions are: Would the Chihuahua parents raise aggressive beagle puppies? Would the beagle parents raise nonaggressive Chihuahua puppies? Would the puppies’ nature win out, regardless of who raised them? Or... would the result be a combination of nature and nurture? Much of the most significant nature–nurture research has been done in this way (Scott & Fuller, 1998), and animal breeders have been doing it successfully for thousands of years. In fact, it is fairly easy to breed animals for behavioral traits.
With people, however, we can’t assign babies to parents at random, or select parents with certain behavioral characteristics to mate, merely in the interest of science (though history does include horrific examples of such practices, in misguided attempts at “eugenics,” the shaping of human characteristics through intentional breeding). In typical human families, children’s biological parents raise them, so it is very difficult to know whether children act like their parents due to genetic (nature) or environmental (nurture) reasons. Nevertheless, despite our restrictions on setting up human-based experiments, we do see real-world examples of nature-nurture at work in the human sphere—though they only provide partial answers to our many questions.
The science of how genes and environments work together to influence behavior is called behavioral genetics. The easiest opportunity we have to observe this is the adoption study. When children are put up for adoption, the parents who give birth to them are no longer the parents who raise them. This setup isn’t quite the same as the experiments with dogs (children aren’t assigned to random adoptive parents in order to suit the particular interests of a scientist) but adoption still tells us some interesting things, or at least confirms some basic expectations. For instance, if the biological child of tall parents were adopted into a family of short people, do you suppose the child’s growth would be affected? What about the biological child of a Spanish-speaking family adopted at birth into an English-speaking family? What language would you expect the child to speak? And what might these outcomes tell you about the difference between height and language in terms of nature-nurture?
Another option for observing nature-nurture in humans involves twin studies. There are two types of twins: monozygotic (MZ) and dizygotic (DZ). Monozygotic twins, also called “identical” twins, result from a single zygote (fertilized egg) and have the same DNA. They are essentially clones. Dizygotic twins, also known as “fraternal” twins, develop from two zygotes and share 50% of their DNA. Fraternal twins are ordinary siblings who happen to have been born at the same time. To analyze nature–nurture using twins, we compare the similarity of MZ and DZ pairs. Sticking with the features of height and spoken language, let’s take a look at how nature and nurture apply: Identical twins, unsurprisingly, are almost perfectly similar for height. The heights of fraternal twins, however, are like any other sibling pairs: more similar to each other than to people from other families, but hardly identical. This contrast between twin types gives us a clue about the role genetics plays in determining height. Now consider spoken language. If one identical twin speaks Spanish at home, the co-twin with whom she is raised almost certainly does too. But the same would be true for a pair of fraternal twins raised together. In terms of spoken language, fraternal twins are just as similar as identical twins, so it appears that the genetic match of identical twins doesn’t make much difference.
Twin and adoption studies are two instances of a much broader class of methods for observing nature-nurture called quantitative genetics, the scientific discipline in which similarities among individuals are analyzed based on how biologically related they are. We can do these studies with siblings and half-siblings, cousins, twins who have been separated at birth and raised separately (Bouchard, Lykken, McGue, & Segal, 1990; such twins are very rare and play a smaller role than is commonly believed in the science of nature–nurture), or with entire extended families (see Plomin, DeFries, Knopik, & Neiderhiser, 2012, for a complete introduction to research methods relevant to nature–nurture).
For better or for worse, contentions about nature–nurture have intensified because quantitative genetics produces a number called a heritability coefficient, varying from 0 to 1, that is meant to provide a single measure of genetics’ influence on a trait. In a general way, a heritability coefficient measures how strongly differences among individuals are related to differences among their genes. But beware: Heritability coefficients, although simple to compute, are deceptively difficult to interpret. Nevertheless, numbers that provide simple answers to complicated questions tend to have a strong influence on the human imagination, and a great deal of time has been spent discussing whether the heritability of intelligence or personality or depression is equal to one number or another.
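To make the coefficient concrete, one standard textbook estimator from twin data, often called Falconer’s formula, doubles the difference between the two twin-type correlations (the formula and the numbers below are generic teaching values, not results from this chapter):

$$h^2 = 2\,(r_{MZ} - r_{DZ})$$

Here $r_{MZ}$ and $r_{DZ}$ are the within-pair correlations for identical and fraternal twins, respectively. With illustrative height correlations of $r_{MZ} \approx .90$ and $r_{DZ} \approx .45$, the estimate is $h^2 \approx .90$; for a trait on which the two twin types are equally similar, such as spoken language or the “two-armedness” example discussed below, $r_{MZ} = r_{DZ}$ and the estimate is $h^2 = 0$.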
One reason nature–nurture continues to fascinate us so much is that we live in an era of great scientific discovery in genetics, comparable to the times of Copernicus, Galileo, and Newton, with regard to astronomy and physics. Every day, it seems, new discoveries are made, new possibilities proposed. When Francis Galton first started thinking about nature–nurture in the late 19th century he was very influenced by his cousin, Charles Darwin, but genetics per se was unknown. Mendel’s famous work with peas, conducted at about the same time, went largely unnoticed for more than 30 years; quantitative genetics was developed in the 1920s; the structure of DNA was discovered by Watson and Crick in the 1950s; the human genome was completely sequenced at the turn of the 21st century; and we are now on the verge of being able to obtain the specific DNA sequence of anyone at a relatively low cost. No one knows what this new genetic knowledge will mean for the study of nature–nurture, but as we will see in the next section, answers to nature–nurture questions have turned out to be far more difficult and mysterious than anyone imagined.
What Have We Learned About Nature–Nurture?
It would be satisfying to be able to say that nature–nurture studies have given us conclusive and complete evidence about where traits come from, with some traits clearly resulting from genetics and others almost entirely from environmental factors, such as childrearing practices and personal will; but that is not the case. Instead, everything has turned out to have some footing in genetics. The more genetically related people are, the more similar they are—for everything: height, weight, intelligence, personality, mental illness, etc. Sure, it seems like common sense that some traits have a genetic basis. For example, adopted children resemble their biological parents even if they have never met them, and identical twins are more similar to each other than are fraternal twins. And while certain psychological traits, such as personality or mental illness (e.g., schizophrenia), seem reasonably influenced by genetics, it turns out that the same is true for political attitudes, how much television people watch (Plomin, Corley, DeFries, & Fulker, 1990), and whether or not they get divorced (McGue & Lykken, 1992).
It may seem surprising, but genetic influence on behavior is a relatively recent discovery. In the middle of the 20th century, psychology was dominated by the doctrine of behaviorism, which held that behavior could only be explained in terms of environmental factors. Psychiatry concentrated on psychoanalysis, which probed for roots of behavior in individuals’ early life-histories. The truth is, neither behaviorism nor psychoanalysis is incompatible with genetic influences on behavior, and neither Freud nor Skinner was naive about the importance of organic processes in behavior. Nevertheless, in their day it was widely thought that children’s personalities were shaped entirely by imitating their parents’ behavior, and that schizophrenia was caused by certain kinds of “pathological mothering.” Whatever the outcome of our broader discussion of nature–nurture, the basic fact that the best predictors of an adopted child’s personality or mental health are found in the biological parents he or she has never met, rather than in the adoptive parents who raised him or her, presents a significant challenge to purely environmental explanations of personality or psychopathology. The message is clear: You can’t leave genes out of the equation. But keep in mind, no behavioral traits are completely inherited, so you can’t leave the environment out altogether, either.
Trying to untangle the various ways nature-nurture influences human behavior can be messy, and often common-sense notions can get in the way of good science. One very significant contribution of behavioral genetics, one that has changed psychology for good, is helpful to keep in mind: When your subjects are biologically related, no matter how clearly a situation may seem to point to environmental influence, it is never safe to interpret a behavior as wholly the result of nurture without further evidence. For example, when presented with data showing that children whose mothers read to them often are likely to have better reading scores in third grade, it is tempting to conclude that reading to your kids out loud is important to success in school; this may well be true, but the study as described is inconclusive, because there are genetic as well as environmental pathways between the parenting practices of mothers and the abilities of their children. This is a case where “correlation does not imply causation,” as they say. To establish that reading aloud causes success, a scientist can either study the problem in adoptive families (in which the genetic pathway is absent) or find a way to randomly assign children to oral reading conditions.
The outcomes of nature–nurture studies have fallen short of our expectations (of establishing clear-cut bases for traits) in many ways. The most disappointing outcome has been the inability to organize traits from more- to less-genetic. As noted earlier, everything has turned out to be at least somewhat heritable (passed down), yet nothing has turned out to be absolutely heritable, and there hasn’t been much consistency as to which traits are more heritable and which are less heritable once other considerations (such as how accurately the trait can be measured) are taken into account (Turkheimer, 2000). The problem is conceptual: The heritability coefficient, and, in fact, the whole quantitative structure that underlies it, does not match up with our nature–nurture intuitions. We want to know how “important” the roles of genes and environment are to the development of a trait, but in focusing on “important” maybe we’re emphasizing the wrong thing. First of all, genes and environment are both crucial to every trait; without genes the environment would have nothing to work on, and, likewise, genes cannot develop in a vacuum. Even more important, because nature–nurture questions look at the differences among people, the cause of a given trait depends not only on the trait itself, but also on the differences in that trait between members of the group being studied.
The classic example of the heritability coefficient defying intuition is the trait of having two arms. No one would argue against the development of arms being a biological, genetic process. But fraternal twins are just as similar for “two-armedness” as identical twins, resulting in a heritability coefficient of zero for the trait of having two arms. Normally, according to the heritability model, this result (coefficient of zero) would suggest all nurture, no nature, but we know that’s not the case. The reason this result is not a tip-off that arm development is less genetic than we imagine is because people do not vary in the genes related to arm development—which essentially upends the heritability formula. In fact, in this instance, the opposite is likely true: the extent that people differ in arm number is likely the result of accidents and, therefore, environmental. For reasons like these, we always have to be very careful when asking nature–nurture questions, especially when we try to express the answer in terms of a single number. The heritability of a trait is not simply a property of that trait, but a property of the trait in a particular context of relevant genes and environmental factors.
Another issue with the heritability coefficient is that it divides traits’ determinants into two portions—genes and environment—which together account for the total variability. This is a little like asking how much of the experience of a symphony comes from the horns and how much from the strings; the ways instruments, or genes, integrate is more complex than that. It turns out to be the case that, for many traits, genetic differences affect behavior under some environmental circumstances but not others—a phenomenon called gene-environment interaction, or G x E. In one well-known example, Caspi et al. (2002) showed that among maltreated children, those who carried a particular allele of the MAOA gene showed a predisposition to violence and antisocial behavior, while those with other alleles did not. In children who had not been maltreated, by contrast, the gene had no effect. Making matters even more complicated are very recent studies of what is known as epigenetics (see module, “Epigenetics”), a process in which the DNA itself is modified by environmental events, and those changes can be transmitted to children.
Some common questions about nature–nurture are, how susceptible is a trait to change, how malleable is it, and do we “have a choice” about it? These questions are much more complex than they may seem at first glance. For example, phenylketonuria is an inborn error of metabolism caused by a single gene; it prevents the body from metabolizing phenylalanine. Untreated, it causes intellectual disability and death. But it can be treated effectively by a straightforward environmental intervention: avoiding foods containing phenylalanine. Height seems like a trait firmly rooted in our nature and unchangeable, but the average height of many populations in Asia and Europe has increased significantly in the past 100 years, due to changes in diet and the alleviation of poverty. Even the most modern genetics has not provided definitive answers to nature–nurture questions. When it was first becoming possible to measure the DNA sequences of individual people, it was widely thought that we would quickly progress to finding the specific genes that account for behavioral characteristics, but that hasn’t happened. There are a few rare genes that have been found to have significant (almost always negative) effects, such as the single gene that causes Huntington’s disease, or the Apolipoprotein gene that causes early onset dementia in a small percentage of Alzheimer’s cases. Aside from these rare genes of great effect, however, the genetic impact on behavior is broken up over many genes, each with very small effects. For most behavioral traits, the effects are so small and distributed across so many genes that we have not been able to catalog them in a meaningful way. In fact, the same is true of environmental effects. We know that extreme environmental hardship causes catastrophic effects for many behavioral outcomes, but fortunately extreme environmental hardship is very rare. Within the normal range of environmental events, those responsible for differences (e.g., why some children in a suburban third-grade classroom perform better than others) are much more difficult to grasp.
The difficulties with finding clear-cut solutions to nature–nurture problems bring us back to the other great questions about our relationship with the natural world: the mind-body problem and free will. Investigations into what we mean when we say we are aware of something reveal that consciousness is not simply the product of a particular area of the brain, nor does choice turn out to be an orderly activity that we can apply to some behaviors but not others. So it is with nature and nurture: What at first may seem to be a straightforward matter, able to be indexed with a single number, becomes more and more complicated the closer we look. The many questions we can ask about the interplay among genes, environments, and human traits—how sensitive are traits to environmental change, and how common are those influential environments; are parents or culture more relevant; how sensitive are traits to differences in genes, and how much do the relevant genes vary in a particular population; does the trait involve a single gene or a great many genes; is the trait more easily described in genetic or more-complex behavioral terms?—may have different answers, and the answer to one tells us little about the answers to the others.
It is tempting to predict that the more we understand the wide-ranging effects of genetic differences on all human characteristics—especially behavioral ones—the more our cultural, ethical, legal, and personal ways of thinking about ourselves will have to change in response. Perhaps criminal proceedings will consider genetic background. Parents, presented with the genetic sequence of their children, will be faced with difficult decisions about reproduction. These hopes or fears are often exaggerated. In some ways, our thinking may need to change—for example, when we consider the meaning behind the fundamental American principle that all men are created equal. Human beings differ, and like all evolved organisms they differ genetically. The Declaration of Independence predates Darwin and Mendel, but it is hard to imagine that Jefferson—whose genius encompassed botany as well as moral philosophy—would have been alarmed to learn about the genetic diversity of organisms. One of the most important things modern genetics has taught us is that almost all human behavior is too complex to be nailed down, even from the most complete genetic information, unless we’re looking at identical twins. The science of nature and nurture has demonstrated that genetic differences among people are vital to human moral equality, freedom, and self-determination, not opposed to them. As Mordecai Kaplan said about the role of the past in Jewish theology, genetics gets a vote, not a veto, in the determination of human behavior. We should indulge our fascination with nature–nurture while resisting the temptation to oversimplify it.
Check Your Knowledge
To help you with your studying, we’ve included some practice questions for this module. These questions do not necessarily address all content in this module. They are intended as practice, and you are responsible for all of the content in this module even if there is no associated practice question. To promote deeper engagement with the material, we encourage you to create some questions of your own for your practice. You can then also return to these self-generated questions later in the course to test yourself.
Vocabulary
Adoption study
A behavior genetic research method that involves comparison of adopted children to their adoptive and biological parents.
Behavioral genetics
The empirical science of how genes and environments combine to generate behavior.
Heritability coefficient
An easily misinterpreted statistical construct that purports to measure the role of genetics in the explanation of differences among individuals.
Quantitative genetics
Scientific and mathematical methods for inferring genetic and environmental processes based on the degree of genetic and environmental similarity among organisms.
Twin studies
A behavior genetic research method that involves comparison of the similarity of identical (monozygotic; MZ) and fraternal (dizygotic; DZ) twins.
References
- Bouchard, T. J., Lykken, D. T., McGue, M., & Segal, N. L. (1990). Sources of human psychological differences: The Minnesota study of twins reared apart. Science, 250(4978), 223–228.
- Caspi, A., McClay, J., Moffitt, T. E., Mill, J., Martin, J., Craig, I. W., Taylor, A. & Poulton, R. (2002). Role of genotype in the cycle of violence in maltreated children. Science, 297(5582), 851–854.
- McGue, M., & Lykken, D. T. (1992). Genetic influence on risk of divorce. Psychological Science, 3(6), 368–373.
- Plomin, R., Corley, R., DeFries, J. C., & Fulker, D. W. (1990). Individual differences in television viewing in early childhood: Nature as well as nurture. Psychological Science, 1(6), 371–377.
- Plomin, R., DeFries, J. C., Knopik, V. S., & Neiderhiser, J. M. (2012). Behavioral genetics. New York, NY: Worth Publishers.
- Scott, J. P., & Fuller, J. L. (1998). Genetics and the social behavior of the dog. Chicago, IL: University of Chicago Press.
- Turkheimer, E. (2000). Three laws of behavior genetics and what they mean. Current Directions in Psychological Science, 9(5), 160–164.
How to cite this Chapter using APA Style:
Turkheimer, E. (2019). The nature-nurture question. Adapted for use by Queen's University. Original chapter in R. Biswas-Diener & E. Diener (Eds), Noba textbook series: Psychology. Champaign, IL: DEF publishers. Retrieved from http://noba.to/tvz92edh
Copyright and Acknowledgment:
This material is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit: http://creativecommons.org/licenses/by-nc-sa/4.0/deed.en_US.
This material is attributed to the Diener Education Fund (copyright © 2018) and can be accessed via this link: http://noba.to/tvz92edh.
Additional information about the Diener Education Fund (DEF) can be accessed here.
Original chapter by David M. Buss adapted by the Queen's University Psychology Department
This Open Access chapter was originally written for the NOBA project. Information on the NOBA project can be found below.
Evolution, or change over time, occurs through the processes of natural and sexual selection. In response to problems in our environment, we adapt both physically and psychologically to ensure our survival and reproduction. Sexual selection theory describes how evolution has shaped us to provide a mating advantage rather than just a survival advantage and occurs through two distinct pathways: intrasexual competition and intersexual selection. Gene selection theory, the modern explanation behind evolutionary biology, holds that evolution occurs through differential gene replication. Evolutionary psychology connects evolutionary principles with modern psychology and focuses primarily on psychological adaptations: changes in the way we think in order to improve our survival. Two major evolutionary psychological theories are described: Sexual strategies theory describes the psychology of human mating strategies and the ways in which women and men differ in those strategies. Error management theory describes the evolution of biases in the way we think about uncertain situations.
Learning Objectives
- Learn what “evolution” means.
- Define the primary mechanisms by which evolution takes place.
- Identify the two major classes of adaptations.
- Define sexual selection and its two primary processes.
- Define gene selection theory.
- Understand psychological adaptations.
- Identify the core premises of sexual strategies theory.
- Identify the core premises of error management theory, and provide two empirical examples of adaptive cognitive biases.
Introduction
If you have ever been on a first date, you’re probably familiar with the anxiety of trying to figure out what clothes to wear or what perfume or cologne to put on. In fact, you may even consider flossing your teeth for the first time all year. When considering why you put in all this work, you probably recognize that you’re doing it to impress the other person. But how did you learn these particular behaviors? Where did you get the idea that a first date should be at a nice restaurant or someplace unique? It is possible that we have been taught these behaviors by observing others. It is also possible, however, that these behaviors—the fancy clothes, the expensive restaurant—are biologically programmed into us. That is, just as peacocks display their feathers to show how attractive they are, or some lizards do push-ups to show how strong they are, when we style our hair or bring a gift to a date, we’re trying to communicate to the other person: “Hey, I’m a good mate! Choose me! Choose me!”
However, we all know that our ancestors hundreds of thousands of years ago weren’t driving sports cars or wearing designer clothes to attract mates. So how could someone ever say that such behaviors are “biologically programmed” into us? Well, even though our ancestors might not have been doing these specific actions, these behaviors are the result of the same driving force: the powerful influence of evolution. Yes, evolution—certain traits and behaviors developing over time because they are advantageous to our survival. In the case of dating, doing something like offering a gift might represent more than a nice gesture. Just as chimpanzees will give food to mates to show they can provide for them, when you offer gifts to your dates, you are communicating that you have the money or “resources” to help take care of them. And even though the person receiving the gift may not realize it, the same evolutionary forces are influencing his or her behavior as well. The receiver of the gift evaluates not only the gift but also the gift-giver's clothes, physical appearance, and many other qualities, to determine whether the individual is a suitable mate. But because these evolutionary processes are hardwired into us, it is easy to overlook their influence.
To broaden your understanding of evolutionary processes, this module will present some of the most important elements of evolution as they impact psychology. Evolutionary theory helps us piece together the story of how we humans have prospered. It also helps to explain why we behave as we do on a daily basis in our modern world: why we bring gifts on dates, why we get jealous, why we crave our favorite foods, why we protect our children, and so on. Evolution may seem like a historical concept that applies only to our ancient ancestors but, in truth, it is still very much a part of our modern daily lives.
Basics of Evolutionary Theory
Evolution simply means change over time. Many think of evolution as the development of traits and behaviors that allow us to survive this “dog-eat-dog” world, like strong leg muscles to run fast, or fists to punch and defend ourselves. However, physical survival is only important if it eventually contributes to successful reproduction. That is, even if you live to be 100 years old, if you fail to mate and produce children, your genes will die with your body. Thus, reproductive success, not survival success, is the engine of evolution by natural selection. Every mating success by one person means the loss of a mating opportunity for another. Yet every living human being is an evolutionary success story. Each of us is descended from a long and unbroken line of ancestors who triumphed over others in the struggle to survive (at least long enough to mate) and reproduce. However, in order for our genes to endure over time—to survive harsh climates, to defeat predators—we have inherited adaptive, psychological processes designed to ensure success.
At the broadest level, we can think of organisms, including humans, as having two large classes of adaptations—or traits and behaviors that evolved over time to increase our reproductive success. The first class of adaptations is called survival adaptations: mechanisms that helped our ancestors handle the “hostile forces of nature.” For example, in order to survive very hot temperatures, we developed sweat glands to cool ourselves. In order to survive very cold temperatures, we developed shivering mechanisms (the speedy contraction and expansion of muscles to produce warmth). Other examples of survival adaptations include a craving for fats and sugars, which encourages us to seek out the energy-rich foods that keep us going longer during food shortages. Some threats, such as snakes, spiders, darkness, heights, and strangers, often produce fear in us, which encourages us to avoid them and thereby stay safe. These are also examples of survival adaptations. However, all of these adaptations are for physical survival, whereas the second class of adaptations are for reproduction, and help us compete for mates. These adaptations are described in an evolutionary theory proposed by Charles Darwin, called sexual selection theory.
Sexual Selection Theory
Darwin noticed that there were many traits and behaviors of organisms that could not be explained by “survival selection.” For example, the brilliant plumage of peacocks should actually lower their rates of survival. That is, the peacocks’ feathers act like a neon sign to predators, advertising “Easy, delicious dinner here!” But if these bright feathers only lower peacocks’ chances at survival, why do they have them? The same can be asked of similar characteristics of other animals, such as the large antlers of stags or the wattles of roosters, which also seem to be unfavorable to survival. Again, if these traits only make the animals less likely to survive, why did they develop in the first place? And how have these animals continued to survive with these traits over thousands and thousands of years? Darwin’s answer to this conundrum was the theory of sexual selection: the evolution of characteristics, not because of survival advantage, but because of mating advantage.
Sexual selection occurs through two processes. The first, intrasexual competition, occurs when members of one sex compete against each other, and the winner gets to mate with a member of the opposite sex. Stags, for example, battle with their antlers, and the winner (often the stronger one with larger antlers) gains mating access to the female. That is, even though large antlers make it harder for the stags to run through the forest and evade predators (which lowers their survival success), they provide the stags with a better chance of attracting a mate (which increases their reproductive success). Similarly, human males sometimes also compete against each other in physical contests: boxing, wrestling, karate, or group-on-group sports, such as football. Even though engaging in these activities poses a "threat" to their survival success, as with the stag, the victors are often more attractive to potential mates, increasing their reproductive success. Thus, whatever qualities lead to success in intrasexual competition are then passed on with greater frequency due to their association with greater mating success.
The second process of sexual selection is preferential mate choice, also called intersexual selection. In this process, if members of one sex are attracted to certain qualities in mates—such as brilliant plumage, signs of good health, or even intelligence—those desired qualities get passed on in greater numbers, simply because their possessors mate more often. For example, the colorful plumage of peacocks exists due to a long evolutionary history of peahens’ (the term for female peacocks) attraction to males with brilliantly colored feathers.
In all sexually-reproducing species, adaptations in both sexes (males and females) exist due to survival selection and sexual selection. However, unlike other animals where one sex has dominant control over mate choice, humans have “mutual mate choice.” That is, both women and men typically have a say in choosing their mates. And both mates value qualities such as kindness, intelligence, and dependability that are beneficial to long-term relationships—qualities that make good partners and good parents.
Gene Selection Theory
In modern evolutionary theory, all evolutionary processes boil down to an organism’s genes. Genes are the basic “units of heredity,” or the information that is passed along in DNA that tells the cells and molecules how to “build” the organism and how that organism should behave. Genes that are better able to encourage the organism to reproduce, and thus replicate themselves in the organism’s offspring, have an advantage over competing genes that are less able. For example, take female sloths: In order to attract a mate, they will scream as loudly as they can, to let potential mates know where they are in the thick jungle. Now, consider two types of genes in female sloths: one gene that allows them to scream extremely loudly, and another that only allows them to scream moderately loudly. In this case, the sloth with the gene that allows her to shout louder will attract more mates—increasing reproductive success—which ensures that her genes are more readily passed on than those of the quieter sloth.
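The sloth example can be expressed with the standard one-locus selection recursion from population genetics (the fitness values here are invented for illustration, not taken from the text). If the loud allele L has relative fitness \(w_L = 1.1\) and the quiet allele Q has \(w_Q = 1.0\), an allele frequency of \(p = 0.5\) changes in one generation to

\[
p' = \frac{p\,w_L}{p\,w_L + (1-p)\,w_Q} = \frac{0.5 \times 1.1}{0.5 \times 1.1 + 0.5 \times 1.0} \approx 0.524
\]

A seemingly small mating advantage, compounded over many generations, is enough to carry the louder-scream gene to high frequency.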
Essentially, genes can boost their own replicative success in two basic ways. First, they can influence the odds for survival and reproduction of the organism they are in (individual reproductive success or fitness—as in the example with the sloths). Second, genes can also influence the organism to help other organisms who also likely contain those genes—known as “genetic relatives”—to survive and reproduce (which is called inclusive fitness). For example, why do human parents tend to help their own kids with the financial burdens of a college education and not the kids next door? Well, having a college education increases one’s attractiveness to other mates, which increases one’s likelihood for reproducing and passing on genes. And because parents’ genes are in their own children (and not the neighborhood children), funding their children’s educations increases the likelihood that the parents’ genes will be passed on.
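Inclusive fitness has a compact standard formalization, Hamilton’s rule: helping a genetic relative is favored by selection when

\[
r \times b > c
\]

where \(r\) is the genetic relatedness between helper and recipient, \(b\) is the reproductive benefit to the recipient, and \(c\) is the cost to the helper. For a parent funding a child (\(r = 0.5\)), the help is favored whenever the benefit to the child exceeds twice the parent’s cost; for the neighbors’ children (\(r \approx 0\)), the left-hand side is near zero, so the same expense is not favored—matching the tuition example above.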
Understanding gene replication is the key to understanding modern evolutionary theory. It also fits well with many evolutionary psychological theories. However, for the time being, we’ll ignore genes and focus primarily on actual adaptations that evolved because they helped our ancestors survive and/or reproduce.
Evolutionary Psychology
Evolutionary psychology aims the lens of modern evolutionary theory on the workings of the human mind. It focuses primarily on psychological adaptations: mechanisms of the mind that have evolved to solve specific problems of survival or reproduction. These kinds of adaptations are in contrast to physiological adaptations, which are adaptations that occur in the body as a consequence of one’s environment. One example of a physiological adaptation is how our skin makes calluses. First, there is an “input,” such as repeated friction to the skin on the bottom of our feet from walking. Second, there is a “procedure,” in which the skin grows new skin cells at the afflicted area. Third, an actual callus forms as an “output” to protect the underlying tissue—the final outcome of the physiological adaptation (i.e., tougher skin to protect repeatedly scraped areas). On the other hand, a psychological adaptation is a development or change of a mechanism in the mind. For example, take sexual jealousy. First, there is an “input,” such as a romantic partner flirting with a rival. Second, there is a “procedure,” in which the person evaluates the threat the rival poses to the romantic relationship. Third, there is a behavioral output, which might range from vigilance (e.g., snooping through a partner’s email) to violence (e.g., threatening the rival).
Evolutionary psychology is fundamentally an interactionist framework, or a theory that takes into account multiple factors when determining the outcome. For example, jealousy, like a callus, doesn’t simply pop up out of nowhere. There is an “interaction” between the environmental trigger (e.g., the flirting; the repeated rubbing of the skin) and the initial response (e.g., evaluation of the flirter’s threat; the forming of new skin cells) to produce the outcome.
In evolutionary psychology, culture also has a major effect on psychological adaptations. For example, status within one’s group is important in all cultures for achieving reproductive success, because higher status makes someone more attractive to mates. In individualistic cultures, such as the United States, status is heavily determined by individual accomplishments. But in more collectivist cultures, such as Japan, status is more heavily determined by contributions to the group and by that group’s success. For example, consider a group project. If you were to put in most of the effort on a successful group project, the culture in the United States reinforces the psychological adaptation to try to claim that success for yourself (because individual achievements are rewarded with higher status). However, the culture in Japan reinforces the psychological adaptation to attribute that success to the whole group (because collective achievements are rewarded with higher status). Another example of cultural input is the importance of virginity as a desirable quality for a mate. Cultural norms that advise against premarital sex persuade people to ignore their own basic interests because they know that virginity will make them more attractive marriage partners. Evolutionary psychology, in short, does not predict rigid robotic-like “instincts.” That is, there isn’t one rule that works all the time. Rather, evolutionary psychology studies flexible, environmentally-connected and culturally-influenced adaptations that vary according to the situation.
Psychological adaptations are hypothesized to be wide-ranging, and include food preferences, habitat preferences, mate preferences, and specialized fears. These psychological adaptations also include many traits that improve people's ability to live in groups, such as the desire to cooperate and make friends, or the inclination to spot and avoid frauds, punish rivals, establish status hierarchies, nurture children, and help genetic relatives. Research programs in evolutionary psychology develop and empirically test predictions about the nature of psychological adaptations. Below, we highlight a few evolutionary psychological theories and their associated research approaches.
Sexual Strategies Theory
Sexual strategies theory is based on sexual selection theory. It proposes that humans have evolved a list of different mating strategies, both short-term and long-term, that vary depending on culture, social context, parental influence, and personal mate value (desirability in the “mating market”).
In its initial formulation, sexual strategies theory focused on the differences between men and women in mating preferences and strategies (Buss & Schmitt, 1993). It started by looking at the minimum parental investment needed to produce a child. For women, even the minimum investment is significant: after becoming pregnant, they have to carry that child for nine months inside of them. For men, on the other hand, the minimum investment to produce the same child is considerably smaller—simply the act of sex.
These differences in parental investment have an enormous impact on sexual strategies. For a woman, the risks associated with making a poor mating choice are high. She might get pregnant by a man who will not help to support her and her children, or who might have poor-quality genes. And because the stakes are higher for a woman, wise mating decisions for her are much more valuable. For men, on the other hand, the need to focus on making wise mating decisions isn’t as important. That is, unlike women, men 1) don’t biologically have the child growing inside of them for nine months, and 2) do not have as high a cultural expectation to raise the child. This logic leads to a powerful set of predictions: In short-term mating, women will likely be choosier than men (because the costs of getting pregnant are so high), while men, on average, will likely engage in more casual sexual activities (because this cost is greatly lessened). Due to this, men will sometimes deceive women about their long-term intentions for the benefit of short-term sex, and men are more likely than women to lower their mating standards for short-term mating situations.
An extensive body of empirical evidence supports these and related predictions (Buss & Schmitt, 2011). Men express a desire for a larger number of sex partners than women do. They let less time elapse before seeking sex. They are more willing to consent to sex with strangers and are less likely to require emotional involvement with their sex partners. They have more frequent sexual fantasies and fantasize about a larger variety of sex partners. They are more likely to regret missed sexual opportunities. And they lower their standards in short-term mating, showing a willingness to mate with a larger variety of women as long as the costs and risks are low.
However, in situations where both the man and woman are interested in long-term mating, both sexes tend to invest substantially in the relationship and in their children. In these cases, the theory predicts that both sexes will be extremely choosy when pursuing a long-term mating strategy. Much empirical research supports this prediction, as well. In fact, the qualities women and men generally look for when choosing long-term mates are very similar: both want mates who are intelligent, kind, understanding, healthy, dependable, honest, loyal, loving, and adaptable.
Nonetheless, women and men do differ in their preferences for a few key qualities in long-term mating, because of somewhat distinct adaptive problems. Modern women have inherited the evolutionary trait to desire mates who possess resources, have qualities linked with acquiring resources (e.g., ambition, wealth, industriousness), and are willing to share those resources with them. On the other hand, men more strongly desire youth and health in women, as both are cues to fertility. These male and female differences are universal in humans. They were first documented in 37 different cultures, from Australia to Zambia (Buss, 1989), and have been replicated by dozens of researchers in dozens of additional cultures (for summaries, see Buss, 2012).
Of course, even though we have these mating preferences (e.g., men with resources; fertile women), people don't always get what they want. There are countless other factors that influence who people ultimately select as their mate. For example, the sex ratio (the percentage of men to women in the mating pool), cultural practices (such as arranged marriages, which inhibit individuals’ freedom to act on their preferred mating strategies), the strategies of others (e.g., if everyone else is pursuing short-term sex, it’s more difficult to pursue a long-term mating strategy), and many others all influence who we select as our mates.
Sexual strategies theory—anchored in sexual selection theory—predicts specific similarities and differences in men’s and women’s mating preferences and strategies. Whether we seek short-term or long-term relationships, many personality, social, cultural, and ecological factors will all influence who our partners will be.
Error Management Theory
Error management theory (EMT) deals with the evolution of how we think, make decisions, and evaluate uncertain situations—that is, situations where there's no clear answer as to how we should behave. Consider, for example, walking through the woods at dusk. You hear a rustle in the leaves on the path in front of you. It could be a snake. Or, it could just be the wind blowing the leaves. Because you can't really tell why the leaves rustled, it’s an uncertain situation. The important question then is, what are the costs of errors in judgment? That is, if you conclude that it’s a dangerous snake so you avoid the leaves, the costs are minimal (i.e., you simply make a short detour around them). However, if you assume the leaves are safe and simply walk over them—when in fact it is a dangerous snake—the decision could cost you your life.
Now, think about our evolutionary history and how generation after generation was confronted with similar decisions, where one option had low cost but great reward (walking around the leaves and not getting bitten) and the other had a low reward but high cost (walking through the leaves and getting bitten). These kinds of situations, in which the two possible errors carry very different costs, are called “cost asymmetries.” If during our evolutionary history we encountered decisions like these generation after generation, over time an adaptive bias would be created: we would make sure to err in favor of the least costly (in this case, least dangerous) option (e.g., walking around the leaves). To put it another way, EMT predicts that whenever uncertain situations present us with a safer versus more dangerous decision, we will psychologically adapt to prefer choices that minimize the cost of errors.
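The cost asymmetry can be written as a simple expected-cost comparison (the probabilities and costs below are invented for this sketch, not taken from the text). Suppose the rustle conceals a snake with probability \(p = 0.05\), a detour costs \(c_d = 1\) in arbitrary units, and a bite costs \(c_b = 1000\):

\[
E[\text{cost} \mid \text{detour}] = c_d = 1, \qquad
E[\text{cost} \mid \text{walk through}] = p \cdot c_b = 0.05 \times 1000 = 50
\]

Detouring has the lower expected cost whenever \(p > c_d / c_b = 0.001\), so selection can favor a “better safe than sorry” bias even when the threat is almost always absent.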
EMT is a general evolutionary psychological theory that can be applied to many different domains of our lives, but a specific example of it is the visual descent illusion. To illustrate: Have you ever thought it would be no problem to jump off of a ledge, but as soon as you stood up there, it suddenly looked much higher than you thought? The visual descent illusion (Jackson & Cormack, 2008) states that people will overestimate the distance when looking down from a height (compared to looking up) so that people will be especially wary of falling from great heights—which would result in injury or death. Another example of EMT is the auditory looming bias: Have you ever noticed how an ambulance seems closer when it’s coming toward you, but suddenly seems much farther away once it has passed? With the auditory looming bias, people overestimate how close objects are when the sound is moving toward them compared to when it is moving away from them. From our evolutionary history, humans learned, "It’s better to be safe than sorry." Therefore, if we think that a threat is closer to us when it’s moving toward us (because it seems louder), we will be quicker to act and escape. In this regard, there may be times we ran away when we didn’t need to (a false alarm), but wasting that time is a less costly mistake than not acting in the first place when a real threat does exist.
EMT has also been used to predict adaptive biases in the domain of mating. Consider something as simple as a smile. In one case, a smile from a potential mate could be a sign of sexual or romantic interest. On the other hand, it may just signal friendliness. Because of the costs to men of missing out on chances for reproduction, EMT predicts that men have a sexual overperception bias: they often misread sexual interest from a woman, when really it’s just a friendly smile or touch. In the mating domain, the sexual overperception bias is one of the best-documented phenomena. It’s been shown in studies in which men and women rated the sexual interest between people in photographs and videotaped interactions. As well, it’s been shown in the laboratory with participants engaging in actual “speed dating,” where the men interpret sexual interest from the women more often than the women actually intended it (Perilloux, Easton, & Buss, 2012). In short, EMT predicts that men, more than women, will over-infer sexual interest based on minimal cues, and empirical research confirms this adaptive mating bias.
Where Does Evolutionary Psychology Stand Today?
This chapter places a focus on heteronormative relationships with the purpose of sexual reproduction. What about relationships that are not heteronormative? We are lucky to have Dr. Sari van Anders as part of our department, and in this video, Dr. van Anders addresses this important question:
View video in full screen (opens in a new tab)
This is a large area of research. If you are interested in learning more, the Queen's University Department of Psychology offers courses including Human Sexuality, Sexuality & Gender, and Gender, Hormones, & Behaviour.
Check Your Knowledge
To help you with your studying, we’ve included some practice questions for this module. These questions do not necessarily address all content in this module. They are intended as practice, and you are responsible for all of the content in this module even if there is no associated practice question. To promote deeper engagement with the material, we encourage you to create some questions of your own for your practice. You can then also return to these self-generated questions later in the course to test yourself.
Vocabulary
- Adaptations
- Evolved solutions to problems that historically contributed to reproductive success.
- Error management theory (EMT)
- A theory of selection under conditions of uncertainty in which recurrent cost asymmetries of judgment or inference favor the evolution of adaptive cognitive biases that function to minimize the more costly errors.
- Evolution
- Change over time.
- Gene Selection Theory
- The modern theory of evolution by selection by which differential gene replication is the defining process of evolutionary change.
- Intersexual selection
- A process of sexual selection by which evolution (change) occurs as a consequence of the mate preferences of one sex exerting selection pressure on members of the opposite sex.
- Intrasexual competition
- A process of sexual selection by which members of one sex compete with each other, and the victors gain preferential mating access to members of the opposite sex.
- Natural selection
- Differential reproductive success as a consequence of differences in heritable attributes.
- Psychological adaptations
- Mechanisms of the mind that evolved to solve specific problems of survival or reproduction; conceptualized as information processing devices.
- Sexual selection
- The evolution of characteristics because of the mating advantage they give organisms.
- Sexual strategies theory
- A comprehensive evolutionary theory of human mating that defines the menu of mating strategies humans pursue (e.g., short-term casual sex, long-term committed mating), the adaptive problems women and men face when pursuing these strategies, and the evolved solutions to these mating problems.
References
- Buss, D. M. (2012). Evolutionary psychology: The new science of the mind (4th ed.). Boston, MA: Allyn & Bacon.
- Buss, D. M. (1989). Sex differences in human mate preferences: Evolutionary hypotheses tested in 37 cultures. Behavioral & Brain Sciences, 12, 1–49.
- Buss, D. M., & Schmitt, D. P. (2011). Evolutionary psychology and feminism. Sex Roles, 64, 768–787.
- Buss, D. M., & Schmitt, D. P. (1993). Sexual strategies theory: An evolutionary perspective on human mating. Psychological Review, 100, 204–232.
- Haselton, M. G., & Buss, D. M. (2000). Error management theory: A new perspective on biases in cross-sex mind reading. Journal of Personality and Social Psychology, 78, 81–91.
- Haselton, M. G., Nettle, D., & Andrews, P. W. (2005). The evolution of cognitive bias. In D. M. Buss (Ed.), The handbook of evolutionary psychology (pp. 724–746). New York, NY: Wiley.
- Jackson, R. E., & Cormack, J. K. (2008). Evolved navigation theory and the environmental vertical illusion. Evolution and Human Behavior, 29, 299–304.
- Perilloux, C., Easton, J. A., & Buss, D. M. (2012). The misperception of sexual interest. Psychological Science, 23, 146–151.
This course makes use of Open Educational Resources. Information on the original source of this chapter can be found below.
Introduction to Psychology: 1st Canadian Edition was adapted by Jennifer Walinga from Charles Stangor’s textbook, Introduction to Psychology. For information about what was changed in this adaptation, refer to the Copyright statement at the bottom of the home page. This adaptation is a part of the B.C. Open Textbook Project.
In October 2012, the B.C. Ministry of Advanced Education announced its support for the creation of open textbooks for the 40 highest-enrolled first and second year subject areas in the province’s public post-secondary system.
Open textbooks are open educational resources (OER); they are instructional resources created and shared in ways so that more people have access to them. This is a different model than traditionally copyrighted materials. OER are defined as “teaching, learning, and research resources that reside in the public domain or have been released under an intellectual property license that permits their free use and re-purposing by others.” (Hewlett Foundation).
BCcampus’ open textbooks are openly licensed using a Creative Commons license, and are offered in various e-book formats free of charge, or as printed books that are available at cost.
For more information about this project, please contact opentext@bccampus.ca.
If you are an instructor who is using this book for a course, please fill out an adoption form.
Copyright and Acknowledgements
This material is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
Original chapter by Aaron Benjamin adapted by the Queen's University Psychology Department
This Open Access chapter was originally written for the NOBA project. Information on the NOBA project can be found below.
Learning is a complex process that defies easy definition and description. This module reviews some of the philosophical issues involved with defining learning and describes in some detail the characteristics of learners and of encoding activities that seem to affect how well people can acquire new memories, knowledge, or skills. At the end, we consider a few basic principles that guide whether a particular attempt at learning will be successful or not.
Learning Objectives
- Consider what kinds of activities constitute learning.
- Name multiple forms of learning.
- List some individual differences that affect learning.
- Describe the effect of various encoding activities on learning.
- Describe three general principles of learning.
Introduction
What do you do when studying for an exam? Do you read your class notes and textbook (hopefully not for the very first time)? Do you try to find a quiet place without distraction? Do you use flash cards to test your knowledge? The choices you make reveal your theory of learning, but there is no reason for you to limit yourself to your own intuitions. There is a vast and vibrant science of learning, in which researchers from psychology, education, and neuroscience study basic principles of learning and memory.
In fact, learning is a much broader domain than you might think. Consider: Is listening to music a form of learning? More often, it seems listening to music is a way of avoiding learning. But we know that your brain’s response to auditory information changes with your experience with that information, a form of learning called auditory perceptual learning (Polley, Steinberg, & Merzenich, 2006). Each time we listen to a song, we hear it differently because of our experience. When we exhibit changes in behavior without having intended to learn something, that is called implicit learning (Seger, 1994), and when we exhibit changes in our behavior that reveal the influence of past experience even though we are not attempting to use that experience, that is called implicit memory (Richardson-Klavehn & Bjork, 1988).
Other well-studied forms of learning include the types of learning that are general across species. We can’t ask a slug to learn a poem or a lemur to learn to bat left-handed, but we can assess learning in other ways. For example, we can look for a change in our responses to things when we are repeatedly stimulated. If you live in a house with a grandfather clock, you know that what was once an annoying and intrusive sound is now probably barely audible to you. Similarly, poking an earthworm again and again is likely to lead to a reduction in its retraction from your touch. These phenomena are forms of nonassociative learning, in which repeated exposure to a single stimulus leads to a change in behavior (Pinsker, Kupfermann, Castelluci, & Kandel, 1970). When our response lessens with exposure, it is called habituation, and when it increases (like it might with a particularly annoying laugh), it is called sensitization. Animals can also learn about relationships between things, such as when an alley cat learns that the sound of janitors working in a restaurant precedes the dumping of delicious new garbage (an example of stimulus-stimulus learning called classical conditioning), or when a dog learns to roll over to get a treat (a form of stimulus-response learning called operant conditioning). These forms of learning will be covered in the module on Conditioning and Learning.
Here, we’ll review some of the conditions that affect learning, with an eye toward the type of explicit learning we do when trying to learn something. Jenkins (1979) classified experiments on learning and memory into four groups of factors (renamed here): learners, encoding activities, materials, and retrieval. In this module, we’ll focus on the first two categories; the module on Memory will consider other factors more generally.
Learners
People bring numerous individual differences with them into memory experiments, and many of these variables affect learning. In the classroom, motivation matters (Pintrich, 2003), though experimental attempts to induce motivation with money yield only modest benefits (Heyer & O’Kelly, 1949). Learners are, however, quite able to allocate more effort to learning prioritized materials than to unimportant ones (Castel, Benjamin, Craik, & Watkins, 2002).
In addition, the organization and planning skills that a learner exhibits matter a lot (Garavalia & Gredler, 2002), suggesting that the efficiency with which one organizes self-guided learning is an important component of learning. We will return to this topic soon.
One well-studied and important variable is working memory capacity. Working memory describes the form of memory we use to hold onto information temporarily. Working memory is used, for example, to keep track of where we are in the course of a complicated math problem, and what the relevant outcomes of prior steps in that problem are. Higher scores on working memory measures are predictive of better reasoning skills (Kyllonen & Christal, 1990), reading comprehension (Daneman & Carpenter, 1980), and even better control of attention (Kane, Conway, Hambrick, & Engle, 2008).
Anxiety also affects the quality of learning. For example, people with math anxiety have a smaller capacity for remembering math-related information in working memory, such as the results of carrying a digit in arithmetic (Ashcraft & Kirk, 2001). Having students write about their specific anxiety seems to reduce the worry associated with tests and increases performance on math tests (Ramirez & Beilock, 2011).
One good place to end this discussion is to consider the role of expertise. Though there probably is a finite limit on our ability to store information (Landauer, 1986), in practice, this concept is misleading. In fact, because the usual bottleneck to remembering something is our ability to access information, not our space to store it, having more knowledge or expertise actually enhances our ability to learn new information. A classic example can be seen in comparing a chess master with a chess novice on their ability to learn and remember the positions of pieces on a chessboard (Chase & Simon, 1973). In that experiment, the master remembered the location of many more pieces than the novice, even after only a very short glance. Maybe chess masters are just smarter than the average chess beginner, and have better memory? No: The advantage the expert exhibited was apparent only when the pieces were arranged in a plausible format for an ongoing chess game; when the pieces were placed randomly, both groups did equivalently poorly. Expertise allowed the master to chunk (Simon, 1974) multiple pieces into a smaller number of units of information—but only when that information was structured in such a way as to allow the application of that expertise.
Encoding Activities
What we do when we’re learning is very important. We’ve all had the experience of reading something and suddenly coming to the realization that we don’t remember a single thing, even the sentence that we just read. How we go about encoding information determines a lot about how much we remember.
You might think that the most important thing is to try to learn. Interestingly, this is not true, at least not completely. Trying to learn a list of words, as compared to just evaluating each word for its part of speech (i.e., noun, verb, adjective) does help you recall the words—that is, it helps you remember and write down more of the words later. But it actually impairs your ability to recognize the words—to judge on a later list which words are the ones that you studied (Eagle & Leiter, 1964). So this is a case in which incidental learning—that is, learning without the intention to learn—is better than intentional learning.
Such examples are not particularly rare and are not limited to recognition. Nairne, Pandeirada, and Thompson (2008) showed, for example, that survival processing—thinking about and rating each word in a list for its relevance in a survival scenario—led to much higher recall than intentional learning (and also higher, in fact, than other encoding activities that are also known to lead to high levels of recall). Clearly, merely intending to learn something is not enough. How a learner actively processes the material plays a large role; for example, reading words and evaluating their meaning leads to better learning than reading them and evaluating the way that the words look or sound (Craik & Lockhart, 1972). These results suggest that individual differences in motivation will not have a large effect on learning unless learners also have accurate ideas about how to effectively learn material when they care to do so.
So, do learners know how to effectively encode material? People allowed to freely allocate their time to study a list of words do remember those words better than a group that doesn’t have control over their own study time, though the advantage is relatively small and is limited to the subset of learners who choose to spend more time on the more difficult material (Tullis & Benjamin, 2011). In addition, learners who have an opportunity to review materials that they select for restudy often learn more than another group that is asked to restudy the materials that they didn’t select for restudy (Kornell & Metcalfe, 2006). However, this advantage also appears to be relatively modest (Kimball, Smith, & Muntean, 2012) and wasn’t apparent in a group of older learners (Tullis & Benjamin, 2012). Taken together, all of the evidence seems to support the claim that self-control of learning can be effective, but only when learners have good ideas about what an effective learning strategy is.
One factor that appears to have a big effect and that learners do not always appear to understand is the effect of scheduling repetitions of study. If you are studying for a final exam next week and plan to spend a total of five hours, what is the best way to distribute your study? The evidence is clear that spacing one’s repetitions apart in time is superior to massing them all together (Baddeley & Longman, 1978; Bahrick, Bahrick, Bahrick, & Bahrick, 1993; Melton, 1967). Increasing the spacing between consecutive presentations appears to benefit learning yet further (Landauer & Bjork, 1978).
A similar advantage is evident for the practice of interleaving multiple skills to be learned: For example, baseball batters improved more when they faced a mix of different types of pitches than when they faced the same pitches blocked by type (Hall, Domingues, & Cavazos, 1994). Students also showed better performance on a test when different types of mathematics problems were interleaved rather than blocked during learning (Taylor & Rohrer, 2010).
One final factor that merits discussion is the role of testing. Educators and students often think about testing as a way of assessing knowledge, and this is indeed an important use of tests. But tests themselves affect memory, because retrieval is one of the most powerful ways of enhancing learning (Roediger & Butler, 2013). Self-testing is an underutilized and potent means of making learning more durable.
General Principles of Learning
We’ve only begun to scratch the surface here of the many variables that affect the quality and content of learning (Mullin, Herrmann, & Searleman, 1993). But even within this brief examination of the differences between people and the activities they engage in, we can see some basic principles of the learning process.
The value of effective metacognition
To be able to guide our own learning effectively, we must be able to evaluate the progress of our learning accurately and choose activities that enhance learning efficiently. It is of little use to study for a long time if a student cannot discern between what material she has or has not mastered, and if additional study activities move her no closer to mastery. Metacognition describes the knowledge and skills people have in monitoring and controlling their own learning and memory. We can work to acquire better metacognition by paying attention to our successes and failures in estimating what we do and don’t know, and by using testing often to monitor our progress.
Transfer-appropriate processing
Sometimes, it doesn’t make sense to talk about whether a particular encoding activity is good or bad for learning. Rather, we can talk about whether that activity is good for learning as revealed by a particular test. For example, although reading words for meaning leads to better performance on a test of recall or recognition than paying attention to the pronunciation of the word, it leads to worse performance on a test that taps knowledge of that pronunciation, such as whether a previously studied word rhymes with another word (Morris, Bransford, & Franks, 1977). The principle of transfer-appropriate processing states that memory is “better” when the test taps the same type of knowledge as the original encoding activity. When thinking about how to learn material, we should always be thinking about the situations in which we are likely to need access to that material. An emergency responder who needs access to learned procedures under conditions of great stress should learn differently from a hobbyist learning to use a new digital camera.
The value of forgetting
Forgetting is sometimes seen as the enemy of learning, but, in fact, forgetting is a highly desirable part of the learning process. The main bottleneck we face in using our knowledge is being able to access it. We have all had the experience of retrieval failure—that is, not being able to remember a piece of information that we know we have, and that we can access easily once the right set of cues is provided. Because access is difficult, it is important to jettison information that is not needed—that is, to forget it. Without forgetting, our minds would become cluttered with out-of-date or irrelevant information. And, just imagine how complicated life would be if we were unable to forget the names of past acquaintances, teachers, or romantic partners.
But the value of forgetting is even greater than that. There is lots of evidence that some forgetting is a prerequisite for more learning. For example, the previously discussed benefits of distributing practice opportunities may arise in part because of the greater forgetting that takes place between those spaced learning events. It is for this reason that some encoding activities that are difficult and lead to the appearance of slow learning actually lead to superior learning in the long run (Bjork, 2011). When we opt for learning activities that enhance learning quickly, we must be aware that these are not always the same techniques that lead to durable, long-term learning.
Conclusion
To wrap things up, let’s think back to the questions we began the module with. What might you now do differently when preparing for an exam? Hopefully, you will think about testing yourself frequently, developing an accurate sense of what you do and do not know, considering how you are likely to use the knowledge, and using the scheduling of tasks to your advantage. If you are learning a new skill or new material, using the scientific study of learning as a basis for the study and practice decisions you make is a good bet.
Check Your Knowledge
To help you with your studying, we’ve included some practice questions for this module. These questions do not necessarily address all content in this module. They are intended as practice, and you are responsible for all of the content in this module even if there is no associated practice question. To promote deeper engagement with the material, we encourage you to create some questions of your own for your practice. You can then also return to these self-generated questions later in the course to test yourself.
Vocabulary
- Chunk
- The process of grouping information together using our knowledge.
- Classical conditioning
- Describes stimulus-stimulus associative learning.
- Encoding
- The act of putting information into memory.
- Habituation
- Occurs when the response to a stimulus decreases with exposure.
- Implicit learning
- Occurs when we acquire information without intent that we cannot easily express.
- Implicit memory
- A type of long-term memory that does not require conscious thought to encode. It's the type of memory one makes without intent.
- Incidental learning
- Any type of learning that happens without the intention to learn.
- Intentional learning
- Any type of learning that happens when motivated by intention.
- Metacognition
- Describes the knowledge and skills people have in monitoring and controlling their own learning and memory.
- Nonassociative learning
- Occurs when repeated exposure to a single stimulus leads to a change in behavior.
- Operant conditioning
- Describes stimulus-response associative learning.
- Perceptual learning
- Occurs when aspects of our perception change as a function of experience.
- Sensitization
- Occurs when the response to a stimulus increases with exposure.
- Transfer-appropriate processing
- A principle that states that memory performance is superior when a test taps the same cognitive processes as the original encoding activity.
- Working memory
- The form of memory we use to hold onto information temporarily, usually for the purposes of manipulation.
References
- Ashcraft, M. H., & Kirk, E. P. (2001). The relationships among working memory, math anxiety, and performance. Journal of Experimental Psychology: General, 130, 224–237.
- Baddeley, A. D., & Longman, D. J. A. (1978). The influence of length and frequency of training session on the rate of learning to type. Ergonomics, 21, 627–635.
- Bahrick, H. P., Bahrick, L. E., Bahrick, A. S., & Bahrick, P. O. (1993). Maintenance of foreign language vocabulary and the spacing effect. Psychological Science, 4, 316–321.
- Bjork, R. A. (2011). On the symbiosis of learning, remembering, and forgetting. In A. S. Benjamin (Ed.), Successful remembering and successful forgetting: A Festschrift in honor of Robert A. Bjork (pp. 1–22). London, UK: Psychology Press.
- Castel, A. D., Benjamin, A. S., Craik, F. I. M., & Watkins, M. J. (2002). The effects of aging on selectivity and control in short-term recall. Memory & Cognition, 30, 1078–1085.
- Chase, W. G., & Simon, H. A. (1973). Perception in chess. Cognitive Psychology, 4, 55–81.
- Craik, F. I. M., & Lockhart, R. S. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior, 11, 671–684.
- Daneman, M., & Carpenter, P. A. (1980). Individual differences in working memory and reading. Journal of Verbal Learning and Verbal Behavior, 19, 450–466.
- Eagle, M., & Leiter, E. (1964). Recall and recognition in intentional and incidental learning. Journal of Experimental Psychology, 68, 58–63.
- Garavalia, L. S., & Gredler, M. E. (2002). Prior achievement, aptitude, and use of learning strategies as predictors of college student achievement. College Student Journal, 36, 616–626.
- Hall, K. G., Domingues, D. A., & Cavazos, R. (1994). Contextual interference effects with skilled baseball players. Perceptual and Motor Skills, 78, 835–841.
- Heyer, A. W., Jr., & O’Kelly, L. I. (1949). Studies in motivation and retention: II. Retention of nonsense syllables learned under different degrees of motivation. Journal of Psychology: Interdisciplinary and Applied, 27, 143–152.
- Jenkins, J. J. (1979). Four points to remember: A tetrahedral model of memory experiments. In L. S. Cermak & F. I. M. Craik (Eds.), Levels of processing and human memory (pp. 429–446). Hillsdale, NJ: Erlbaum.
- Kane, M. J., Conway, A. R. A., Hambrick, D. Z., & Engle, R. W. (2008). Variation in working memory capacity as variation in executive attention and control. In A. R. A. Conway, C. Jarrold, M. J. Kane, A. Miyake, & J. N. Towse (Eds.), Variation in Working Memory (pp. 22–48). New York, NY: Oxford University Press.
- Kimball, D. R., Smith, T. A., & Muntean, W. J. (2012). Does delaying judgments of learning really improve the efficacy of study decisions? Not so much. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38, 923–954.
- Kornell, N., & Metcalfe, J. (2006). Study efficacy and the region of proximal learning framework. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32, 609–622.
- Kyllonen, P. C., & Christal, R. E. (1990). Reasoning ability is (little more than) working memory capacity. Intelligence, 14, 389–433.
- Landauer, T. K. (1986). How much do people remember? Some estimates of the quantity of learned information in long-term memory. Cognitive Science, 10, 477–493.
- Landauer, T. K., & Bjork, R. A. (1978). Optimum rehearsal patterns and name learning. In M. M. Gruneberg, P. E. Morris, & R. N. Sykes (Eds.), Practical aspects of memory (pp. 625–632). London, UK: Academic Press.
- Melton, A. W. (1967). Repetition and retrieval from memory. Science, 158, 532.
- Morris, C. D., Bransford, J. D., & Franks, J. J. (1977). Levels of processing versus transfer appropriate processing. Journal of Verbal Learning and Verbal Behavior, 16, 519–533.
- Mullin, P. A., Herrmann, D. J., & Searleman, A. (1993). Forgotten variables in memory theory and research. Memory, 1, 43–64.
- Nairne, J. S., Pandeirada, J. N. S., & Thompson, S. R. (2008). Adaptive memory: the comparative value of survival processing. Psychological Science, 19, 176–180.
- Pinsker, H., Kupfermann, I., Castelluci, V., & Kandel, E. (1970). Habituation and dishabituation of the gill-withdrawal reflex in Aplysia. Science, 167, 1740–1742.
- Pintrich, P. R. (2003). A motivational science perspective on the role of student motivation in learning and teaching contexts. Journal of Educational Psychology, 95, 667–686.
- Polley, D. B., Steinberg, E. E., & Merzenich, M. M. (2006). Perceptual learning directs auditory cortical map reorganization through top-down influences. The Journal of Neuroscience, 26, 4970–4982.
- Ramirez, G., & Beilock, S. L. (2011). Writing about testing worries boosts exam performance in the classroom. Science, 331, 211–213.
- Richardson-Klavehn, A., & Bjork, R. A. (1988). Measures of memory. Annual Review of Psychology, 39, 475–543.
- Roediger, H. L., & Butler, A. C. (2013). Retrieval practice (testing) effect. In H. L. Pashler (Ed.), Encyclopedia of the mind. Los Angeles, CA: Sage Publishing Co.
- Seger, C. A. (1994). Implicit learning. Psychological Bulletin, 115, 163–196.
- Simon, H. A. (1974). How big is a chunk? Science, 183, 482–488.
- Taylor, K., & Rohrer, D. (2010). The effects of interleaved practice. Applied Cognitive Psychology, 24, 837–848.
- Tullis, J. G., & Benjamin, A. S. (2011). On the effectiveness of self-paced learning. Journal of Memory and Language, 64, 109–118.
- Tullis, J. G., & Benjamin, A. S. (2012). Consequences of restudy choices in younger and older learners. Psychonomic Bulletin & Review, 19, 743–749.
How to cite this Chapter using APA Style:
Benjamin, A. (2019). Factors influencing learning. Adapted for use by Queen's University. Original chapter in R. Biswas-Diener & E. Diener (Eds), Noba textbook series: Psychology. Champaign, IL: DEF publishers. Retrieved from http://noba.to/rnxyg6wp
Copyright and Acknowledgment:
This material is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit: http://creativecommons.org/licenses/by-nc-sa/4.0/deed.en_US.
This material is attributed to the Diener Education Fund (copyright © 2018) and can be accessed via this link: http://noba.to/rnxyg6wp.
Additional information about the Diener Education Fund (DEF) can be accessed here.
Knowledge Emotions: Feelings that Foster Learning, Exploring, and Reflecting
Original chapter by Paul Silvia adapted by the Queen's University Psychology Department
This Open Access chapter was originally written for the NOBA project. Information on the NOBA project can be found below.
When people think of emotions they usually think of the obvious ones, such as happiness, fear, anger, and sadness. This module looks at the knowledge emotions, a family of emotional states that foster learning, exploring, and reflecting. Surprise, interest, confusion, and awe come from events that are unexpected, complicated, and mentally challenging, and they motivate learning in its broadest sense, be it learning over the course of seconds (finding the source of a loud crash, as in surprise) or over a lifetime (engaging with hobbies, pastimes, and intellectual pursuits, as in interest). The module reviews research on each emotion, with an emphasis on causes, consequences, and individual differences. As a group, the knowledge emotions motivate people to engage with new and puzzling things rather than avoid them. Over time, engaging with new things, ideas, and people broadens someone’s experiences and cultivates expertise. The knowledge emotions thus don’t gear up the body like fear, anger, and happiness do, but they do gear up the mind—a critical task for humans, who must learn essentially everything that they know.
Learning Objectives
- Identify the four knowledge emotions.
- Describe the patterns of appraisals that bring about these emotions.
- Discuss how the knowledge emotions promote learning.
- Apply the knowledge emotions to enhancing learning and education, and to one’s own life.
Introduction
What comes to mind when you think of emotions? It’s probably the elation of happiness, the despair of sadness, or the freak-out fright of fear. Emotions such as happiness, anger, sadness, and fear are important emotions, but human emotional experience is vast—people are capable of experiencing a wide range of feelings.
This module considers the knowledge emotions, a profoundly important family of emotions associated with learning, exploring, and reflecting. The family of knowledge emotions has four main members: surprise, interest, confusion, and awe. These are considered knowledge emotions for two reasons. First, the events that bring them about involve knowledge: These emotions happen when something violates what people expected or believed. Second, these emotions are fundamental to learning: Over time, they build useful knowledge about the world.
Some Background About Emotions
Before jumping into the knowledge emotions, we should consider what emotions do and when emotions happen. According to functionalist theories of emotion, emotions help people manage important tasks (Keltner & Gross, 1999; Parrott, 2001). Fear, for example, mobilizes the body to fight or flee; happiness rewards achieving goals and builds attachments to other people. What do knowledge emotions do? As we’ll see in detail later, they motivate learning, viewed in its broadest sense, during times when the environment is puzzling or erratic. Sometimes the learning is on a short time scale. Surprise, for example, makes people stop what they are doing, pay attention to the surprising thing, and evaluate whether it is dangerous (Simons, 1996). After a couple of seconds, people have learned what they needed to know and get back to what they were doing. But sometimes the learning takes place over the lifespan. Interest, for example, motivates people to learn about things over days, weeks, and years. Finding something interesting motivates “for its own sake” learning and is probably the major engine of human competence (Izard, 1977; Silvia, 2006).
What causes emotions to happen in the first place? Although it usually feels like something in the world—a good hug, a snake slithering across the driveway, a hot-air balloon shaped like a question mark—causes an emotion directly, emotion theories contend that emotions come from how we think about what is happening in the world, not what is literally happening. After all, if things in the world directly caused emotions, everyone would always have the same emotion in response to something. Appraisal theories (Ellsworth & Scherer, 2003; Lazarus, 1991) propose that each emotion is caused by a group of appraisals, which are evaluations and judgments of what events in the world mean for our goals and well-being: Is this relevant to me? Does it further or hinder my goals? Can I deal with it or do something about it? Did someone do it on purpose? Different emotions come from different answers to these appraisal questions.
With that as a background, in the following sections we’ll consider the nature, causes, and effects of each knowledge emotion. Afterward, we will consider some of their practical implications.
Surprise
Nothing gets people’s attention like something startling. Surprise, a simple emotion, hijacks a person’s mind and body and focuses them on a source of possible danger (Simons, 1996). When there’s a loud, unexpected crash, people stop, freeze, and orient to the source of the noise. Their minds are wiped clean—after something startling, people usually can’t remember what they had been talking about—and attention is focused on what just happened. By focusing all the body’s resources on the unexpected event, surprise helps people respond quickly (Simons, 1996).
Surprise has only one appraisal: A single “expectedness check” (Scherer, 2001) seems to be involved. When an event is “high contrast”—it sticks out against the background of what people expected to perceive or experience—people become surprised (Berlyne, 1960; Teigen & Keren, 2003). Figure 1 shows this pattern visually: Surprise is high when unexpectedness is high.
Emotions are momentary states, but people vary in their propensity to experience them. Just as some people experience happiness, anger, and fear more readily, some people are much more easily surprised than others. At one end, some people are hard to surprise; at the other end, people are startled by minor noises, flashes, and changes. Like other individual differences in emotion, extreme levels of surprise propensity can be dysfunctional. When people have extreme surprise responses to mundane things—known as hyperstartling (Simons, 1996) and hyperekplexia (Bakker, van Dijk, van den Maagdenberg, & Tijssen, 2006)—everyday tasks such as driving or swimming become dangerous.
Interest
People are curious creatures. Interest—an emotion that motivates exploration and learning (Silvia, 2012)—is one of the most commonly experienced emotions in everyday life (Izard, 1977). Humans must learn virtually everything they know, from how to cook pasta to how the brain works, and interest is an engine of this massive undertaking of learning across the lifespan.
The function of interest is to engage people with things that are new, odd, or unfamiliar. Unfamiliar things can be scary or unsettling, which makes people avoid them. But if people always avoided new things they would learn and experience nothing. It’s hard to imagine what life would be like if people weren’t curious to try new things: We would never feel like watching a different movie, trying a different restaurant, or meeting new people. Interest is thus a counterweight to anxiety—by making unfamiliar things appealing, it motivates people to experience and think about new things. As a result, interest is an intrinsically motivated form of learning. When curious, people want to learn something for its own sake, to know it for the simple pleasure of knowing it, not for an external reward, such as learning to get money, impress a peer, or receive the approval of a teacher or parent.
Figure 1 shows the two appraisals that create interest. Like surprise, interest involves appraisals of novelty: Things that are unexpected, unfamiliar, novel, and complex can evoke interest (Berlyne, 1960; Hidi & Renninger, 2006; Silvia, 2008). But unlike surprise, interest involves an additional appraisal of coping potential. In appraisal theories, coping potential refers to people’s evaluations of their ability to manage what is happening (Lazarus, 1991). When coping potential is high, people feel capable of handling the challenge at hand. For interest, this challenge is mental: Something odd and unexpected happened, and people can either feel able to understand it or not. When people encounter something that they appraise as both novel (high novelty and complexity) and comprehensible (high coping potential), they will find it interesting (Silvia, 2005).
The primary effect of interest is exploration: People will explore and think about the new and intriguing thing, be it an interesting object, person, or idea. By stimulating people to reflect and learn, interest builds knowledge and, in the long run, deep expertise. Consider, for example, the sometimes scary amount of knowledge people have about their hobbies. People who find cars, video games, high fashion, and soccer intrinsically interesting know an amazing amount about their passions—it would be hard to learn so much so quickly if people found it boring.
A huge amount of research shows that interest promotes learning that is faster, deeper, better, and more enjoyable (Hidi, 2001; Silvia, 2006). When people find material more interesting, they engage with it more deeply and learn it more thoroughly. This is true for simple kinds of learning—sentences and paragraphs are easier to remember when they are interesting (Sadoski, 2001; Schiefele, 1999)—and for broader academic success—people get better grades and feel more intellectually engaged in classes they find interesting (Krapp, 1999, 2002; Schiefele, Krapp, & Winteler, 1992).
Individual differences in interest are captured by trait curiosity (Kashdan, 2004; Kashdan et al., 2009). People low in curiosity prefer activities and ideas that are tried and true and familiar; people high in curiosity, in contrast, prefer things that are offbeat and new. Trait curiosity is a facet of openness to experience, a broader trait that is one of the five major factors of personality (McCrae, 1996; McCrae & Sutin, 2009). Not surprisingly, being high in openness to experience involves exploring new things and finding quirky things appealing. Research shows that curious, open people ask more questions in class, own and read more books, eat a wider range of food, and—not surprisingly, given their lifetime of engaging with new things—are a bit higher in intelligence (DeYoung, 2011; Kashdan & Silvia, 2009; Peters, 1978; Raine, Reynolds, Venables, & Mednick, 2002).
Confusion
Sometimes the world is weird. Interest is a wonderful resource when people encounter new and unfamiliar things, but those things aren’t always comprehensible. Confusion happens when people are learning something that is both unfamiliar and hard to understand. In the appraisal space shown in Figure 1, confusion comes from appraising an event as high in novelty, complexity, and unfamiliarity as well as appraising it as hard to comprehend (Silvia, 2010, 2013).
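Together with the appraisals for interest described above, this completes the two-dimensional space in Figure 1. As a minimal sketch of how that space sorts events into emotions, the following Python snippet assumes 0-to-1 appraisal scales and arbitrary 0.5 cutoffs; the function name and thresholds are illustrative inventions, not part of the theory’s formal statement.

```python
# A minimal sketch (not from the chapter) of the appraisal space in
# Figure 1. The 0-1 scales, the 0.5 cutoffs, and the function name are
# illustrative assumptions, not part of the formal theory.

def knowledge_emotion(novelty: float, coping_potential: float) -> str:
    """Predict the knowledge emotion from two appraisals on 0-1 scales.

    novelty          -- how unexpected/complex/unfamiliar the event seems
    coping_potential -- how able the person feels to understand the event
    """
    if novelty < 0.5:
        return "none"        # familiar, expected events evoke no knowledge emotion
    if coping_potential >= 0.5:
        return "interest"    # novel AND appraised as comprehensible
    return "confusion"       # novel but appraised as hard to understand

print(knowledge_emotion(novelty=0.9, coping_potential=0.8))  # interest
print(knowledge_emotion(novelty=0.9, coping_potential=0.2))  # confusion
```

On this reading, interest and confusion are neighbors in appraisal space: the same novel event tips one way or the other depending only on whether the person feels able to make sense of it.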
Confusion, like interest, promotes thinking and learning. This isn’t an obvious idea—our intuitions would suggest that confusion makes people frustrated and thus more likely to tune out and quit. But as odd as it sounds, making students confused can help them learn better. In an approach to learning known as impasse-driven learning (VanLehn, Siler, Murray, Yamauchi, & Baggett, 2003), making students confused motivates them to think through a problem instead of passively sitting and listening to what a teacher is saying. By working through the problem themselves, students learn actively and thus learn the material more deeply. In one experiment, for example, students learned about scientific research methods from two virtual reality tutors (D’Mello, Lehman, Pekrun, & Graesser, in press). The tutors sometimes contradicted each other, however, which made the students confused. Measures of simple learning (memory for basic concepts) and deep learning (being able to transfer an idea to a new area) showed that students who had to work through confusion learned more deeply—they were better at correctly applying what they learned to new problems.
In a study of facial expressions, Rozin and Cohen (2003) demonstrated what all college teachers know: It’s easy to spot confusion on someone’s face. When people are confused, they usually furrow, scrunch, or lower their eyebrows and purse or bite their lips (Craig, D’Mello, Witherspoon, & Graesser, 2008; Durso, Geldbach, & Corballis, 2012). In a clever application of these findings, researchers have developed artificial intelligence (AI) teaching and tutoring systems that can detect expressions of confusion (Craig et al., 2008). When the AI system detects confusion, it can ask questions and give hints that help the student work through the problem.
Not much is known about individual differences related to confusion, but differences in how much people know are important. In one research study, people viewed short film clips from movies submitted to a local film festival (Silvia & Berg, 2011). Some of the people were film experts, such as professors and graduate students in media studies and film theory; others were novices, such as the rest of us who simply watch movies for fun. The experts found the clips much more interesting and much less confusing than the novices did. A similar study discovered that experts in the arts found experimental visual art more interesting and less confusing than novices did (Silvia, 2013).
Awe
Awe—a state of fascination and wonder—is the deepest and probably least common of the knowledge emotions. When people are asked to describe profound experiences, such as the experience of beauty or spiritual transformation, awe is usually mentioned (Cohen, Gruber, & Keltner, 2010). People are likely to report experiencing awe when they are alone, engaged with art and music, or in nature (Shiota, Keltner, & Mossman, 2007).
Awe comes from two appraisals (Keltner & Haidt, 2003). First, people appraise something as vast, as beyond the normal scope of their experience. Thus, like the other knowledge emotions, awe involves appraising an event as inconsistent with one’s existing knowledge, but the degree of inconsistency is huge, usually when people have never encountered something like the event before (Bonner & Friedman, 2011). Second, people engage in accommodation, which is changing their beliefs—about themselves, other people, or the world in general—to make room for the new experience. When something is massive (in size, scope, sound, creativity, or anything else) and when people change their beliefs to accommodate it, they’ll experience awe.
A mild, everyday form of awe is chills, sometimes known as shivers or thrills. Chills involve getting goosebumps on the skin, especially the scalp, neck, back, and arms, usually as a wave that starts at the head and moves downward. Chills are part of strong awe experiences, but people often experience them in response to everyday events, such as compelling music and movies (Maruskin, Thrash, & Elliot, 2012; Nusbaum & Silvia, 2011). Music that evokes chills, for example, tends to be loud, to have a wide frequency range (such as both low and high frequencies), and to include major dynamic shifts, such as a shift from quiet to loud or a shift from few to many instruments (Huron & Margulis, 2010).
Like the other knowledge emotions, awe motivates people to engage with something outside the ordinary. Awe is thus a powerful educational tool. In science education, it is common to motivate learning by inspiring wonder. One example comes from a line of research on astronomy education, which seeks to educate the public about astronomy by using awe-inspiring images of deep space (Arcand, Watzke, Smith, & Smith, 2010). When people see beautiful and striking color images of supernovas, black holes, and planetary nebulas, they usually report feelings of awe and wonder. These feelings then motivate them to learn about the objects they are seeing and their scientific importance (Smith et al., 2011).
Regarding individual differences, some people experience awe much more often than others. One study that developed a brief scale to measure awe—the items included statements such as “I often feel awe” and “I feel wonder almost every day”—found that people who often experience awe are much higher in openness to experience (a trait associated with openness to new things and a wide emotional range) and in extraversion (a trait associated with positive emotionality) (Shiota, Keltner, & John, 2006). Similar findings appear when people are asked how often they experience awe in response to the arts (Nusbaum & Silvia, in press). For example, people who say that they often “feel a sense of awe and wonder” when listening to music are much higher in openness to experience (Silvia & Nusbaum, 2011).
Implications of the Knowledge Emotions
Learning about the knowledge emotions expands our ideas about what emotions are and what they do. Emotions clearly play important roles in everyday challenges such as responding to threats and building relationships. But emotions also aid in other, more intellectual challenges for humans. Compared with other animals, we are born with little knowledge but have the potential for enormous intelligence. Emotions such as surprise, interest, confusion, and awe first signal that something unexpected has happened that deserves our attention. They then motivate us to engage with the new things that strain our understanding of the world and how it works. Emotions surely aid fighting and fleeing, but for most of the hours of most of our days, they mostly aid in learning, exploring, and reflecting.
Check Your Knowledge
To help you with your studying, we’ve included some practice questions for this module. These questions do not necessarily address all content in this module. They are intended as practice, and you are responsible for all of the content in this module even if there is no associated practice question. To promote deeper engagement with the material, we encourage you to create some questions of your own for your practice. You can then also return to these self-generated questions later in the course to test yourself.
Vocabulary
- Accommodation
- Changing one's beliefs about the world and how it works in light of new experience.
- Appraisal structure
- The set of appraisals that bring about an emotion.
- Appraisal theories
- Theories of emotion contending that emotions are caused by patterns of appraisals, such as whether an event furthers or hinders a goal and whether an event can be coped with. Appraisals are evaluations that relate what is happening in the environment to people’s values, goals, and beliefs.
- Awe
- An emotion associated with profound, moving experiences. Awe comes about when people encounter an event that is vast (far from normal experience) but that can be accommodated in existing knowledge.
- Chills
- A feeling of goosebumps, usually on the arms, scalp, and neck, that is often experienced during moments of awe.
- Confusion
- An emotion associated with conflicting and contrary information, such as when people appraise an event as unfamiliar and as hard to understand. Confusion motivates people to work through the perplexing information and thus fosters deeper learning.
- Coping potential
- People's beliefs about their ability to handle challenges.
- Facial expressions
- Part of the expressive component of emotions, facial expressions of emotion communicate inner feelings to others.
- Functionalist theories of emotion
- Theories of emotion that emphasize the adaptive role of an emotion in handling common problems throughout evolutionary history.
- Impasse-driven learning
- An approach to instruction that motivates active learning by having learners work through perplexing barriers.
- Interest
- An emotion associated with curiosity and intrigue, interest motivates engaging with new things and learning more about them. It is one of the earliest emotions to develop and a resource for intrinsically motivated learning across the life span.
- Intrinsically motivated learning
- Learning that is “for its own sake”—such as learning motivated by curiosity and wonder—instead of learning to gain rewards or social approval.
- Knowledge emotions
- A family of emotions associated with learning, reflecting, and exploring. These emotions come about when unexpected and unfamiliar events happen in the environment. Broadly speaking, they motivate people to explore unfamiliar things, which builds knowledge and expertise over the long run.
- Openness to experience
- One of the five major factors of personality, this trait is associated with higher curiosity, creativity, emotional breadth, and open-mindedness. People high in openness to experience are more likely to experience interest and awe.
- Surprise
- An emotion rooted in expectancy violation that orients people toward the unexpected event.
- Trait curiosity
- Stable individual differences in how easily and how often people become curious.
References
- Arcand, K. K., Watzke, M., Smith, L. F., & Smith, J. K. (2010). Surveying aesthetics and astronomy: A project exploring the public’s perception of astronomical images and the science within. CAP Journal, 10, 13–16.
- Bakker, M. J., van Dijk, J. G., van den Maagdenberg, A. M. J. M., & Tijssen, M. A. J. (2006). Startle syndromes. The Lancet Neurology, 5, 513–524.
- Berlyne, D. E. (1960). Conflict, arousal, and curiosity. New York, NY: McGraw-Hill.
- Bonner, E. T., & Friedman, H. L. (2011). A conceptual clarification of the experience of awe: An interpretative phenomenological analysis. The Humanistic Psychologist, 39, 222–235.
- Cohen, A. B., Gruber, J., & Keltner, D. (2010). Comparing spiritual transformations and experiences of profound beauty. Psychology of Religion and Spirituality, 2, 127–135.
- Craig, S. D., D’Mello, S., Witherspoon, A., & Graesser, A. (2008). Emote aloud during learning with AutoTutor: Applying the facial action coding system to cognitive-affective states during learning. Cognition and Emotion, 22, 777–788.
- DeYoung, C. G. (2011). Intelligence and personality. In R. J. Sternberg & S. B. Kaufman (Eds.), The Cambridge handbook of intelligence (pp. 711–737). New York, NY: Cambridge University Press.
- Durso, F. T., Geldbach, K. M., & Corballis, P. (2012). Detecting confusion using facial electromyography. Human Factors, 54, 60–69.
- D’Mello, S., Lehman, B., Pekrun, R., & Graesser, A. (in press). Confusion can be beneficial for learning. Learning and Instruction.
- Ellsworth, P. C., & Scherer, K. R. (2003). Appraisal processes in emotion. In R. J. Davidson, K. R. Scherer, & H. H. Goldsmith (Eds.), Handbook of affective sciences (pp. 572–595). New York, NY: Oxford University Press.
- Hidi, S. (2001). Interest, reading, and learning: Theoretical and practical considerations. Educational Psychology Review, 13, 191–209.
- Hidi, S., & Renninger, K. A. (2006). The four-phase model of interest development. Educational Psychologist, 41, 111–127.
- Huron, D., & Margulis, E. H. (2010). Musical expectancy and thrills. In P. N. Juslin & J. A. Sloboda (Eds.), Handbook of music and emotion: Theory, research, applications (pp. 575–604). New York, NY: Oxford University Press.
- Izard, C. E. (1977). Human emotions. New York, NY: Plenum.
- Kashdan, T. B. (2004). Curiosity. In C. Peterson & M. E. P. Seligman (Eds.), Character strengths and virtues: A handbook and classification (pp. 125–141). New York, NY: Oxford University Press.
- Kashdan, T. B., & Silvia, P. J. (2009). Curiosity and interest: The benefits of thriving on novelty and challenge. In C. R. Snyder & S. J. Lopez (Eds.), Handbook of positive psychology (2nd ed., pp. 367–374). New York, NY: Oxford University Press.
- Kashdan, T. B., Gallagher, M. W., Silvia, P. J., Winterstein, B. P., Breen, W. E., Terhar, D., & Steger, M. F. (2009). The curiosity and exploration inventory–II: Development, factor structure, and psychometrics. Journal of Research in Personality, 43, 987–998.
- Keltner, D., & Gross, J. J. (1999). Functional accounts of emotions. Cognition and Emotion, 13, 467–480.
- Keltner, D., & Haidt, J. (2003). Approaching awe, a moral, spiritual, and aesthetic emotion. Cognition and Emotion, 17, 297–314.
- Krapp, A. (1999). Interest, motivation and learning: An educational-psychological perspective. European Journal of Psychology of Education, 14, 23–40.
- Krapp, A. (2002). An educational-psychological theory of interest and its relation to self-determination theory. In E. L. Deci & R. M. Ryan (Eds.), Handbook of self-determination research (pp. 405–427). Rochester, NY: University of Rochester Press.
- Lazarus, R. S. (1991). Emotion and adaptation. New York, NY: Oxford University Press.
- Maruskin, L. A., Thrash, T. M., & Elliot, A. J. (2012). The chills as a psychological construct: Content universe, factor structure, affective composition, elicitors, trait antecedents, and consequences. Journal of Personality and Social Psychology, 103, 135–157.
- McCrae, R. R. (1996). Social consequences of experiential openness. Psychological Bulletin, 120, 323–337.
- McCrae, R. R., & Sutin, A. R. (2009). Openness to experience. In M. R. Leary & R. H. Hoyle (Eds.), Handbook of individual differences in social behavior (pp. 257–273). New York, NY: Guilford.
- Nusbaum, E. C., & Silvia, P. J. (2011). Shivers and timbres: Personality and the experience of chills from music. Social Psychological and Personality Science, 2, 199–204.
- Nusbaum, E. C., & Silvia, P. J. (in press). Unusual aesthetic states. In P. Tinio & J. Smith (Eds.), Cambridge handbook of the psychology of aesthetics and the arts. New York, NY: Cambridge University Press.
- Parrott, W. G. (2001). Implications of dysfunctional emotions for understanding how emotions function. Review of General Psychology, 5, 180–186.
- Peters, R. A. (1978). Effects of anxiety, curiosity, and perceived instructor threat on student verbal behavior in the college classroom. Journal of Educational Psychology, 70, 388–395.
- Raine, A., Reynolds, C., Venables, P. H., & Mednick, S. A. (2002). Stimulation-seeking and intelligence: A prospective longitudinal study. Journal of Personality and Social Psychology, 82, 663–674.
- Rozin, P., & Cohen, A. B. (2003). High frequency of facial expressions corresponding to confusion, concentration, and worry in an analysis of naturally occurring facial expressions of Americans. Emotion, 3, 68–75.
- Sadoski, M. (2001). Resolving the effects of concreteness on interest, comprehension, and learning important ideas from text. Educational Psychology Review, 13, 263–281.
- Scherer, K. R. (2001). Appraisal considered as a process of multilevel sequential checking. In K. R. Scherer, A. Schorr, & T. Johnstone (Eds.), Appraisal processes in emotion: Theory, methods, research (pp. 92–120). New York, NY: Oxford University Press.
- Schiefele, U. (1999). Interest and learning from text. Scientific Studies of Reading, 3, 257–279.
- Schiefele, U., Krapp, A., & Winteler, A. (1992). Interest as a predictor of academic achievement: A meta-analysis of research. In K. A. Renninger, S. Hidi, & A. Krapp (Eds.), The role of interest in learning and development (pp. 183–212). Hillsdale, NJ: Lawrence Erlbaum Associates.
- Shiota, M. N., Keltner, D., & John, O. P. (2006). Positive emotion dispositions differentially associated with Big Five personality and attachment style. Journal of Positive Psychology, 1, 61–71.
- Shiota, M. N., Keltner, D., & Mossman, A. (2007). The nature of awe: Elicitors, appraisals, and effects on self-concept. Cognition and Emotion, 21, 944–963.
- Silvia, P. J. (2013). Interested experts, confused novices: Art expertise and the knowledge emotions. Empirical Studies of the Arts, 31, 107–116.
- Silvia, P. J. (2012). Curiosity and motivation. In R. M. Ryan (Ed.), The Oxford handbook of human motivation (pp. 157–166). New York, NY: Oxford University Press.
- Silvia, P. J. (2010). Confusion and interest: The role of knowledge emotions in aesthetic experience. Psychology of Aesthetics, Creativity, and the Arts, 4, 75–80.
- Silvia, P. J. (2008). Interest—The curious emotion. Current Directions in Psychological Science, 17, 57–60.
- Silvia, P. J. (2006). Exploring the psychology of interest. New York, NY: Oxford University Press.
- Silvia, P. J. (2005). What is interesting? Exploring the appraisal structure of interest. Emotion, 5, 89–102.
- Silvia, P. J., & Berg, C. (2011). Finding movies interesting: How expertise and appraisals influence the aesthetic experience of film. Empirical Studies of the Arts, 29, 73–88.
- Silvia, P. J., & Nusbaum, E. C. (2011). On personality and piloerection: Individual differences in aesthetic chills and other unusual aesthetic experiences. Psychology of Aesthetics, Creativity, and the Arts, 5, 208–214.
- Simons, R. C. (1996). Boo! Culture, experience, and the startle reflex. New York, NY: Oxford University Press.
- Smith, L. F., Smith, J. K., Arcand, K. K., Smith, R. K., Bookbinder, J., & Keach, K. (2011). Aesthetics and astronomy: Studying the public’s perception and understanding of imagery from space. Science Communication, 33, 201–238.
- Teigen, K. H., & Keren, G. (2003). Surprises: Low probabilities or high contrasts? Cognition, 87, 55–71.
- VanLehn, K., Siler, S., Murray, C., Yamauchi, T., & Baggett, W. (2003). Why do only some events cause learning during human tutoring? Cognition and Instruction, 21, 209–249.
How to cite this Chapter using APA Style:
Silvia, P. (2019). Knowledge emotions: feelings that foster learning, exploring, and reflecting. In R. Biswas-Diener & E. Diener (Eds), Noba textbook series: Psychology. Champaign, IL: DEF publishers. Retrieved from http://noba.to/f7rvqp54
Copyright and Acknowledgment:
This material is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit: http://creativecommons.org/licenses/by-nc-sa/4.0/deed.en_US.
This material is attributed to the Diener Education Fund (copyright © 2018) and can be accessed via this link: http://noba.to/f7rvqp54
Additional information about the Diener Education Fund (DEF) can be accessed here.
Conditioning and Learning
Original chapter by Mark E. Bouton adapted by the Queen's University Psychology Department
This Open Access chapter was originally written for the NOBA project. Information on the NOBA project can be found below.
Basic principles of learning are always operating and always influencing human behavior. This module discusses the two most fundamental forms of learning -- classical (Pavlovian) and instrumental (operant) conditioning. Through them, we respectively learn to associate 1) stimuli in the environment, or 2) our own behaviors, with significant events, such as rewards and punishments. The two types of learning have been intensively studied because they have powerful effects on behavior, and because they provide methods that allow scientists to analyze learning processes rigorously. This module describes some of the most important things you need to know about classical and instrumental conditioning, and it illustrates some of the many ways they help us understand normal and disordered behavior in humans. The module concludes by introducing the concept of observational learning, which is a form of learning that is largely distinct from classical and operant conditioning.
Learning Objectives
- Distinguish between classical (Pavlovian) conditioning and instrumental (operant) conditioning.
- Understand some important facts about each that tell us how they work.
- Understand how they work separately and together to influence human behavior in the world outside the laboratory.
- List the four aspects of observational learning according to Social Learning Theory.
Two Types of Conditioning
Although Ivan Pavlov won a Nobel Prize for studying digestion, he is much more famous for something else: working with a dog, a bell, and a bowl of saliva. Many people are familiar with the classic study of “Pavlov’s dog,” but rarely do they understand the significance of its discovery. In fact, Pavlov’s work helps explain why some people get anxious just looking at a crowded bus, why the sound of a morning alarm is so hated, and even why we swear off certain foods we’ve only tried once. Classical (or Pavlovian) conditioning is one of the fundamental ways we learn about the world around us. But it is far more than just a theory of learning; it is also arguably a theory of identity. For, once you understand classical conditioning, you’ll recognize that your favorite music, clothes, even political candidate, might all be a result of the same process that makes a dog drool at the sound of a bell.
Around the turn of the 20th century, scientists who were interested in understanding the behavior of animals and humans began to appreciate the importance of two very basic forms of learning. One, which was first studied by the Russian physiologist Ivan Pavlov, is known as classical, or Pavlovian conditioning. In his famous experiment, Pavlov rang a bell and then gave a dog some food. After repeating this pairing multiple times, the dog eventually treated the bell as a signal for food, and began salivating in anticipation of the treat. This kind of result has been reproduced in the lab using a wide range of signals (e.g., tones, light, tastes, settings) paired with many different events besides food (e.g., drugs, shocks, illness; see below).
We now believe that this same learning process is engaged, for example, when humans associate a drug they’ve taken with the environment in which they’ve taken it; when they associate a stimulus (e.g., a symbol for vacation, like a big beach towel) with an emotional event (like a burst of happiness); and when they associate the flavor of a food with getting food poisoning. Although classical conditioning may seem “old” or “too simple” a theory, it is still widely studied today for at least two reasons: First, it is a straightforward test of associative learning that can be used to study other, more complex behaviors. Second, because classical conditioning is always occurring in our lives, its effects on behavior have important implications for understanding normal and disordered behavior in humans.
In a general way, classical conditioning occurs whenever neutral stimuli are associated with psychologically significant events. With food poisoning, for example, although having fish for dinner may not normally be something to be concerned about (i.e., a “neutral stimulus”), if it causes you to get sick, you will now likely associate that neutral stimulus (the fish) with the psychologically significant event of getting sick. These paired events are often described using terms that can be applied to any situation.
The dog food in Pavlov’s experiment is called the unconditioned stimulus (US) because it elicits an unconditioned response (UR). That is, without any kind of “training” or “teaching,” the stimulus produces a natural or instinctual reaction. In Pavlov’s case, the food (US) automatically makes the dog drool (UR). Other examples of unconditioned stimuli include loud noises (US) that startle us (UR), or a hot shower (US) that produces pleasure (UR).
On the other hand, a conditioned stimulus produces a conditioned response. A conditioned stimulus (CS) is a signal that has no importance to the organism until it is paired with something that does have importance. For example, in Pavlov’s experiment, the bell is the conditioned stimulus. Before the dog has learned to associate the bell (CS) with the presence of food (US), hearing the bell means nothing to the dog. However, after multiple pairings of the bell with the presentation of food, the dog starts to drool at the sound of the bell. This drooling in response to the bell is the conditioned response (CR). Although it can be confusing, the conditioned response is almost always the same as the unconditioned response. However, it is called the conditioned response because it is conditional on (or, depends on) being paired with the conditioned stimulus (e.g., the bell). To help make this clearer, consider becoming really hungry when you see the logo for a fast food restaurant. There’s a good chance you’ll start salivating. Although it is the actual eating of the food (US) that normally produces the salivation (UR), simply seeing the restaurant’s logo (CS) can trigger the same reaction (CR).
Another example you are probably very familiar with involves your alarm clock. If you’re like most people, waking up early usually makes you unhappy. In this case, waking up early (US) produces a natural sensation of grumpiness (UR). Rather than waking up early on your own, though, you likely have an alarm clock that plays a tone to wake you. Before setting your alarm to that particular tone, let’s imagine you had neutral feelings about it (i.e., the tone had no prior meaning for you). However, now that you use it to wake up every morning, you psychologically “pair” that tone (CS) with your feelings of grumpiness in the morning (UR). After enough pairings, this tone (CS) will automatically produce your natural response of grumpiness (CR). Thus, this linkage between the unconditioned stimulus (US; waking up early) and the conditioned stimulus (CS; the tone) is so strong that the unconditioned response (UR; being grumpy) will become a conditioned response (CR; e.g., hearing the tone at any point in the day—whether waking up or walking down the street—will make you grumpy). Modern studies of classical conditioning use a very wide range of CSs and USs and measure a wide range of conditioned responses.
Although classical conditioning is a powerful explanation for how we learn many different things, there is a second form of conditioning that also helps explain how we learn. First studied by Edward Thorndike, and later extended by B. F. Skinner, this second type of conditioning is known as instrumental or operant conditioning. Operant conditioning occurs when a behavior (as opposed to a stimulus) is associated with the occurrence of a significant event. In the best-known example, a rat in a laboratory learns to press a lever in a cage (called a “Skinner box”) to receive food. Because the rat has no “natural” association between pressing a lever and getting food, the rat has to learn this connection. At first, the rat may simply explore its cage, climbing on top of things, burrowing under things, in search of food. Eventually while poking around its cage, the rat accidentally presses the lever, and a food pellet drops in. This voluntary behavior is called an operant behavior, because it “operates” on the environment (i.e., it is an action that the animal itself makes).
Now, once the rat recognizes that it receives a piece of food every time it presses the lever, the behavior of lever-pressing becomes reinforced. That is, the food pellets serve as reinforcers because they strengthen the rat’s desire to engage with the environment in this particular manner. In a parallel example, imagine that you’re playing a street-racing video game. As you drive through one city course multiple times, you try a number of different streets to get to the finish line. On one of these trials, you discover a shortcut that dramatically improves your overall time. You have learned this new path through operant conditioning. That is, by engaging with your environment (operant responses), you performed a sequence of behaviors that was positively reinforced (i.e., you found the shortest distance to the finish line). And now that you’ve learned how to drive this course, you will perform that same sequence of driving behaviors (just as the rat presses on the lever) to receive your reward of a faster finish.
Operant conditioning research studies how the effects of a behavior influence the probability that it will occur again. For example, the effects of the rat’s lever-pressing behavior (i.e., receiving a food pellet) influences the probability that it will keep pressing the lever. For, according to Thorndike’s law of effect, when a behavior has a positive (satisfying) effect or consequence, it is likely to be repeated in the future. However, when a behavior has a negative (painful/annoying) consequence, it is less likely to be repeated in the future. Effects that increase behaviors are referred to as reinforcers, and effects that decrease them are referred to as punishers.
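As a rough illustration of how the law of effect plays out over repeated trials, here is a minimal Python sketch (not from the chapter); the starting probability and step size are arbitrary assumptions, and the point is only that consequences shift the future probability of the behavior that produced them.

```python
# A toy illustration (not from the chapter) of Thorndike's law of effect:
# a consequence changes the future probability of the behavior that
# produced it. Starting probability and step size are arbitrary assumptions.
import random

random.seed(0)
p_press = 0.1  # initial probability that the rat presses the lever
STEP = 0.05    # how much a single consequence shifts that probability

# Pressing produces food (a reinforcer), so the behavior strengthens.
for _ in range(200):
    if random.random() < p_press:          # the operant is emitted...
        p_press = min(1.0, p_press + STEP) # ...and the reinforcer strengthens it

print(f"p(press) after reinforcement: {p_press:.2f}")  # climbs toward 1.00

# Swap the consequence for a punisher and the same behavior weakens.
for _ in range(200):
    if random.random() < p_press:          # pressing now has a negative effect...
        p_press = max(0.0, p_press - STEP) # ...so the punisher weakens it

print(f"p(press) after punishment: {p_press:.2f}")     # falls back toward 0.00
```

The symmetry in the sketch mirrors the definitions in the paragraph above: the same behavior becomes more probable under a reinforcer and less probable under a punisher.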
An everyday example that helps to illustrate operant conditioning is striving for a good grade in class—which could be considered a reward for students (i.e., it produces a positive emotional response). In order to get that reward (similar to the rat learning to press the lever), the student needs to modify their behavior. For example, the student may learn that speaking up in class gets him/her participation points (a reinforcer), so the student speaks up repeatedly. However, the student also learns that s/he shouldn’t speak up about just anything; talking about topics unrelated to school actually costs points. Therefore, through the student’s freely chosen behaviors, s/he learns which behaviors are reinforced and which are punished.
An important distinction of operant conditioning is that it provides a method for studying how consequences influence “voluntary” behavior. The rat’s decision to press the lever is voluntary, in the sense that the rat is free to make and repeat that response whenever it wants. Classical conditioning, on the other hand, is just the opposite—depending instead on “involuntary” behavior (e.g., the dog doesn’t choose to drool; it just does). So, whereas the rat must actively participate and perform some kind of behavior to attain its reward, the dog in Pavlov’s experiment is a passive participant. One of the lessons of operant conditioning research, then, is that voluntary behavior is strongly influenced by its consequences.
The illustration above summarizes the basic elements of classical and instrumental conditioning. The two types of learning differ in many ways. However, modern thinkers often emphasize the fact that they differ—as illustrated here—in what is learned. In classical conditioning, the animal behaves as if it has learned to associate a stimulus with a significant event. In operant conditioning, the animal behaves as if it has learned to associate a behavior with a significant event. Another difference is that the response in the classical situation (e.g., salivation) is elicited by a stimulus that comes before it, whereas the response in the operant case is not elicited by any particular stimulus. Instead, operant responses are said to be emitted. The word “emitted” further conveys the idea that operant behaviors are essentially voluntary in nature.
Understanding classical and operant conditioning provides psychologists with many tools for understanding learning and behavior in the world outside the lab. This is in part because the two types of learning occur continuously throughout our lives. It has been said that “much like the laws of gravity, the laws of learning are always in effect” (Spreat & Spreat, 1982).
Useful Things to Know about Classical Conditioning
Classical Conditioning Has Many Effects on Behavior
A classical CS (e.g., the bell) does not merely elicit a simple, unitary reflex. Pavlov emphasized salivation because that was the only response he measured. But his bell almost certainly elicited a whole system of responses that functioned to get the organism ready for the upcoming US (food) (see Timberlake, 2001). For example, in addition to salivation, CSs (such as the bell) that signal that food is near also elicit the secretion of gastric acid, pancreatic enzymes, and insulin (which gets blood glucose into cells). All of these responses prepare the body for digestion. Additionally, the CS elicits approach behavior and a state of excitement. And presenting a CS for food can also cause animals whose stomachs are full to eat more food if it is available. In fact, food CSs are so prevalent in modern society that humans are likewise inclined to eat or feel hungry in response to cues associated with food, such as the sound of a bag of potato chips opening, the sight of a well-known logo (e.g., Coca-Cola), or the feel of the couch in front of the television.
Classical conditioning is also involved in other aspects of eating. Flavors associated with certain nutrients (such as sugar or fat) can become preferred without arousing any awareness of the pairing. For example, protein is a US that your body automatically craves more of once you start to consume it (UR): since proteins are highly concentrated in meat, the flavor of meat becomes a CS (a cue that proteins are on the way), which perpetuates the cycle of craving for yet more meat (this automatic bodily reaction is now a CR).
In a similar way, flavors associated with stomach pain or illness become avoided and disliked. For example, a person who gets sick after drinking too much tequila may acquire a profound dislike of the taste and odor of tequila—a phenomenon called taste aversion conditioning. The fact that flavors are often associated with so many consequences of eating is important for animals (including rats and humans) that are frequently exposed to new foods. And it is clinically relevant. For example, drugs used in chemotherapy often make cancer patients sick. As a consequence, patients often acquire aversions to foods eaten just before treatment, or even aversions to such things as the waiting room of the chemotherapy clinic itself (see Bernstein, 1991; Scalera & Bavieri, 2009).
Classical conditioning occurs with a variety of significant events. If an experimenter sounds a tone just before applying a mild shock to a rat’s feet, the tone will elicit fear or anxiety after one or two pairings. Similar fear conditioning plays a role in creating many anxiety disorders in humans, such as phobias and panic disorders, where people associate cues (such as closed spaces, or a shopping mall) with panic or other emotional trauma (see Mineka & Zinbarg, 2006). Here, rather than a physical response (like drooling), the CS triggers an emotion.
Another interesting effect of classical conditioning can occur when we ingest drugs. That is, when a drug is taken, it can be associated with the cues that are present at the same time (e.g., rooms, odors, drug paraphernalia). In this regard, if someone associates a particular smell with the sensation induced by the drug, whenever that person smells the same odor afterward, it may cue responses (physical and/or emotional) related to taking the drug itself. But drug cues have an even more interesting property: They elicit responses that often “compensate” for the upcoming effect of the drug (see Siegel, 1989). For example, morphine itself suppresses pain; however, if someone is used to taking morphine, a cue that signals the “drug is coming soon” can actually make the person more sensitive to pain. Because the person knows a pain suppressant will soon be administered, the body becomes more sensitive, anticipating that “the drug will soon take care of it.” Remarkably, such conditioned compensatory responses in turn decrease the impact of the drug on the body—because the body has become more sensitive to pain.
This conditioned compensatory response has many implications. For instance, a drug user will be most “tolerant” to the drug in the presence of cues that have been associated with it (because such cues elicit compensatory responses). As a result, overdose is usually not due to an increase in dosage, but to taking the drug in a new place without the familiar cues—which would have otherwise allowed the user to tolerate the drug (see Siegel, Hinson, Krank, & McCully, 1982). Conditioned compensatory responses (which include heightened pain sensitivity and decreased body temperature, among others) might also cause discomfort, thus motivating the drug user to continue usage of the drug to reduce them. This is one of several ways classical conditioning might be a factor in drug addiction and dependence.
A final effect of classical cues is that they motivate ongoing operant behavior (see Balleine, 2005). For example, if a rat has learned via operant conditioning that pressing a lever will give it a drug, in the presence of cues that signal the “drug is coming soon” (like the sound of the lever squeaking), the rat will work harder to press the lever than if those cues weren’t present (i.e., there is no squeaking lever sound). Similarly, in the presence of food-associated cues (e.g., smells), a rat (or an overeater) will work harder for food. And finally, even in the presence of negative cues (like something that signals fear), a rat, a human, or any other organism will work harder to avoid those situations that might lead to trauma. Classical CSs thus have many effects that can contribute to significant behavioral phenomena.
The Learning Process
As mentioned earlier, classical conditioning provides a method for studying basic learning processes. Somewhat counterintuitively, though, studies show that pairing a CS and a US together is not sufficient for an association to be learned between them. Consider an effect called blocking (see Kamin, 1969). In this effect, an animal first learns to associate one CS—call it stimulus A—with a US. In the illustration above, the sound of a bell (stimulus A) is paired with the presentation of food. Once this association is learned, in a second phase, a second stimulus—stimulus B—is presented alongside stimulus A, such that the two stimuli are paired with the US together. In the illustration, a light is added and turned on at the same time the bell is rung. However, because the animal has already learned the association between stimulus A (the bell) and the food, the animal doesn’t learn an association between stimulus B (the light) and the food. That is, the conditioned response only occurs during the presentation of stimulus A, because the earlier conditioning of A “blocks” the conditioning of B when B is added to A. The reason? Stimulus A already predicts the US, so the US is not surprising when it occurs with Stimulus B.
Learning depends on such a surprise, or a discrepancy between what occurs on a conditioning trial and what is already predicted by cues that are present on the trial. To learn something through classical conditioning, there must first be some prediction error, or the chance that a conditioned stimulus won't lead to the expected outcome. With the example of the bell and the light, because the bell always leads to the reward of food, there's no "prediction error" that the addition of the light helps to correct. However, if the researcher suddenly requires that the bell and the light both occur in order to receive the food, the bell alone will now produce a prediction error (food is predicted but does not arrive), and the animal will have to learn the new arrangement.
Blocking and other related effects indicate that the learning process tends to take in the most valid predictors of significant events and ignore the less useful ones. This is common in the real world. For example, imagine that your supermarket puts big star-shaped stickers on products that are on sale. Quickly, you learn that items with the big star-shaped stickers are cheaper. However, imagine you go into a similar supermarket that not only uses these stickers, but also uses bright orange price tags to denote a discount. Because of blocking (i.e., you already know that the star-shaped stickers indicate a discount), you don't have to learn the color system, too. The star-shaped stickers tell you everything you need to know (i.e., there's no prediction error for the discount), and thus the color system is irrelevant.
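This prediction-error idea can be made concrete in a few lines of code. The sketch below is a minimal, illustrative implementation of the learning rule associated with the Rescorla-Wagner model (Rescorla & Wagner, 1972), applied to the bell-and-light example; the learning rate, US strength, and trial counts are arbitrary assumptions chosen only to demonstrate blocking, not values from this module.

```python
# A minimal sketch of the Rescorla-Wagner learning rule, illustrating blocking.
# Illustrative assumptions: learning rate (ALPHA) of 0.3, US strength (LAMBDA)
# of 1.0, and 20 trials per phase; these values are arbitrary.

ALPHA = 0.3   # learning rate
LAMBDA = 1.0  # maximum associative strength the US can support

def train(weights, cues, trials):
    """Update the associative strength of each present cue on every trial."""
    for _ in range(trials):
        prediction = sum(weights[c] for c in cues)  # what the present cues predict
        error = LAMBDA - prediction                 # prediction error (surprise)
        for c in cues:
            weights[c] += ALPHA * error             # the shared error drives learning

weights = {"bell": 0.0, "light": 0.0}

train(weights, ["bell"], trials=20)           # Phase 1: bell alone predicts food
train(weights, ["bell", "light"], trials=20)  # Phase 2: bell and light together

# By Phase 2 the bell already predicts the US, so the error is near zero and
# the light acquires almost no associative strength: it is "blocked."
print(f"bell:  {weights['bell']:.2f}")   # close to 1.0
print(f"light: {weights['light']:.2f}")  # close to 0.0
```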
Classical conditioning is strongest if the CS and US are intense or salient. It is also best if the CS and US are relatively new and the organism hasn’t been frequently exposed to them before. And it is especially strong if the organism’s biology has prepared it to associate a particular CS and US. For example, rats and humans are naturally inclined to associate an illness with a flavor, rather than with a light or tone. Because foods are most commonly experienced by taste, if there is a particular food that makes us ill, associating the flavor (rather than the appearance—which may be similar to other foods) with the illness will more greatly ensure we avoid that food in the future, and thus avoid getting sick. This sorting tendency, which is set up by evolution, is called preparedness.
There are many factors that affect the strength of classical conditioning, and these have been the subject of much research and theory (see Rescorla & Wagner, 1972; Pearce & Bouton, 2001). Behavioral neuroscientists have also used classical conditioning to investigate many of the basic brain processes that are involved in learning (see Fanselow & Poulos, 2005; Thompson & Steinmetz, 2009).
Erasing Classical Learning
After conditioning, the response to the CS can be eliminated if the CS is presented repeatedly without the US. This effect is called extinction, and the response is said to become "extinguished." For example, if Pavlov kept ringing the bell but never gave the dog any food afterward, eventually the dog's CR (drooling) would no longer happen when it heard the CS (the bell), because the bell would no longer be a predictor of food. Extinction is important for many reasons. For one thing, it is the basis for many therapies that clinical psychologists use to eliminate maladaptive and unwanted behaviors. Take the example of a person who has a debilitating fear of spiders: one approach might include systematic exposure to spiders. Whereas initially the person has a CR (e.g., extreme fear) every time s/he sees the CS (e.g., the spider), after repeatedly being shown pictures of spiders in neutral conditions, the CS soon stops predicting the CR (i.e., the person no longer has the fear reaction when seeing spiders, having learned that spiders no longer serve as a "cue" for that fear). Here, repeated exposure to spiders without an aversive consequence causes extinction.
Psychologists must accept one important fact about extinction, however: it does not necessarily destroy the original learning (see Bouton, 2004). For example, imagine you strongly associate the smell of chalkboards with the agony of middle school detention. Now imagine that, after years of encountering chalkboards, the smell of them no longer recalls the agony of detention (an example of extinction). However, one day, after entering a new building for the first time, you suddenly catch a whiff of a chalkboard and WHAM!, the agony of detention returns. This is called spontaneous recovery: following a lapse in exposure to the CS after extinction has occurred, sometimes re-exposure to the CS (e.g., the smell of chalkboards) can evoke the CR again (e.g., the agony of detention).
Another related phenomenon is the renewal effect: After extinction, if the CS is tested in a new context, such as a different room or location, the CR can also return. In the chalkboard example, the action of entering a new building—where you don’t expect to smell chalkboards—suddenly renews the sensations associated with detention. These effects have been interpreted to suggest that extinction inhibits rather than erases the learned behavior, and this inhibition is mainly expressed in the context in which it is learned (see “context” in the Key Vocabulary section below).
This does not mean that extinction is a bad treatment for behavior disorders. Instead, clinicians can increase its effectiveness by using basic research on learning to help defeat these relapse effects (see Craske et al., 2008). For example, conducting extinction therapies in contexts where patients might be most vulnerable to relapsing (e.g., at work) might be a good strategy for enhancing the therapy's success.
Useful Things to Know about Instrumental Conditioning
Most of the things that affect the strength of classical conditioning also affect the strength of instrumental learning—whereby we learn to associate our actions with their outcomes. As noted earlier, the “bigger” the reinforcer (or punisher), the stronger the learning. And, if an instrumental behavior is no longer reinforced, it will also be extinguished. Most of the rules of associative learning that apply to classical conditioning also apply to instrumental learning, but other facts about instrumental learning are also worth knowing.
Instrumental Responses Come Under Stimulus Control
As you know, the classic operant response in the laboratory is lever-pressing in rats, reinforced by food. However, things can be arranged so that lever-pressing only produces pellets when a particular stimulus is present. For example, lever-pressing can be reinforced only when a light in the Skinner box is turned on; when the light is off, no food is released from lever-pressing. The rat soon learns to discriminate between the light-on and light-off conditions, and presses the lever only in the presence of the light (responses in light-off are extinguished). In everyday life, think about waiting in the turn lane at a traffic light. Although you know that green means go, only when you have the green arrow do you turn. In this regard, the operant behavior is now said to be under stimulus control. And, as is the case with the traffic light, in the real world, stimulus control is probably the rule.
The stimulus controlling the operant response is called a discriminative stimulus. It can be associated directly with the response or the reinforcer (see below). However, it usually does not elicit the response the way a classical CS does. Instead, it is said to "set the occasion for" the operant response. For example, a canvas put in front of an artist does not elicit painting behavior or compel her to paint. It allows, or sets the occasion for, painting to occur.
Stimulus-control techniques are widely used in the laboratory to study perception and other psychological processes in animals. For example, the rat would not be able to respond appropriately to light-on and light-off conditions if it could not see the light. Following this logic, experiments using stimulus-control methods have tested how well animals see colors, hear ultrasounds, and detect magnetic fields. That is, researchers pair these discriminative stimuli with responses the animals already know how to make (such as pressing the lever). In this way, the researchers can test whether the animals can learn to press the lever only when an ultrasound is played, for example.
These methods can also be used to study “higher” cognitive processes. For example, pigeons can learn to peck at different buttons in a Skinner box when pictures of flowers, cars, chairs, or people are shown on a miniature TV screen (see Wasserman, 1995). Pecking button 1 (and no other) is reinforced in the presence of a flower image, button 2 in the presence of a chair image, and so on. Pigeons can learn the discrimination readily, and, under the right conditions, will even peck the correct buttons associated with pictures of new flowers, cars, chairs, and people they have never seen before. The birds have learned to categorize the sets of stimuli. Stimulus-control methods can be used to study how such categorization is learned.
Operant Conditioning Involves Choice
Another thing to know about operant conditioning is that the response always requires choosing one behavior over others. The student who goes to the bar on Thursday night chooses to drink instead of staying at home and studying. The rat chooses to press the lever instead of sleeping or scratching its ear in the back of the box. The alternative behaviors are each associated with their own reinforcers. And the tendency to perform a particular action depends on both the reinforcers earned for it and the reinforcers earned for its alternatives.
To investigate this idea, choice has been studied in the Skinner box by making two levers available for the rat (or two buttons available for the pigeon), each of which has its own reinforcement or payoff rate. A thorough study of choice in situations like this has led to a rule called the quantitative law of effect (see Herrnstein, 1970), which can be understood without going into quantitative detail: The law acknowledges the fact that the effects of reinforcing one behavior depend crucially on how much reinforcement is earned for the behavior’s alternatives. For example, if a pigeon learns that pecking one light will reward two food pellets, whereas the other light only rewards one, the pigeon will only peck the first light. However, what happens if the first light is more strenuous to reach than the second one? Will the cost of energy outweigh the bonus of food? Or will the extra food be worth the work? In general, a given reinforcer will be less reinforcing if there are many alternative reinforcers in the environment. For this reason, alcohol, sex, or drugs may be less powerful reinforcers if the person’s environment is full of other sources of reinforcement, such as achievement at work or love from family members.
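For readers who do want a taste of the quantitative detail, one common statement of Herrnstein's (1970) law is a hyperbola: the rate of a behavior depends on the reinforcement it earns relative to all the reinforcement available. The sketch below is illustrative only; the constant k (maximum response rate) and the reinforcement rates plugged in are assumed values chosen to make the point, not data.

```python
# A minimal sketch of Herrnstein's quantitative law of effect:
#   B = k * R / (R + Re)
# where B is the rate of the target behavior, R is the reinforcement it earns,
# and Re is the reinforcement earned from all alternative behaviors.
# k and the rates below are illustrative assumptions.

def response_rate(R, Re, k=100.0):
    """Predicted rate of an operant response under Herrnstein's hyperbola."""
    return k * R / (R + Re)

R = 10.0  # reinforcement earned by the target behavior (arbitrary units)

# The same reinforcer supports much less behavior when the environment
# offers many alternative sources of reinforcement (larger Re):
for Re in (1.0, 10.0, 100.0):
    print(f"Re = {Re:5.1f} -> response rate = {response_rate(R, Re):5.1f}")
# Re =   1.0 -> response rate =  90.9
# Re =  10.0 -> response rate =  50.0
# Re = 100.0 -> response rate =   9.1
```

This is the formal version of the point in the paragraph above: holding the target behavior's reinforcement fixed, richer alternatives (a larger Re) shrink the behavior the reinforcer can sustain.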
Cognition in Instrumental Learning
Modern research also indicates that reinforcers do more than merely strengthen or “stamp in” the behaviors they are a consequence of, as was Thorndike’s original view. Instead, animals learn about the specific consequences of each behavior, and will perform a behavior depending on how much they currently want—or “value”—its consequence.
This idea is best illustrated by a phenomenon called the reinforcer devaluation effect (see Colwill & Rescorla, 1986). A rat is first trained to perform two instrumental actions (e.g., pressing a lever on the left, and on the right), each paired with a different reinforcer (e.g., a sweet sucrose solution, and a food pellet). At the end of this training, the rat tends to press both levers, alternating between the sucrose solution and the food pellet. In a second phase, one of the reinforcers (e.g., the sucrose) is then separately paired with illness. This conditions a taste aversion to the sucrose. In a final test, the rat is returned to the Skinner box and allowed to press either lever freely. No reinforcers are presented during this test (i.e., no sucrose or food comes from pressing the levers), so behavior during testing can only result from the rat’s memory of what it has learned earlier. Importantly here, the rat chooses not to perform the response that once produced the reinforcer that it now has an aversion to (e.g., it won’t press the sucrose lever). This means that the rat has learned and remembered the reinforcer associated with each response, and can combine that knowledge with the knowledge that the reinforcer is now “bad.” Reinforcers do not merely stamp in responses; the animal learns much more than that. The behavior is said to be “goal-directed” (see Dickinson & Balleine, 1994), because it is influenced by the current value of its associated goal (i.e., how much the rat wants/doesn’t want the reinforcer).
Things can get more complicated, however, if the rat performs the instrumental actions frequently and repeatedly. That is, if the rat has spent many months learning the value of pressing each of the levers, the act of pressing them becomes automatic and routine. And here, this once goal-directed action (i.e., the rat pressing the lever for the goal of getting sucrose/food) can become a habit. Thus, if a rat spends many months performing the lever-pressing behavior (turning such behavior into a habit), even when sucrose is again paired with illness, the rat will continue to press that lever (see Holland, 2004). After all the practice, the instrumental response (pressing the lever) is no longer sensitive to reinforcer devaluation. The rat continues to respond automatically, regardless of the fact that the sucrose from this lever makes it sick.
Habits are very common in human experience, and can be useful. You do not need to relearn each day how to make your coffee in the morning or how to brush your teeth. Instrumental behaviors can eventually become habitual, letting us get the job done while being free to think about other things.
Putting Classical and Instrumental Conditioning Together
Classical and operant conditioning are usually studied separately. But outside of the laboratory they almost always occur at the same time. For example, a person who is reinforced for drinking alcohol or eating excessively learns these behaviors in the presence of certain stimuli—a pub, a set of friends, a restaurant, or possibly the couch in front of the TV. These stimuli are also available for association with the reinforcer. In this way, classical and operant conditioning are always intertwined.
The figure below summarizes this idea, and helps review what we have discussed in this module. Generally speaking, any reinforced or punished operant response (R) is paired with an outcome (O) in the presence of some stimulus or set of stimuli (S).
The figure illustrates the types of associations that can be learned in this very general scenario. For one thing, the organism will learn to associate the response and the outcome (R – O). This is instrumental conditioning. The learning process here is probably similar to classical conditioning, with all its emphasis on surprise and prediction error. And, as we discussed while considering the reinforcer devaluation effect, once R – O is learned, the organism will be ready to perform the response if the outcome is desired or valued. The value of the reinforcer can also be influenced by other reinforcers earned for other behaviors in the situation. These factors are at the heart of instrumental learning.
Second, the organism can also learn to associate the stimulus with the reinforcing outcome (S – O). This is the classical conditioning component, and as we have seen, it can have many consequences for behavior. For one thing, the stimulus will come to evoke a system of responses that help the organism prepare for the reinforcer (not shown in the figure): The drinker may undergo changes in body temperature; the eater may salivate and have an increase in insulin secretion. In addition, the stimulus will evoke approach (if the outcome is positive) or retreat (if the outcome is negative). Presenting the stimulus will also prompt the instrumental response.
The third association in the diagram is the one between the stimulus and the response (S – R). As discussed earlier, after a lot of practice, the stimulus may begin to elicit the response directly. This is habit learning, whereby the response occurs relatively automatically, without much mental processing of the relation between the action and the outcome and the outcome’s current value.
The final link in the figure is between the stimulus and the response-outcome association [S – (R – O)]. More than just entering into a simple association with the R or the O, the stimulus can signal that the R – O relationship is now in effect. This is what we mean when we say that the stimulus can “set the occasion” for the operant response: It sets the occasion for the response-reinforcer relationship. Through this mechanism, the painter might begin to paint when given the right tools and the opportunity enabled by the canvas. The canvas theoretically signals that the behavior of painting will now be reinforced by positive consequences.
The figure provides a framework that you can use to understand almost any learned behavior you observe in yourself, your family, or your friends. If you would like to understand it more deeply, consider taking a course on learning in the future, which will give you a fuller appreciation of how classical learning, instrumental learning, habit learning, and occasion setting actually work and interact.
Observational Learning
Not all forms of learning are accounted for entirely by classical and operant conditioning. Imagine a child walking up to a group of children playing a game on the playground. The game looks fun, but it is new and unfamiliar. Rather than joining the game immediately, the child opts to sit back and watch the other children play a round or two. Observing the others, the child takes note of the ways in which they behave while playing the game. By watching the behavior of the other kids, the child can figure out the rules of the game and even some strategies for doing well at the game. This is called observational learning.
Observational learning is a component of Albert Bandura’s Social Learning Theory (Bandura, 1977), which posits that individuals can learn novel responses via observation of key others’ behaviors. Observational learning does not necessarily require reinforcement, but instead hinges on the presence of others, referred to as social models. Social models are typically of higher status or authority compared to the observer, examples of which include parents, teachers, and police officers. In the example above, the children who already know how to play the game could be thought of as being authorities—and are therefore social models—even though they are the same age as the observer. By observing how the social models behave, an individual is able to learn how to act in a certain situation. Other examples of observational learning might include a child learning to place her napkin in her lap by watching her parents at the dinner table, or a customer learning where to find the ketchup and mustard after observing other customers at a hot dog stand.
Bandura theorizes that the observational learning process consists of four parts. The first is attention—as, quite simply, one must pay attention to what s/he is observing in order to learn. The second part is retention: to learn one must be able to retain the behavior s/he is observing in memory. The third part of observational learning, initiation, acknowledges that the learner must be able to execute (or initiate) the learned behavior. Lastly, the observer must possess the motivation to engage in observational learning. In our vignette, the child must want to learn how to play the game in order to properly engage in observational learning.
Researchers have conducted countless experiments designed to explore observational learning, the most famous of which is Albert Bandura’s “Bobo doll experiment.”
In this experiment (Bandura, Ross, & Ross, 1961), Bandura had children individually observe an adult social model interact with a clown doll ("Bobo"). For one group of children, the adult interacted aggressively with Bobo: punching it, kicking it, throwing it, and even hitting it in the face with a toy mallet. Another group of children watched the adult interact with other toys, displaying no aggression toward Bobo. In both instances, the adult then left and the children were allowed to interact with Bobo on their own. Bandura found that children exposed to the aggressive social model were significantly more likely to behave aggressively toward Bobo, hitting and kicking him, compared to those exposed to the non-aggressive model. The researchers concluded that the children in the aggressive group used their observations of the adult social model's behavior to determine that aggressive behavior toward Bobo was acceptable.
While reinforcement was not required to elicit the children’s behavior in Bandura’s first experiment, it is important to acknowledge that consequences do play a role within observational learning. A future adaptation of this study (Bandura, Ross, & Ross, 1963) demonstrated that children in the aggression group showed less aggressive behavior if they witnessed the adult model receive punishment for aggressing against Bobo. Bandura referred to this process as vicarious reinforcement, as the children did not experience the reinforcement or punishment directly, yet were still influenced by observing it.
Conclusion
We have covered three primary explanations for how we learn to behave and interact with the world around us. Considering your own experiences, how well do these theories apply to you? Maybe when reflecting on your personal sense of fashion, you realize that you tend to select clothes others have complimented you on (operant conditioning). Or maybe, thinking back on a new restaurant you tried recently, you realize you chose it because its commercials play happy music (classical conditioning). Or maybe you are now always on time with your assignments, because you saw how others were punished when they were late (observational learning). Regardless of the activity, behavior, or response, there’s a good chance your “decision” to do it can be explained based on one of the theories presented in this module.
Check Your Knowledge
To help you with your studying, we’ve included some practice questions for this module. These questions do not necessarily address all content in this module. They are intended as practice, and you are responsible for all of the content in this module even if there is no associated practice question. To promote deeper engagement with the material, we encourage you to create some questions of your own for your practice. You can then also return to these self-generated questions later in the course to test yourself.
Key Vocabulary
- Blocking
- In classical conditioning, the finding that no conditioning occurs to a stimulus if it is combined with a previously conditioned stimulus during conditioning trials. Suggests that information, surprise value, or prediction error is important in conditioning.
- Categorize
- To sort or arrange different items into classes or categories.
- Classical conditioning
- The procedure in which an initially neutral stimulus (the conditioned stimulus, or CS) is paired with an unconditioned stimulus (or US). The result is that the conditioned stimulus begins to elicit a conditioned response (CR). Classical conditioning is nowadays considered important as both a behavioral phenomenon and as a method to study simple associative learning. Same as Pavlovian conditioning.
- Conditioned compensatory response
- In classical conditioning, a conditioned response that opposes, rather than is the same as, the unconditioned response. It functions to reduce the strength of the unconditioned response. Often seen in conditioning when drugs are used as unconditioned stimuli.
- Conditioned response (CR)
- The response that is elicited by the conditioned stimulus after classical conditioning has taken place.
- Conditioned stimulus (CS)
- An initially neutral stimulus (like a bell, light, or tone) that elicits a conditioned response after it has been associated with an unconditioned stimulus.
- Context
- Stimuli that are in the background whenever learning occurs. For instance, the Skinner box or room in which learning takes place is the classic example of a context. However, “context” can also be provided by internal stimuli, such as the sensory effects of drugs (e.g., being under the influence of alcohol has stimulus properties that provide a context) and mood states (e.g., being happy or sad). It can also be provided by a specific period in time—the passage of time is sometimes said to change the “temporal context.”
- Discriminative stimulus
- In operant conditioning, a stimulus that signals whether the response will be reinforced. It is said to “set the occasion” for the operant response.
- Extinction
- Decrease in the strength of a learned behavior that occurs when the conditioned stimulus is presented without the unconditioned stimulus (in classical conditioning) or when the behavior is no longer reinforced (in instrumental conditioning). The term describes both the procedure (the US or reinforcer is no longer presented) as well as the result of the procedure (the learned response declines). Behaviors that have been reduced in strength through extinction are said to be “extinguished.”
- Fear conditioning
- A type of classical or Pavlovian conditioning in which the conditioned stimulus (CS) is associated with an aversive unconditioned stimulus (US), such as a foot shock. As a consequence of learning, the CS comes to evoke fear. The phenomenon is thought to be involved in the development of anxiety disorders in humans.
- Goal-directed behavior
- Instrumental behavior that is influenced by the animal’s knowledge of the association between the behavior and its consequence and the current value of the consequence. Sensitive to the reinforcer devaluation effect.
- Habit
- Instrumental behavior that occurs automatically in the presence of a stimulus and is no longer influenced by the animal’s knowledge of the value of the reinforcer. Insensitive to the reinforcer devaluation effect.
- Instrumental conditioning
- Process in which animals learn about the relationship between their behaviors and their consequences. Also known as operant conditioning.
- Law of effect
- The idea that instrumental or operant responses are influenced by their effects. Responses that are followed by a pleasant state of affairs will be strengthened and those that are followed by discomfort will be weakened. Nowadays, the term refers to the idea that operant or instrumental behaviors are lawfully controlled by their consequences.
- Observational learning
- Learning by observing the behavior of others.
- Operant
- A behavior that is controlled by its consequences. The simplest example is the rat’s lever-pressing, which is controlled by the presentation of the reinforcer.
- Operant conditioning
- See instrumental conditioning.
- Pavlovian conditioning
- See classical conditioning.
- Prediction error
- When the outcome of a conditioning trial is different from that which is predicted by the conditioned stimuli that are present on the trial (i.e., when the US is surprising). Prediction error is necessary to create Pavlovian conditioning (and associative learning generally). As learning occurs over repeated conditioning trials, the conditioned stimulus increasingly predicts the unconditioned stimulus, and prediction error declines. Conditioning works to correct or reduce prediction error.
- Preparedness
- The idea that an organism’s evolutionary history can make it easy to learn a particular association. Because of preparedness, you are more likely to associate the taste of tequila, and not the circumstances surrounding drinking it, with getting sick. Similarly, humans are more likely to associate images of spiders and snakes than flowers and mushrooms with aversive outcomes like shocks.
- Punisher
- A stimulus that decreases the strength of an operant behavior when it is made a consequence of the behavior.
- Quantitative law of effect
- A mathematical rule that states that the effectiveness of a reinforcer at strengthening an operant response depends on the amount of reinforcement earned for all alternative behaviors. A reinforcer is less effective if there is a lot of reinforcement in the environment for other behaviors.
- Reinforcer
- Any consequence of a behavior that strengthens the behavior or increases the likelihood that it will be performed again.
- Reinforcer devaluation effect
- The finding that an animal will stop performing an instrumental response that once led to a reinforcer if the reinforcer is separately made aversive or undesirable.
- Renewal effect
- Recovery of an extinguished response that occurs when the context is changed after extinction. Especially strong when the change of context involves return to the context in which conditioning originally occurred. Can occur after extinction in either classical or instrumental conditioning.
- Social Learning Theory
- The theory that people can learn new responses and behaviors by observing the behavior of others.
- Social models
- Authorities that are the targets for observation and who model behaviors.
- Spontaneous recovery
- Recovery of an extinguished response that occurs with the passage of time after extinction. Can occur after extinction in either classical or instrumental conditioning.
- Stimulus control
- When an operant behavior is controlled by a stimulus that precedes it.
- Taste aversion learning
- The phenomenon in which a taste is paired with sickness, and this causes the organism to reject—and dislike—that taste in the future.
- Unconditioned response (UR)
- In classical conditioning, an innate response that is elicited by a stimulus before (or in the absence of) conditioning.
- Unconditioned stimulus (US)
- In classical conditioning, the stimulus that elicits the response before conditioning occurs.
- Vicarious reinforcement
- Learning that occurs by observing the reinforcement or punishment of another person.
References
- Balleine, B. W. (2005). Neural bases of food-seeking: Affect, arousal, and reward in corticostriatolimbic circuits. Physiology & Behavior, 86, 717–730.
- Bandura, A. (1977). Social learning theory. Englewood Cliffs, NJ: Prentice Hall
- Bandura, A., Ross, D., & Ross, S. A. (1961). Transmission of aggression through the imitation of aggressive models. Journal of Abnormal and Social Psychology, 63(3), 575–582.
- Bandura, A., Ross, D., & Ross, S. A. (1963). Imitation of film-mediated aggressive models. Journal of Abnormal and Social Psychology, 66(1), 3–11.
- Bernstein, I. L. (1991). Aversion conditioning in response to cancer and cancer treatment. Clinical Psychology Review, 11, 185–191.
- Bouton, M. E. (2004). Context and behavioral processes in extinction. Learning & Memory, 11, 485–494.
- Colwill, R. M., & Rescorla, R. A. (1986). Associative structures in instrumental learning. In G. H. Bower (Ed.), The psychology of learning and motivation, (Vol. 20, pp. 55–104). New York, NY: Academic Press.
- Craske, M. G., Kircanski, K., Zelikowsky, M., Mystkowski, J., Chowdhury, N., & Baker, A. (2008). Optimizing inhibitory learning during exposure therapy. Behaviour Research and Therapy, 46, 5–27.
- Dickinson, A., & Balleine, B. W. (1994). Motivational control of goal-directed behavior. Animal Learning & Behavior, 22, 1–18.
- Fanselow, M. S., & Poulos, A. M. (2005). The neuroscience of mammalian associative learning. Annual Review of Psychology, 56, 207–234.
- Herrnstein, R. J. (1970). On the law of effect. Journal of the Experimental Analysis of Behavior, 13, 243–266.
- Holland, P. C. (2004). Relations between Pavlovian-instrumental transfer and reinforcer devaluation. Journal of Experimental Psychology: Animal Behavior Processes, 30, 104–117.
- Kamin, L. J. (1969). Predictability, surprise, attention, and conditioning. In B. A. Campbell & R. M. Church (Eds.), Punishment and aversive behavior (pp. 279–296). New York, NY: Appleton-Century-Crofts.
- Mineka, S., & Zinbarg, R. (2006). A contemporary learning theory perspective on the etiology of anxiety disorders: It’s not what you thought it was. American Psychologist, 61, 10–26.
- Pearce, J. M., & Bouton, M. E. (2001). Theories of associative learning in animals. Annual Review of Psychology, 52, 111–139.
- Rescorla, R. A., & Wagner, A. R. (1972). A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In A. H. Black & W. F. Prokasy (Eds.), Classical conditioning II: Current research and theory (pp. 64–99). New York, NY: Appleton-Century-Crofts.
- Scalera, G., & Bavieri, M. (2009). Role of conditioned taste aversion on the side effects of chemotherapy in cancer patients. In S. Reilly & T. R. Schachtman (Eds.), Conditioned taste aversion: Behavioral and neural processes (pp. 513–541). New York, NY: Oxford University Press.
- Siegel, S. (1989). Pharmacological conditioning and drug effects. In A. J. Goudie & M. Emmett-Oglesby (Eds.), Psychoactive drugs (pp. 115–180). Clifton, NJ: Humana Press.
- Siegel, S., Hinson, R. E., Krank, M. D., & McCully, J. (1982). Heroin “overdose” death: Contribution of drug associated environmental cues. Science, 216, 436–437.
- Spreat, S., & Spreat, S. R. (1982). Learning principles. In V. Voith & P. L. Borchelt (Eds.), Veterinary clinics of North America: Small animal practice (pp. 593–606). Philadelphia, PA: W. B. Saunders.
- Thompson, R. F., & Steinmetz, J. E. (2009). The role of the cerebellum in classical conditioning of discrete behavioral responses. Neuroscience, 162, 732–755.
- Timberlake, W. L. (2001). Motivational modes in behavior systems. In R. R. Mowrer & S. B. Klein (Eds.), Handbook of contemporary learning theories (pp. 155–210). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
- Wasserman, E. A. (1995). The conceptual abilities of pigeons. American Scientist, 83, 246–255.
How to cite this Chapter using APA Style:
Bouton, M. E. (2019). Conditioning and learning. Adapted for use by Queen's University. Original chapter in R. Biswas-Diener & E. Diener (Eds), Noba textbook series: Psychology. Champaign, IL: DEF publishers. Retrieved from http://noba.to/ajxhcqdr
Copyright and Acknowledgment:
This material is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit: http://creativecommons.org/licenses/by-nc-sa/4.0/deed.en_US.
This material is attributed to the Diener Education Fund (copyright © 2018) and can be accessed via this link: http://noba.to/ajxhcqdr.
Additional information about the Diener Education Fund (DEF) can be accessed here.
Original chapter by Sharon Furtak adapted by the Queen’s University Psychology Department
This Open Access chapter was originally written for the NOBA project. Information on the NOBA project can be found below.
This module on the biological basis of behavior provides an overview of the basic structure of neurons and their means of communication. Neurons, cells in the central nervous system, receive information from our sensory systems (vision, audition, olfaction, gustation, and somatosensation) about the world around us; in turn, they plan and execute appropriate behavioral responses, including attending to a stimulus, learning new information, speaking, eating, mating, and evaluating potential threats. The goal of this module is to become familiar with the anatomical structure of neurons and to understand how neurons communicate by electrochemical signals to process sensory information and produce complex behaviors through networks of neurons. Having a basic knowledge of the fundamental structure and function of neurons is a necessary foundation as you move forward in the field of psychology.
Learning Objectives
- Differentiate the functional roles between the two main cell classes in the brain, neurons and glia.
- Describe how the forces of diffusion and electrostatic pressure work collectively to facilitate electrochemical communication.
- Define resting membrane potential, excitatory postsynaptic potentials, inhibitory postsynaptic potentials, and action potentials.
- Explain features of axonal and synaptic communication in neurons.
Introduction
Imagine trying to string words together into a meaningful sentence without knowing the meaning of each word or its function (i.e., Is it a verb, a noun, or an adjective?). In a similar fashion, to appreciate how groups of cells work together in a meaningful way in the brain as a whole, we must first understand how individual cells in the brain function. Much like words, brain cells, called neurons, have an underlying structure that provides the foundation for their functional purpose. Have you ever seen a neuron? Did you know that the basic structure of a neuron is similar whether it is from the brain of a rat or a human? How do the billions of neurons in our brain allow us to do all the fun things we enjoy, such as texting a friend, cheering on our favorite sports team, or laughing?
Our journey in answering these questions begins more than 100 years ago with a scientist named Santiago Ramón y Cajal. Ramón y Cajal (1911) boldly concluded that discrete individual neurons are the structural and functional units of the nervous system. He based his conclusion on the numerous drawings he made of Golgi-stained tissue, a stain named after the scientist who discovered it, Camillo Golgi. Scientists use several types of stains to visualize cells. Each stain works in a unique way, which causes stained cells to look different when viewed under a microscope. For example, a very common Nissl stain labels only the main part of the cell (i.e., the cell body; see left and middle panels of Figure 1). In contrast, a Golgi stain fills the cell body and all the processes that extend outward from it (see right panel of Figure 1). An even more notable characteristic of a Golgi stain is that it stains only approximately 1–2% of neurons (Pasternak & Woolsey, 1975; Smit & Colon, 1969), permitting the observer to distinguish one cell from another. These qualities allowed Cajal to examine the full anatomical structure of individual neurons for the first time, which significantly enhanced our appreciation of the intricate networks their processes form. Based on his observations of Golgi-stained tissue, Cajal suggested that neurons were distinguishable processing units rather than continuous structures. This was in opposition to the dominant theory at the time, proposed by Joseph von Gerlach, which stated that the nervous system was composed of a continuous network of nerves (for review, see Lopez-Munoz, Boya, & Alamo, 2006). Camillo Golgi himself was an avid supporter of Gerlach's theory. Despite their scientific disagreement, Cajal and Golgi shared the Nobel Prize in Medicine in 1906 for their combined contributions to the advancement of science and our understanding of the structure of the nervous system. This seminal work paved the way to our current understanding of the basic structure of the nervous system described in this module (for review, see De Carlos & Borrell, 2007; Grant, 2007).
Before moving forward, the section called "The Structure of the Neuron," below, introduces some basic terminology regarding the anatomy of neurons. Once we have reviewed this fundamental framework, the remainder of the module will focus on the electrochemical signals through which neurons communicate. While the electrochemical process might sound intimidating, it will be broken down into digestible sections. The first subsection, "Resting Membrane Potential," describes what occurs in a neuron at rest, when it is theoretically not receiving or sending signals. Building upon this knowledge, we will examine the electrical conduction that occurs within a single neuron when it receives signals. Finally, the module will conclude with a description of how this electrical signal results in communication between neurons through the release of chemicals. At the end of the module, you should have a broad concept of how each cell and large groups of cells send and receive information by electrical and chemical signals.
A note of encouragement: This module introduces a vast amount of technical terminology that at times may feel overwhelming. Do not get discouraged or bogged down in the details. Use the glossary at the end of the module as a quick reference guide; tab the glossary page so that you can easily refer to it while reading the module. The glossary contains all terms in bold type. Terms in italics are additional significant terms that may appear in other modules but are not contained within the glossary. On your first read of this module, I suggest focusing on the broader concepts and functional aspects of the terms instead of trying to commit all the terminology to memory. That is right, I said read first! I highly suggest reading this module at least twice, once prior to and again following the course lecture on this material. Repetition is the best way to gain clarity and commit to memory the challenging concepts and detailed vocabulary presented here.
The Structure of the Neuron
Basic Nomenclature
There are approximately 100 billion neurons in the human brain (Williams & Herrup, 1988). Each neuron has three main components: dendrites, the soma, and the axon (see Figure 2). Dendrites are processes that extend outward from the soma, or cell body, of a neuron and typically branch several times. Dendrites receive information from thousands of other neurons and are the main source of input of the neuron. The nucleus, which is located within the soma, contains genetic information, directs protein synthesis, and supplies the energy and the resources the neuron needs to function. The main source of output of the neuron is the axon. The axon is a process that extends far away from the soma and carries an important signal called an action potential to another neuron. The place at which the axon of one neuron comes in close contact to the dendrite of another neuron is a synapse (see Figures 2–3). Typically, the axon of a neuron is covered with an insulating substance called a myelin sheath that allows the signal and communication of one neuron to travel rapidly to another neuron.
The axon splits many times, so that it can communicate, or synapse, with several other neurons (see Figure 2). At the end of the axon is a terminal button, which forms synapses with spines, or protrusions, on the dendrites of neurons. Synapses form between the presynaptic terminal button (neuron sending the signal) and the postsynaptic membrane (neuron receiving the signal; see Figure 3). Here we will focus specifically on synapses between the terminal button of an axon and a dendritic spine; however, synapses can also form between the terminal button of an axon and the soma or the axon of another neuron.
A very small space called a synaptic gap or a synaptic cleft, approximately 5 nm (nanometers), exists between the presynaptic terminal button and the postsynaptic dendritic spine. To give you a better idea of the size, a dime is 1.35 mm (millimeter) thick. There are 1,350,000 nm in the thickness of a dime. In the presynaptic terminal button, there are synaptic vesicles that package together groups of chemicals called neurotransmitters (see Figure 3). Neurotransmitters are released from the presynaptic terminal button, travel across the synaptic gap, and activate ion channels on the postsynaptic spine by binding to receptor sites. We will discuss the role of receptors in more detail later in the module.
Types of Cells in the Brain
Not all neurons are created equal! There are neurons that help us receive information about the world around us, sensory neurons. There are motor neurons that allow us to initiate movement and behavior, ultimately allowing us to interact with the world around us. Finally, there are interneurons, which process the sensory input from our environment into meaningful representations, plan the appropriate behavioral response, and connect to the motor neurons to execute these behavioral plans.
There are three main categories of neurons, each defined by its specific structure. The structures of these three different types of neurons support their unique functions. Unipolar neurons are structured in a way that is ideal for relaying information forward, so they have one neurite (axon) and no dendrites. They are involved in the transmission of physiological information from the body's periphery, such as communicating body temperature through the spinal cord up to the brain. Bipolar neurons are involved in sensory perception, such as perception of light in the retina of the eye. They have one axon and one dendrite, which help acquire and pass sensory information to various centers in the brain. Finally, multipolar neurons are the most common, and they communicate sensory and motor information in the brain. For example, their firing causes muscles in the body to contract. Multipolar neurons have one axon and many dendrites, which allows them to communicate with other neurons. One of the most prominent neurons is the pyramidal neuron, which falls under the multipolar category. It gets its name from the triangular or pyramidal shape of its soma (for examples, see Furtak, Moyer, & Brown, 2007).
In addition to neurons, there is a second type of cell in the brain called glial cells. Glial cells have several functions, just a few of which we will discuss here. One type of glial cell, called oligodendroglia, forms the myelin sheaths mentioned above (Simons & Trotter, 2007; see Fig. 2). Oligodendroglia wrap their processes around the axons of neurons many times to form the myelin sheath, and one cell will form the myelin sheath on several axons. Other types of glial cells, such as microglia and astrocytes, digest the debris of dead neurons, carry nutritional support from blood vessels to the neurons, and help to regulate the ionic composition of the extracellular fluid. While glial cells play a vital role in neuronal support, they do not participate in the communication between cells in the same fashion as neurons do.
Communication Within and Between Neurons
Thus far, we have described the main characteristics of neurons, including how their processes come in close contact with one another to form synapses. In this section, we consider the conduction of communication within a neuron and how this signal is transmitted to the next neuron. There are two stages of this electrochemical action in neurons. The first stage is the electrical conduction of dendritic input to the initiation of an action potential within a neuron. The second stage is a chemical transmission across the synaptic gap between the presynaptic neuron and the postsynaptic neuron of the synapse. To understand these processes, we first need to consider what occurs within a neuron when it is at a steady state, called resting membrane potential.
Resting Membrane Potential
The intracellular (inside the cell) fluid and extracellular (outside the cell) fluid of neurons are composed of a combination of ions (electrically charged molecules; see Figure 4). Cations are positively charged ions, and anions are negatively charged ions. The composition of intracellular and extracellular fluid is similar to that of salt water, containing sodium (Na+), potassium (K+), chloride (Cl-), and anions (A-).
The cell membrane, which is composed of a lipid bilayer of fat molecules, separates the cell from the surrounding extracellular fluid. Proteins that span the membrane form ion channels that allow particular ions to pass between the intracellular and extracellular fluid (see Figure 4). These ions are in different concentrations inside the cell relative to outside the cell, and the ions have different electrical charges. Due to these differences in concentration and charge, two forces act to maintain a steady state when the cell is at rest: diffusion and electrostatic pressure. Diffusion is the force that moves molecules from areas of high concentration to areas of low concentration. Electrostatic pressure is the force that causes two ions with similar charges to repel each other and two ions with opposite charges to attract one another. Remember the saying, opposites attract?
Regardless of the ion, there exists a membrane potential at which the force of diffusion is equal and opposite to the force of electrostatic pressure. This voltage, called the equilibrium potential, is the voltage at which there is no net flow of that ion. Since several ions can permeate the cell's membrane, the baseline electrical charge inside the cell compared with outside the cell, referred to as the resting membrane potential, is based on the collective drive of force on several ions. Relative to the extracellular fluid, the membrane potential of a neuron at rest is negatively charged at approximately -70 mV (see Figure 5). These are very small voltages compared with the voltages of the batteries and electrical outlets we encounter daily, which range from 1.5 to 240 V.
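Although this module does not derive it, the equilibrium potential for a single ion can be computed with the standard Nernst equation, which gives the voltage at which diffusion and electrostatic pressure balance exactly. The sketch below uses typical textbook ion concentrations; those concentration values are illustrative assumptions, not figures from this module.

```python
import math

# A minimal sketch of the Nernst equation for a single ion's equilibrium
# potential (the voltage at which there is no net flow of that ion):
#   E = (R * T) / (z * F) * ln([ion]_outside / [ion]_inside)

R = 8.314    # gas constant, J / (mol * K)
F = 96485.0  # Faraday constant, C / mol
T = 310.0    # body temperature (~37 C) in kelvin

def nernst(conc_out, conc_in, z):
    """Equilibrium potential in millivolts for an ion of charge z."""
    return (R * T) / (z * F) * math.log(conc_out / conc_in) * 1000.0

# Typical textbook concentrations in mM (illustrative assumptions):
print(f"K+:  {nernst(5.0, 140.0, +1):6.1f} mV")   # about -89 mV
print(f"Na+: {nernst(145.0, 12.0, +1):6.1f} mV")  # about +66 mV
```

Notice that neither ion's equilibrium potential equals the resting membrane potential of approximately -70 mV, which reflects the point above: the resting potential arises from the collective drive on several ions, not any single one.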
Let us see how these two forces, diffusion and electrostatic pressure, act on the four groups of ions mentioned above.
- Anions (A-): Anions are highly concentrated inside the cell and contribute to the negative charge of the resting membrane potential. Diffusion and electrostatic pressure are not forces that determine A- concentration because A- is impermeable to the cell membrane. There are no ion channels that allow for A- to move between the intracellular and extracellular fluid.
- Potassium (K+): The cell membrane is very permeable to potassium at rest, but potassium remains in high concentrations inside the cell. Diffusion pushes K+ outside the cell because it is in high concentration inside the cell. However, electrostatic pressure pushes K+ inside the cell because the positive charge of K+ is attracted to the negative charge inside the cell. In combination, these forces oppose one another with respect to K+.
- Chloride (Cl-): The cell membrane is also very permeable to chloride at rest, but chloride remains in high concentration outside the cell. Diffusion pushes Cl- inside the cell because it is in high concentration outside the cell. However, electrostatic pressure pushes Cl- outside the cell because the negative charge of Cl- is attracted to the positive charge outside the cell. Similar to K+, these forces oppose one another with respect to Cl-.
- Sodium (Na+): The cell membrane is not very permeable to sodium at rest. Diffusion pushes Na+ inside the cell because it is in high concentration outside the cell. Electrostatic pressure also pushes Na+ inside the cell because the positive charge of Na+ is attracted to the negative charge inside the cell. Both of these forces push Na+ inside the cell; however, Na+ cannot permeate the cell membrane and remains in high concentration outside the cell. The small amounts of Na+ inside the cell are removed by a sodium-potassium pump, which uses the neuron’s energy (adenosine triphosphate, ATP) to pump 3 Na+ ions out of the cell in exchange for bringing 2 K+ ions inside the cell.
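As a quick check on the pump's arithmetic, note that each cycle moves more positive charge out of the cell than into it. The short sketch below simply tallies that exchange; the one-ATP-per-cycle figure is standard biochemistry rather than something stated in this module.

```python
# A tiny arithmetic sketch of the sodium-potassium pump described above.
# Each cycle moves 3 Na+ out and 2 K+ in, powered by one molecule of ATP
# (a standard biochemistry figure, assumed here, not stated in this module).

NA_OUT = 3  # Na+ ions pumped out per cycle
K_IN = 2    # K+ ions pumped in per cycle

cycles = 100
net_positive_charge_exported = cycles * (NA_OUT - K_IN)
atp_consumed = cycles  # one ATP per cycle (assumption noted above)

# Each cycle exports one net positive charge, so the pump itself makes a
# small direct contribution to keeping the inside of the cell negative.
print(net_positive_charge_exported)  # 100
print(atp_consumed)                  # 100
```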
Action Potential
Now that we have considered what occurs in a neuron at rest, let us consider what changes occur to the resting membrane potential when a neuron receives input, or information, from the presynaptic terminal button of another neuron. Our understanding of the electrical signals or potentials that occur within a neuron results from the seminal work of Hodgkin and Huxley that began in the 1930s at a well-known marine biology lab in Woods Hole, MA. Their work, for which they won the Nobel Prize in Medicine in 1963, resulted in the general model of electrochemical transduction that is described here (Hodgkin & Huxley, 1952). Hodgkin and Huxley studied a very large axon in the squid, a species common to that region of the United States. The giant axon of the squid is roughly 100 times larger than the axons in the mammalian brain, making it much easier to see. Activation of the giant axon is responsible for the withdrawal response the squid uses when trying to escape from predators, such as large fish, birds, sharks, and even humans. When was the last time you had calamari? The large axon size is no mistake in nature's design; it allows for very rapid transmission of an electrical signal, enabling the squid's swift escape from its predators.
While studying this species, Hodgkin and Huxley noticed that if they applied an electrical stimulus to the axon, a large, transient electrical current conducted down the axon. This transient electrical current is known as an action potential (see Figure 5). An action potential is an all-or-nothing response that occurs when there is a change in the charge or potential of the cell from its resting membrane potential (-70 mV) in a more positive direction, which is a depolarization (see Figure 5). What is meant by an all-or-nothing response? I find that this concept is best compared to the binary code used in computers, where there are only two possibilities, 0 or 1. There is no halfway or in-between these possible values; for example, 0.5 does not exist in binary code. There are only two possibilities, either the value of 0 or the value of 1. The action potential is the same in this respect. There is no halfway; it occurs, or it does not occur. There is a specific membrane potential that the neuron must reach to initiate an action potential. This membrane potential, called the threshold of excitation, is typically around -50 mV. If the threshold of excitation is reached, then an action potential is triggered.
How is an action potential initiated? At any one time, each neuron is receiving hundreds of inputs from the cells that synapse with it. These inputs can cause several types of fluctuations in the neuron’s membrane potentials (see Figure 5):
- excitatory postsynaptic potentials (EPSPs): a depolarizing current that causes the membrane potential to become more positive and closer to the threshold of excitation; or
- inhibitory postsynaptic potentials (IPSPs): a hyperpolarizing current that causes the membrane potential to become more negative and further away from the threshold of excitation.
These postsynaptic potentials, EPSPs and IPSPs, summate or add together in time and space. The IPSPs make the membrane potential more negative; how much more negative depends on the strength of the IPSPs. The EPSPs make the membrane potential more positive; again, how much more positive depends on the strength of the EPSPs. If you have two small EPSPs at the same time and at the same synapse, the result will be a larger EPSP. If you have a small EPSP and a small IPSP at the same time and at the same synapse, they will cancel each other out. Unlike the action potential, which is an all-or-nothing response, IPSPs and EPSPs are smaller, graded potentials, varying in strength. The change in voltage during an action potential is approximately 100 mV. In comparison, EPSPs and IPSPs are changes in voltage of between 0.1 and 40 mV. They can be of different strengths, or gradients, and they are measured by how far the membrane potential diverges from the resting membrane potential.
I know the concept of summation can be confusing. As a child in elementary school, I used to play a game with a very large parachute in which you would try to knock balls out of the center of the parachute. This game illustrates the properties of summation rather well. A group of children standing next to one another would work in unison to produce waves in the parachute large enough to knock the ball out. When the children initiated the waves at the same time and in the same direction, the additive result was a larger wave, and the balls would bounce out of the parachute. However, if the waves they initiated occurred in opposite directions or with the wrong timing, the waves would cancel each other out, and the balls would remain in the center of the parachute. EPSPs and IPSPs in a neuron work in the same fashion as the waves in the parachute; they either add together or cancel each other out. If two EPSPs arrive together, they sum to become a larger depolarization. Similarly, if two IPSPs come into the cell at the same time, they sum to become a larger hyperpolarization of the membrane potential. However, if two inputs oppose one another, moving the potential in opposite directions, as an EPSP and an IPSP do, they cancel each other out.
At any moment in time, each cell is receiving mixed messages, both EPSPs and IPSPs. If the summation of EPSPs is strong enough to depolarize the membrane potential to the threshold of excitation, then an action potential is initiated. The action potential then travels down the axon, away from the soma, until it reaches the ends of the axon (the terminal buttons). In the terminal buttons, the action potential triggers the release of neurotransmitters from the presynaptic terminal button into the synaptic gap. These neurotransmitters, in turn, cause EPSPs and IPSPs in the postsynaptic dendritic spines of the next cell (see Figures 4 & 6). The neurotransmitter released from the presynaptic terminal button binds to ionotropic receptors on the postsynaptic dendritic spine in a lock-and-key fashion. Ionotropic receptors are receptors on ion channels that open, allowing some ions to enter or exit the cell, in the presence of a particular neurotransmitter. The type of neurotransmitter and the permeability of the ion channel it activates determine whether an EPSP or IPSP occurs in the dendrite of the postsynaptic cell. These EPSPs and IPSPs summate in the fashion described above, and the entire process occurs again in another cell.
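As a rough sketch of the summation just described (our own simplified illustration, not a physiological model; the millivolt values are hypothetical but fall within the 0.1–40 mV range given above), graded EPSPs and IPSPs can be added onto the resting membrane potential and the total compared against the threshold of excitation:

```python
RESTING_POTENTIAL_MV = -70.0
THRESHOLD_MV = -50.0

def summate(psps_mv: list[float]) -> float:
    """Sum graded postsynaptic potentials (EPSPs positive, IPSPs
    negative) onto the resting membrane potential."""
    return RESTING_POTENTIAL_MV + sum(psps_mv)

def reaches_threshold(psps_mv: list[float]) -> bool:
    return summate(psps_mv) >= THRESHOLD_MV

# Two EPSPs arriving together sum to a larger depolarization...
print(reaches_threshold([15.0, 10.0]))         # True: -70 + 25 = -45 mV
# ...while an EPSP and an equal-sized IPSP cancel each other out.
print(reaches_threshold([15.0, -15.0, 10.0]))  # False: -70 + 10 = -60 mV
```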
The Change in Membrane Potential During an Action Potential
We discussed previously which ions are involved in maintaining the resting membrane potential. Not surprisingly, some of these same ions are involved in the action potential. When the cell becomes depolarized (more positively charged) and reaches the threshold of excitation, a voltage-dependent Na+ channel opens. A voltage-dependent ion channel is a channel that opens, allowing some ions to enter or exit the cell, when the cell reaches a particular membrane potential. When the cell is at the resting membrane potential, these voltage-dependent Na+ channels are closed. As we learned earlier, both diffusion and electrostatic pressure push Na+ into the cell; however, Na+ cannot permeate the membrane when the cell is at rest. Once these channels open, Na+ rushes into the cell, causing the inside of the cell to become very positively charged relative to the outside. This is responsible for the rising, or depolarizing, phase of the action potential (see Figure 5), during which the inside of the cell reaches about +40 mV. At this point, the Na+ channels close and become refractory, meaning they cannot reopen until after the cell returns to the resting membrane potential. Thus, a new action potential cannot occur during the refractory period. The refractory period also ensures that the action potential can move in only one direction down the axon, away from the soma.

As the cell becomes more depolarized, a second type of voltage-dependent channel opens; this channel is permeable to K+. With the inside of the cell very positive relative to the outside (depolarized) and the concentration of K+ high within the cell, both the force of diffusion and the force of electrostatic pressure drive K+ out of the cell. The movement of K+ out of the cell causes the membrane potential to return toward the resting membrane potential; this is the falling, or hyperpolarizing, phase of the action potential (see Figure 5). A short hyperpolarization occurs, partially due to the gradual closing of the K+ channels: with the Na+ channels closed, the force of diffusion continues to push K+ out of the cell. In addition, the sodium-potassium pump pushes Na+ out of the cell. The cell returns to the resting membrane potential, and the excess extracellular K+ diffuses away. This exchange of Na+ and K+ ions happens very rapidly, in less than 1 msec. The action potential travels in a wave-like motion down the axon until it reaches the terminal buttons; only the ion channels in very close proximity to the action potential are affected.
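The sequence of events just described can be condensed into a small summary table. The listing below is simply our own tabulation of the phases covered in this section (the labels and ordering are ours, using the approximate values given in the text):

```python
# Our own tabulation of the action potential phases described above.
ACTION_POTENTIAL_PHASES = [
    ("rest",              "about -70 mV",          "voltage-dependent channels closed"),
    ("threshold reached", "about -50 mV",          "voltage-dependent Na+ channels open"),
    ("rising phase",      "rises to about +40 mV", "Na+ rushes into the cell (depolarization)"),
    ("falling phase",     "returns toward -70 mV", "Na+ channels close (refractory); K+ flows out"),
    ("hyperpolarization", "briefly below -70 mV",  "K+ channels close gradually; Na+/K+ pump restores rest"),
]

for phase, potential, events in ACTION_POTENTIAL_PHASES:
    print(f"{phase:<18} | {potential:<22} | {events}")
```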
Earlier you learned that axons are covered in myelin. Let us consider how myelin speeds up the action potential. There are gaps in the myelin sheath called nodes of Ranvier. Because the myelin insulates the axon and does not allow any fluid to exist between the myelin and the cell membrane, no ions flow between the intracellular and extracellular fluid under the myelin when the Na+ and K+ channels open. This saves the cell the energy it would otherwise have to expend to rectify, or regain, the resting membrane potential along those stretches of membrane. (Remember, the pumps need ATP to run.) Under the myelin, the action potential degrades somewhat, but it remains large enough to trigger a new action potential at the next node of Ranvier. Thus, the action potential actively jumps from node to node; this process is known as saltatory conduction.
In the presynaptic terminal button, the action potential triggers the release of neurotransmitters (see Figure 3). Neurotransmitters cross the synaptic gap and bind to receptor subtypes in a lock-and-key fashion, opening their associated ion channels (see Figure 3). Depending on the type of neurotransmitter, an EPSP or IPSP occurs in the dendrite of the postsynaptic cell. Neurotransmitters that open Na+ or calcium (Ca2+) channels cause an EPSP; an example is the NMDA receptor, which is activated by glutamate (the main excitatory neurotransmitter in the brain). In contrast, neurotransmitters that open Cl- or K+ channels cause an IPSP; an example is the gamma-aminobutyric acid (GABA) receptor, which is activated by GABA, the main inhibitory neurotransmitter in the brain. Once the EPSPs and IPSPs occur at the postsynaptic site, the process of communication within and between neurons cycles on (see Figure 6). Neurotransmitter that does not bind to receptors is broken down and inactivated by enzymes or glial cells, or it is taken back into the presynaptic terminal button in a process called reuptake, which will be discussed further in the module on psychopharmacology.
Check Your Knowledge
To help you with your studying, we’ve included some practice questions for this module. These questions do not necessarily address all content in this module. They are intended as practice, and you are responsible for all of the content in this module even if there is no associated practice question. To promote deeper engagement with the material, we encourage you to create some questions of your own for your practice. You can then also return to these self-generated questions later in the course to test yourself.
Vocabulary
- Action potential
- A transient all-or-nothing electrical current that is conducted down the axon when the membrane potential reaches the threshold of excitation.
- Axon
- Part of the neuron that extends off the soma, splitting several times to connect with other neurons; main output of the neuron.
- Cell membrane
- A bi-lipid layer of molecules that separates the cell from the surrounding extracellular fluid.
- Dendrite
- Part of a neuron that extends away from the cell body and is the main input to the neuron.
- Diffusion
- The force on molecules to move from areas of high concentration to areas of low concentration.
- Electrostatic pressure
- The force on two ions with similar charge to repel one another; the force on two ions with opposite charge to attract one another.
- Excitatory postsynaptic potentials
- A depolarizing postsynaptic current that causes the membrane potential to become more positive and move towards the threshold of excitation.
- Inhibitory postsynaptic potentials
- A hyperpolarizing postsynaptic current that causes the membrane potential to become more negative and move away from the threshold of excitation.
- Ion channels
- Proteins that span the cell membrane, forming channels that specific ions can flow through between the intracellular and extracellular space.
- Ionotropic receptor
- Ion channel that opens to allow ions to permeate the cell membrane under specific conditions, such as the presence of a neurotransmitter or a specific membrane potential.
- Myelin sheath
- Substance around the axon of a neuron that serves as insulation to allow the action potential to conduct rapidly toward the terminal buttons.
- Neurotransmitter
- Chemical substance released by the presynaptic terminal button that acts on the postsynaptic cell.
- Nucleus
- Collection of nerve cells found in the brain which typically serve a specific function.
- Resting membrane potential
- The voltage inside the cell relative to the voltage outside the cell while the cell is at rest (approximately -70 mV).
- Sodium-potassium pump
- A protein pump in the cell membrane that uses the neuron’s energy (adenosine triphosphate, ATP) to move three Na+ ions outside the cell in exchange for bringing two K+ ions inside the cell.
- Soma
- Cell body of a neuron that contains the nucleus and genetic information, and directs protein synthesis.
- Spines
- Protrusions on the dendrite of a neuron that form synapses with terminal buttons of the presynaptic axon.
- Synapse
- Junction between the presynaptic terminal button of one neuron and the dendrite, axon, or soma of another postsynaptic neuron.
- Synaptic gap
- Also known as the synaptic cleft; the small space between the presynaptic terminal button and the postsynaptic dendritic spine, axon, or soma.
- Synaptic vesicles
- Groups of neurotransmitters packaged together and located within the terminal button.
- Terminal button
- The part at the end of the axon that forms synapses with the postsynaptic dendrite, axon, or soma.
- Threshold of excitation
- Specific membrane potential that the neuron must reach to initiate an action potential.
References
- De Carlos, J. A., & Borrell, J. (2007). A historical reflection of the contributions of Cajal and Golgi to the foundations of neuroscience. Brain Res Rev, 55(1), 8-16. doi: 10.1016/j.brainresrev.2007.03.010
- Furtak, S. C., Moyer, J. R., Jr., & Brown, T. H. (2007). Morphology and ontogeny of rat perirhinal cortical neurons. J Comp Neurol, 505(5), 493-510. doi: 10.1002/cne.21516
- Grant, G. (2007). How the 1906 Nobel Prize in Physiology or Medicine was shared between Golgi and Cajal. Brain Res Rev, 55(2), 490-498. doi: 10.1016/j.brainresrev.2006.11.004
- Hodgkin, A. L., & Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol, 117(4), 500-544.
- Lopez-Munoz, F., Boya, J., & Alamo, C. (2006). Neuron theory, the cornerstone of neuroscience, on the centenary of the Nobel Prize award to Santiago Ramon y Cajal. Brain Res Bull, 70(4-6), 391-405. doi: 10.1016/j.brainresbull.2006.07.010
- Pasternak, J. F., & Woolsey, T. A. (1975). On the "selectivity" of the Golgi-Cox method. J Comp Neurol, 160(3), 307-312. doi: 10.1002/cne.901600304
- Ramón y Cajal, S. (1911). Histology of the nervous system of man and vertebrates. New York, NY: Oxford University Press.
- Simons, M., & Trotter, J. (2007). Wrapping it up: the cell biology of myelination. Curr Opin Neurobiol, 17(5), 533-540. doi: 10.1016/j.conb.2007.08.003
- Smit, G. J., & Colon, E. J. (1969). Quantitative analysis of the cerebral cortex. I. Aselectivity of the Golgi-Cox staining technique. Brain Res, 13(3), 485-510.
- Williams, R. W., & Herrup, K. (1988). The control of neuron number. Annu Rev Neurosci, 11, 423-453. doi: 10.1146/annurev.ne.11.030188.002231
How to cite this Chapter using APA Style:
Furtak, S. (2019). Neurons. Adapted for use by Queen's University. Original chapter in R. Biswas-Diener & E. Diener (Eds), Noba textbook series: Psychology. Champaign, IL: DEF publishers. Retrieved from http://noba.to/s678why4
Copyright and Acknowledgment:
This material is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit: http://creativecommons.org/licenses/by-nc-sa/4.0/deed.en_US.
This material is attributed to the Diener Education Fund (copyright © 2018) and can be accessed via this link: http://noba.to/s678why4.
Additional information about the Diener Education Fund (DEF) can be accessed here.
Original chapter by Susan Barron adapted by the Queen’s University Psychology Department
This Open Access chapter was originally written for the NOBA project. Information on the NOBA project can be found below.
Psychopharmacology is the study of how drugs affect behavior. If a drug changes your perception, or the way you feel or think, the drug exerts effects on your brain and nervous system. We call drugs that change the way you think or feel psychoactive or psychotropic drugs, and almost everyone has used a psychoactive drug at some point (yes, caffeine counts). Understanding some of the basics about psychopharmacology can help us better understand a wide range of things that interest psychologists and others. For example, the pharmacological treatment of certain neurodegenerative diseases such as Parkinson’s disease tells us something about the disease itself. The pharmacological treatments used to treat psychiatric conditions such as schizophrenia or depression have undergone amazing development since the 1950s, and the drugs used to treat these disorders tell us something about what is happening in the brain of individuals with these conditions. Finally, understanding something about the actions of drugs of abuse and their routes of administration can help us understand why some psychoactive drugs are so addictive. In this module, we will provide an overview of some of these topics as well as discuss some current controversial areas in the field of psychopharmacology.
Learning Objectives
- How do the majority of psychoactive drugs work in the brain?
- How does the route of administration affect how rewarding a drug might be?
- Why is grapefruit dangerous to consume with many psychotropic medications?
- Why might individualized drug doses based on genetic screening be helpful for treating conditions like depression?
- Why is there controversy regarding pharmacotherapy for children, adolescents, and the elderly?
Introduction
Psychopharmacology, the study of how drugs affect the brain and behavior, is a relatively new science, although people have probably been taking drugs to change how they feel since early in human history (consider eating fermented fruit, ancient beer recipes, and chewing on the leaves of the coca plant for its stimulant properties as just some examples). The word psychopharmacology itself tells us that this is a field that bridges our understanding of behavior (and brain) and pharmacology, and the range of topics included within this field is extremely broad.
Virtually any drug that changes the way you feel does this by altering how neurons communicate with each other. Neurons (more than 100 billion in your nervous system) communicate with each other by releasing a chemical (neurotransmitter) across a tiny space between two neurons (the synapse). When the neurotransmitter crosses the synapse, it binds to a postsynaptic receptor (protein) on the receiving neuron and the message may then be transmitted onward. Obviously, neurotransmission is far more complicated than this – links at the end of this module can provide some useful background if you want more detail – but the first step is understanding that virtually all psychoactive drugs interfere with or alter how neurons communicate with each other.
There are many neurotransmitters. Some of the most important in terms of psychopharmacological treatment and drugs of abuse are outlined in Table 1. The neurons that release these neurotransmitters, for the most part, are localized within specific circuits of the brain that mediate these behaviors. Psychoactive drugs can either increase activity at the synapse (these are called agonists) or reduce activity at the synapse (antagonists). Different drugs do this by different mechanisms, and some examples of agonists and antagonists are presented in Table 2. For each example, the drug’s trade name, which is the name of the drug provided by the drug company, and generic name (in parentheses) are provided.
A very useful link at the end of this module shows the various steps involved in neurotransmission and some ways drugs can alter this.
Table 2 provides examples of drugs and their primary mechanisms of action, but it is very important to realize that drugs also have effects on other neurotransmitters. This contributes to the kinds of side effects that are observed when someone takes a particular drug. The reality is that no currently available drug acts only where we would like in the brain, or only on a specific neurotransmitter. Individuals are sometimes prescribed one psychotropic drug but then have to take additional drugs to reduce the side effects caused by the initial drug. Sometimes individuals stop taking medication because the side effects can be so profound.
Pharmacokinetics: What Is It – Why Is It Important?
While this section may sound more like pharmacology, it is important to realize how much pharmacokinetics matters when considering psychoactive drugs. Pharmacokinetics refers to how the body handles a drug that we take. As mentioned earlier, psychoactive drugs exert their effects on behavior by altering neuronal communication in the brain, and the majority of drugs reach the brain by traveling in the blood. The acronym ADME is often used, with A standing for Absorption (how the drug gets into the blood), D for Distribution (how the drug gets to the organ of interest – in this module, the brain), M for Metabolism (how the drug is broken down so it no longer exerts its psychoactive effects), and E for Excretion (how the drug leaves the body). We will talk about a couple of these to show their importance for considering psychoactive drugs.
Drug Administration
There are many ways to take drugs, and these routes of drug administration can have a significant impact on how quickly the drug reaches the brain. The most common route of administration is oral administration, which is relatively slow and – perhaps surprisingly – often the most variable and complex route of administration. The drug enters the stomach and is then absorbed into the blood supply through the capillaries that line the small intestine. The rate of absorption can be affected by a variety of factors, including the quantity and type of food in the stomach (e.g., fats vs. proteins). This is why the label on some drugs (like antibiotics) may specifically state foods that you should or should NOT consume within an hour of taking the drug, because they can affect the rate of absorption. The two most rapid routes of administration are inhalation (i.e., smoking or gaseous anesthesia) and intravenous (IV) injection, in which the drug is injected directly into a vein and hence the blood supply. Both of these routes of administration can get the drug to the brain in less than 10 seconds. IV administration also has the distinction of being the most dangerous: if there is an adverse drug reaction, there is very little time to administer an antidote, as in the case of an IV heroin overdose.
Why might how quickly a drug gets to the brain be important? If a drug activates the reward circuits in the brain AND it reaches the brain very quickly, the drug has a high risk for abuse and addiction. Psychostimulants like amphetamine or cocaine are examples of drugs with high risk for abuse because they are agonists at DA neurons involved in reward AND because they exist in forms that can be smoked or injected intravenously. Some argue that cigarette smoking is one of the hardest addictions to quit, and although part of the reason may be that smoking gets nicotine into the brain very quickly (where it indirectly acts on DA neurons), it is a more complicated story. For drugs that reach the brain very quickly, not only is the drug itself very addictive, but so are the cues associated with the drug (see Rohsenow, Niaura, Childress, Abrams, & Monti, 1990). For a crack user, such a cue could be the pipe used to smoke the drug. For a cigarette smoker, however, it could be something as ordinary as finishing dinner or waking up in the morning (if that is when the smoker usually has a cigarette). For both the crack user and the cigarette smoker, the cues associated with the drug may actually cause craving that is alleviated by (you guessed it) lighting a cigarette or using crack (i.e., relapse). This is one of the reasons individuals who enroll in drug treatment programs, especially out-of-town programs, are at significant risk of relapse if they later find themselves in proximity to old haunts, friends, etc. Avoiding such cues is much more difficult for a cigarette smoker: how can someone avoid eating, or avoid waking up in the morning? These examples help you begin to understand how important the route of administration can be for psychoactive drugs.
Drug Metabolism
Metabolism involves the breakdown of psychoactive drugs, and this occurs primarily in the liver. The liver produces enzymes (proteins that speed up chemical reactions), and these enzymes help catalyze the chemical reactions that break down psychoactive drugs. Enzymes exist in “families,” and many psychoactive drugs are broken down by the same family of enzymes, the cytochrome P450 superfamily. There is not a unique enzyme for each drug; rather, certain enzymes can break down a wide variety of drugs. Tolerance to the effects of many drugs can occur with repeated exposure; that is, the drug produces less of an effect over time, so more of the drug is needed to get the same effect. This is particularly true for sedative drugs like alcohol or opiate-based painkillers. Metabolic tolerance is one kind of tolerance, and it takes place in the liver. Some drugs (like alcohol) cause enzyme induction – an increase in the enzymes produced by the liver. For example, chronic drinking results in alcohol being broken down more quickly, so the alcoholic needs to drink more to get the same effect – until, of course, so much alcohol is consumed that it damages the liver (alcohol can cause fatty liver or cirrhosis).
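Many drugs are cleared in an approximately first-order fashion: a roughly constant fraction is metabolized per unit time, so blood concentration falls by half every “half-life.” The sketch below is our own illustration of how enzyme induction changes this picture; the dose and half-life values are made-up placeholders, not data for any real drug:

```python
def concentration_remaining(dose: float, half_life_h: float, t_h: float) -> float:
    """First-order elimination: C(t) = dose * 0.5 ** (t / half_life)."""
    return dose * 0.5 ** (t_h / half_life_h)

# Hypothetical half-lives for illustration only (not real drug data).
dose = 100.0  # arbitrary units
for label, half_life_h in [("before enzyme induction", 6.0),
                           ("after enzyme induction", 3.0)]:
    left = concentration_remaining(dose, half_life_h, t_h=12.0)
    print(f"{label}: {left:.1f} units remain 12 h after dosing")
# Enzyme induction shortens the effective half-life, so less drug remains
# at any given time -> more drug is needed for the same effect
# (metabolic tolerance).
```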
Recent Issues Related to Psychotropic Drugs and Metabolism
Grapefruit Juice and Metabolism
Certain types of food in the stomach can alter the rate of drug absorption, and other foods can also alter the rate of drug metabolism. The most well known is grapefruit juice. Grapefruit juice suppresses cytochrome P450 enzymes in the liver, and these liver enzymes normally break down a large variety of drugs (including some of the psychotropic drugs). If the enzymes are suppressed, drug levels can build up to potentially toxic levels. In this case, the effects can persist for extended periods of time after the consumption of grapefruit juice. As of 2013, there are at least 85 drugs shown to adversely interact with grapefruit juice (Bailey, Dresser, & Arnold, 2013). Some psychotropic drugs that are likely to interact with grapefruit juice include carbamazepine (Tegretol), prescribed for bipolar disorder; diazepam (Valium), used to treat anxiety, alcohol withdrawal, and muscle spasms; and fluvoxamine (Luvox), used to treat obsessive compulsive disorder and depression. A link at the end of this module gives the latest list of drugs reported to have this unusual interaction.
Individualized Therapy, Metabolic Differences, and Potential Prescribing Approaches for the Future
Mental illnesses contribute to more disability in western countries than all other illnesses, including cancer and heart disease. Depression alone is predicted to be the second largest contributor to disease burden by 2020 (World Health Organization, 2004). The numbers of people affected by mental health issues are pretty astonishing, with estimates that 25% of adults experience a mental health issue in any given year; this affects not only the individual but also their friends and family. One in 17 adults experiences a serious mental illness (Kessler, Chiu, Demler, & Walters, 2005). Newer antidepressants are probably the most frequently prescribed drugs for treating mental health issues, although there is no “magic bullet” for treating depression or other conditions. Pharmacotherapy combined with psychological therapy may be the most beneficial treatment approach for many psychiatric conditions, but there are still many unanswered questions. For example, why does one antidepressant help one individual yet have no effect for another? Antidepressants can take 4 to 6 weeks to start improving depressive symptoms, and we don’t really understand why. Many people do not respond to the first antidepressant prescribed and may have to try different drugs before finding something that works for them. Other people just do not improve with antidepressants (Ioannidis, 2008). The better we understand why individuals differ, the more easily and rapidly we will be able to help people in distress.
One area that has received interest recently is an individualized treatment approach. We now know that there are genetic differences in some of the cytochrome P450 enzymes and in their ability to break down drugs. The general population falls into the following four categories:
- Ultra-extensive metabolizers break down certain drugs (like some of the current antidepressants) very, very quickly.
- Extensive metabolizers are also able to break down drugs fairly quickly.
- Intermediate metabolizers break down drugs more slowly than either of the two groups above.
- Poor metabolizers break down drugs much more slowly than all of the other groups.
Now consider someone receiving a prescription for an antidepressant – what would the consequences be if they were either an ultra-extensive metabolizer or a poor metabolizer? The ultra-extensive metabolizer would be given an antidepressant and told it will probably take 4 to 6 weeks to begin working (this is true), but they metabolize the medication so quickly that it will never be effective for them. In contrast, the poor metabolizer given the same daily dose of the same antidepressant may build up such high levels in their blood (because they are not breaking the drug down) that they will have a wide range of side effects and feel really bad – also not a positive outcome. What if, instead, prior to prescribing an antidepressant, the doctor could take a blood sample and determine which type of metabolizer a patient actually was? They could then make a much more informed decision about the best dose to prescribe. There are new genetic tests now available to individualize treatment in just this way. A blood sample can determine (at least for some drugs) which category an individual fits into, but we need data to determine whether this actually is effective for treating depression or other mental illnesses (Zhou, 2009). Currently, this genetic test is expensive and not many health insurance plans cover this screen, but it may be an important component in the future of psychopharmacology.
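To see why metabolizer type matters for dosing, consider a toy repeated-dosing simulation. This is our own illustration with made-up clearance fractions, not clinical values or a real pharmacokinetic model: the same daily dose accumulates to very different blood levels depending on how quickly the drug is broken down.

```python
def trough_level(dose: float, fraction_cleared_per_day: float, days: int) -> float:
    """Drug left just before the next daily dose, assuming a fixed
    fraction is metabolized each day (a toy model, not clinical math)."""
    level = 0.0
    for _ in range(days):
        level = (level + dose) * (1.0 - fraction_cleared_per_day)
    return level

# Made-up clearance fractions for illustration only.
for label, cleared in [("ultra-extensive metabolizer", 0.95),
                       ("extensive metabolizer",       0.75),
                       ("poor metabolizer",            0.20)]:
    print(f"{label}: ~{trough_level(100.0, cleared, days=14):.0f} units before each dose")
# The ultra-extensive metabolizer retains almost no drug between doses
# (the drug may never reach an effective level), while the poor
# metabolizer's level climbs far higher on the same daily dose.
```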
Other Controversial Issues
Juveniles and Psychopharmacology
A recent Centers for Disease Control and Prevention (CDC) report suggests that as many as 1 in 5 children between the ages of 5 and 17 may have some type of mental disorder (e.g., ADHD, autism, anxiety, depression) (CDC, 2013). The incidence of bipolar disorder diagnoses in children and adolescents has also increased 40-fold in the past decade (Moreno, Laje, Blanco, Jiang, Schmidt, & Olfson, 2007), and it is now estimated that 1 in 88 children has been diagnosed with an autism spectrum disorder (CDC, 2011). Why has there been such an increase in these numbers? There is no single answer to this important question. Some believe that greater public awareness has contributed to increased teacher and parent referrals. Others argue that the increase stems from changes in the criteria currently used for diagnosis. Still others suggest that environmental factors, either prenatal or postnatal, have contributed to this upsurge.
We do not have an answer, but the question does bring up an additional controversy related to how we should treat this population of children and adolescents. Many psychotropic drugs used for treating psychiatric disorders have been tested in adults, but few have been tested for safety or efficacy with children or adolescents. The most well-established psychotropics prescribed for children and adolescents are the psychostimulant drugs used for treating attention deficit hyperactivity disorder (ADHD), and there are clinical data on how effective these drugs are. However, we know far less about the safety and efficacy in young populations of the drugs typically prescribed for treating anxiety, depression, or other psychiatric disorders. The young brain continues to mature until probably well after age 20, so some scientists are concerned that drugs that alter neuronal activity in the developing brain could have significant consequences. There is an obvious need for clinical trials in children and adolescents to test the safety and effectiveness of many of these drugs, which also brings up a variety of ethical questions about who decides whether children and adolescents will participate in these clinical trials, who can give consent, who receives reimbursement, etc.
The Elderly and Psychopharmacology
Another population that has not typically been included in clinical trials to determine the safety or effectiveness of psychotropic drugs is the elderly. Currently, there is very little high-quality evidence to guide prescribing for older people, because clinical trials often exclude people with multiple comorbidities (other diseases, conditions, etc.), which are typical of elderly populations (see Hilmer & Gnjidic, 2008; Pollock, Forsyth, & Bies, 2008). This is a serious issue because the elderly consume a disproportionate share of prescription medications. The term polypharmacy refers to the use of multiple drugs, which is very common in elderly populations in the United States. As our population ages, some estimate that the proportion of people 65 or older will reach 20% of the U.S. population by 2030, with this group consuming 40% of prescribed medications. As shown in Table 3 (from Schwartz & Abernethy, 2008), it is quite clear why the typical clinical trial examining the safety and effectiveness of psychotropic drugs can be problematic if we try to interpret the results for an elderly population.
Metabolism of drugs is often slowed considerably in elderly populations, so less drug can produce the same effect (or, all too often, a typical dose produces too much drug in the body, resulting in a variety of side effects). One of the greatest risk factors for elderly populations is falling (and breaking bones), which can happen if the elderly person gets dizzy from too much of a drug. There is also evidence that psychotropic medications can reduce bone density (thus worsening the consequences of a fall) (Brown & Mezuk, 2012). Although we are gaining awareness of some of the issues facing pharmacotherapy in older populations, this is a very complex area with many medical and ethical questions.
This module provided an introduction to some of the important areas in the field of psychopharmacology, and it only touched on a portion of the topics included in this field. It should be apparent that understanding more about psychopharmacology is important to anyone interested in understanding behavior and that our understanding of issues in this field has important implications for society.
An Interactive Exploration
To explore psychopharmacology a bit further, we encourage you to check out this interactive from the Genetic Science Learning Center at the University of Utah. Click HERE to access.
Check Your Knowledge
To help you with your studying, we’ve included some practice questions for this module. These questions do not necessarily address all content in this module. They are intended as practice, and you are responsible for all of the content in this module even if there is no associated practice question. To promote deeper engagement with the material, we encourage you to create some questions of your own for your practice. You can then also return to these self-generated questions later in the course to test yourself.
Vocabulary
- Agonist
- A drug that increases or enhances a neurotransmitter’s effect.
- Antagonist
- A drug that blocks a neurotransmitter’s effect.
- Enzyme
- A protein produced by a living organism that allows or helps a chemical reaction to occur.
- Enzyme induction
- Process through which a drug can enhance the production of an enzyme.
- Metabolism
- Breakdown of substances.
- Neurotransmitter
- A chemical substance produced by a neuron that is used for communication between neurons.
- Pharmacokinetics
- How the body handles a drug as it moves through the body, including absorption, distribution, metabolism, and excretion.
- Polypharmacy
- The use of many medications.
- Psychoactive drug
- A drug that changes mood or the way someone feels.
- Psychotropic drug
- A drug that changes mood or emotion, usually used when talking about drugs prescribed for various mental conditions (depression, anxiety, schizophrenia, etc.).
- Synapse
- The tiny space separating neurons.
References
- Bailey D. G., Dresser G., & Arnold J. M. (2013). Grapefruit-medication interactions: forbidden fruit or avoidable consequences? Canadian Medical Association Journal, 185, 309–316.
- Brown, M. J., & Mezuk, B. (2012). Brains, bones, and aging: psychotropic medications and bone health among older adults. Current Osteoporosis Reports, 10, 303–311.
- Centers for Disease Control and Prevention (2011) Prevalence of autism spectrum disorders – autism and developmental disabilities monitoring network, 14 sites, United States, 2008. Morbidity and Mortality Weekly Report 61(SS03) 1–19.
- Centers for Disease Control and Prevention. (2013) Mental health surveillance among children – United States, 2005—2011. Morbidity and Mortality Weekly Report 62 Suppl, 1-35.
- Hilmer, S. N., & Gnjidic, D. (2008). The effects of polypharmacy in older adults. Clinical Pharmacology & Therapeutics, 85, 86–88.
- Ioannidis, J. P. A. (2008). Effectiveness of antidepressants: an evidence myth constructed from a thousand randomized trials? Philosophy, Ethics and Humanities in Medicine, 3, 14.
- Kessler, R. C., Chiu, W. T., Demler, O., & Walters, E. E. (2005). Prevalence, severity, and comorbidity of twelve-month DSM-IV disorders in the National Comorbidity Survey Replication (NCS-R). Archives of General Psychiatry, 62, 617–627.
- Moreno, C., Laje, G., Blanco, C., Jiang, H., Schmidt, A. B., & Olfson, M., (2007). National trends in the outpatient diagnosis and treatment of bipolar disorder in youth. Archives of General Psychiatry, 64(9), 1032–1039.
- Pollock, B. G., Forsyth, C. E., & Bies, R. R. (2008). The critical role of clinical pharmacology in geriatric psychopharmacology. Clinical Pharmacology & Therapeutics, 85, 89–93.
- Rohsenow, D. J., Niaura, R. S., Childress, A. R., Abrams, D. B., &, Monti, P. M. (1990). Cue reactivity in addictive behaviors: Theoretical and treatment implications. International Journal of Addiction, 25, 957–993.
- Schwartz, J. B., & Abernethy, D. R. (2008). Aging and medications: Past, present, future. Clinical Pharmacology & Therapeutics, 85, 3–10.
- World Health Organization. (2004). Promoting mental health: concepts, emerging evidence, practice (Summary Report). Geneva, Switzerland: Author. Retrieved from http://www.who.int/mental_health/evidence/en/promoting_mhh.pdf
- Zhou, S. F. (2009). Polymorphism of human cytochrome P450 2D6 and its clinical significance: Part II. Clinical Pharmacokinetics, 48, 761–804.
How to cite this Chapter using APA Style:
Barron, S. (2019). Psychopharmacology. Adapted for use by Queen's University. Original chapter in R. Biswas-Diener & E. Diener (Eds), Noba textbook series: Psychology. Champaign, IL: DEF publishers. Retrieved from http://noba.to/umx6f2t8
Copyright and Acknowledgment:
This material is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit: http://creativecommons.org/licenses/by-nc-sa/4.0/deed.en_US.
This material is attributed to the Diener Education Fund (copyright © 2018) and can be accessed via this link: http://noba.to/umx6f2t8.
Additional information about the Diener Education Fund (DEF) can be accessed here.
Original chapter by Randy J. Nelson adapted by the Queen’s University Psychology Department
This Open Access chapter was originally written for the NOBA project. Information on the NOBA project can be found below.
The goal of this module is to introduce you to the topic of hormones and behavior. This field of study is also called behavioral endocrinology: the scientific study of the interaction between hormones and behavior. This interaction is bidirectional: hormones can influence behavior, and behavior can sometimes influence hormone concentrations. Hormones are chemical messengers released from endocrine glands that travel through the blood to influence the nervous system and thereby regulate behaviors such as aggression, mating, and parenting.
Learning Objectives
- Define the basic terminology and basic principles of hormone–behavior interactions.
- Explain the role of hormones in behavioral sex differentiation.
- Explain the role of hormones in aggressive behavior.
- Explain the role of hormones in parental behavior.
- Provide examples of some common hormone–behavior interactions.
Introduction
This module describes the relationship between hormones and behavior. Many readers are likely already familiar with the general idea that hormones can affect behavior. Students are generally familiar with the idea that sex-hormone concentrations increase in the blood during puberty and decrease as we age, especially after about 50 years of age. Sexual behavior shows a similar pattern. Most people also know about the relationship between aggression and anabolic steroid hormones, and they know that administration of artificial steroid hormones sometimes results in uncontrollable, violent behavior called “roid rage.” Many different hormones can influence several types of behavior, but for the purpose of this module, we will restrict our discussion to just a few examples of hormones and behaviors. For example, are behavioral sex differences the result of hormones, the environment, or some combination of factors? Why are men much more likely than women to commit aggressive acts? Are hormones involved in mediating the so-called maternal “instinct”? Behavioral endocrinologists are interested in how the general physiological effects of hormones alter the development and expression of behavior and how behavior may influence the effects of hormones. This module describes, both phenomenologically and functionally, how hormones affect behavior.
To understand the hormone-behavior relationship, it is important to briefly describe hormones. Hormones are organic chemical messengers produced and released by specialized glands called endocrine glands. Hormones are released from these glands into the blood, where they may travel to act on target structures at some distance from their origin. Hormones are similar in function to neurotransmitters, the chemicals used by the nervous system to coordinate animals’ activities. However, hormones can operate over a greater distance and over a much greater temporal range than neurotransmitters (Focus Topic 1). Examples of hormones that influence behavior include steroid hormones such as testosterone (a common type of androgen), estradiol (a common type of estrogen), progesterone (a common type of progestin), and cortisol (a common type of glucocorticoid) (Table 1, A-B). Several types of protein or peptide (small protein) hormones also influence behavior, including oxytocin, vasopressin, prolactin, and leptin.
Focus Topic 1: Neural Transmission versus Hormonal Communication
Hormones coordinate the physiology and behavior of individuals by regulating, integrating, and controlling bodily functions. Over evolutionary time, hormones have often been co-opted by the nervous system to influence behavior and thereby ensure reproductive success. For example, the same hormones, testosterone and estradiol, that cause gamete (egg or sperm) maturation also promote mating behavior. This dual hormonal function ensures that mating behavior occurs when animals have mature gametes available for fertilization. Another example of endocrine regulation of physiological and behavioral function is provided by pregnancy. Estrogen and progesterone concentrations are elevated during pregnancy, and these hormones are often involved in mediating maternal behavior in mothers.
Not all cells are influenced by each and every hormone. Rather, any given hormone can directly influence only cells that have specific receptors for that particular hormone. Cells that have these specific receptors are called target cells for the hormone. The interaction of a hormone with its receptor begins a series of cellular events that eventually lead to activation of enzymatic pathways or, alternatively, turn gene activation on or off to regulate protein synthesis. The newly synthesized proteins may activate or deactivate other genes, causing yet another cascade of cellular events. Importantly, sufficient numbers of appropriate hormone receptors must be available for a specific hormone to produce any effects. For example, testosterone is important for male sexual behavior. If men have too little testosterone, then sexual motivation may be low, and it can be restored by testosterone treatment. However, if men have normal or even elevated levels of testosterone yet display low sexual drive, then a lack of receptors might be the cause, and treatment with additional hormones will not be effective.
How might hormones affect behavior? In terms of their behavior, one can think of humans and other animals conceptually as comprised of three interacting components: (1) input systems (sensory systems), (2) integrators (the central nervous system), and (3) output systems, or effectors (e.g., muscles). Hormones do not cause behavioral changes. Rather, hormones influence these three systems so that specific stimuli are more likely to elicit certain responses in the appropriate behavioral or social context. In other words, hormones change the probability that a particular behavior will be emitted in the appropriate situation (Nelson, 2011). This is a critical distinction that can affect how we think of hormone-behavior relationships.
We can apply this three-component behavioral scheme to a simple behavior, singing in zebra finches. Only male zebra finches sing. If the testes of adult male finches are removed, then the birds reduce singing, but castrated finches resume singing if the testes are reimplanted, or if the birds are treated with either testosterone or estradiol. Although we commonly consider androgens to be “male” hormones and estrogens to be “female” hormones, it is common for testosterone to be converted to estradiol in nerve cells (Figure 1). Thus, many male-like behaviors are associated with the actions of estrogens! Indeed, all estrogens must first be converted from androgens because of the typical biochemical synthesis process. If the converting enzyme is low or missing, then it is possible for females to produce excessive androgens and subsequently develop associated male traits. It is also possible for estrogens in the environment to affect the nervous system of animals, including people (e.g., Kidd et al., 2007). Again, singing behavior is most frequent when blood testosterone or estrogen concentrations are high. Males sing to attract mates or ward off potential competitors from their territories.
Although it is apparent from these observations that estrogens are somehow involved in singing, how might the three-component framework just introduced help us to formulate hypotheses to explore estrogen’s role in this behavior? By examining input systems, we could determine whether estrogens alter the birds’ sensory capabilities, making the environmental cues that normally elicit singing more salient. If this were the case, then females or competitors might be more easily seen or heard. Estrogens also could influence the central nervous system. Neuronal architecture or the speed of neural processing could change in the presence of estrogens. Higher neural processes (e.g., motivation, attention, or perception) also might be influenced. Finally, the effector organs, muscles in this case, could be affected by the presence of estrogens. Blood estrogen concentrations might somehow affect the muscles of a songbird’s syrinx (the vocal organ of birds). Estrogens, therefore, could affect birdsong by influencing the sensory capabilities, central processing system, or effector organs of an individual bird. We do not understand completely how estrogen, derived from testosterone, influences birdsong, but in most cases, hormones can be considered to affect behavior by influencing one, two, or all three of these components, and this three-part framework can aid in the design of hypotheses and experiments to explore these issues.
How might behaviors affect hormones? The birdsong example demonstrates how hormones can affect behavior, but as noted, the reciprocal relation also occurs; that is, behavior can affect hormone concentrations. For example, the sight of a territorial intruder may elevate blood testosterone concentrations in resident male birds and thereby stimulate singing or fighting behavior. Similarly, male mice or rhesus monkeys that lose a fight decrease circulating testosterone concentrations for several days or even weeks afterward. Comparable results have also been reported in humans. Testosterone concentrations are affected not only in humans involved in physical combat, but also in those involved in simulated battles. For example, testosterone concentrations were elevated in winners and reduced in losers of regional chess tournaments.
People do not have to be directly involved in a contest to have their hormones affected by the outcome of the contest. Male fans of both the Brazilian and Italian teams were recruited to provide saliva samples to be assayed for testosterone before and after the final game of the World Cup soccer match in 1994. Brazil and Italy were tied going into the final game, but Brazil won on a penalty kick at the last possible moment. The Brazilian fans were elated and the Italian fans were crestfallen. When the samples were assayed, 11 of 12 Brazilian fans who were sampled had increased testosterone concentrations, and 9 of 9 Italian fans had decreased testosterone concentrations, compared with pre-game baseline values (Dabbs, 2000).
In some cases, hormones can be affected by anticipation of behavior. For example, testosterone concentrations also influence sexual motivation and behavior in women. In one study, the interaction between sexual intercourse and testosterone was compared with other activities (cuddling or exercise) in women (van Anders, Hamilton, Schmidt, & Watson, 2007). On three separate occasions, women provided a pre-activity, post-activity, and next-morning saliva sample. After analysis, the women’s testosterone was determined to be elevated prior to intercourse as compared to other times. Thus, an anticipatory relationship exists between sexual behavior and testosterone. Testosterone values were higher post-intercourse compared to exercise, suggesting that engaging in sexual behavior may also influence hormone concentrations in women.
Sex Differences
Hens and roosters are different. Cows and bulls are different. Men and women are different. Even girls and boys are different. Humans, like many animals, are sexually dimorphic (di, “two”; morph, “type”) in the size and shape of their bodies, their physiology, and, for our purposes, their behavior. The behavior of boys and girls differs in many ways. Girls generally excel in verbal abilities relative to boys; boys are nearly twice as likely as girls to suffer from dyslexia (reading difficulties) and stuttering and nearly 4 times more likely to suffer from autism. Boys are generally better than girls at tasks that require visuospatial abilities. Girls engage in nurturing behaviors more frequently than boys. More than 90% of all anorexia nervosa cases involve young women. Young men are twice as likely as young women to suffer from schizophrenia. Boys are much more aggressive and generally engage in more rough-and-tumble play than girls (Berenbaum, Martin, Hanish, Briggs, & Fabes, 2008). Many sex differences, such as the difference in aggressiveness, persist throughout adulthood. For example, there are many more men than women serving prison sentences for violent behavior.

The hormonal differences between men and women may account for adult sex differences that develop during puberty, but what accounts for behavioral sex differences among children prior to puberty and activation of their gonads? Hormonal secretions from the developing gonads determine whether the individual develops in a male or female manner. The mammalian embryonic testes produce androgens, as well as peptide hormones, that steer the development of the body, central nervous system, and subsequent behavior in a male direction. The embryonic ovaries of mammals are virtually quiescent and do not secrete high concentrations of hormones. In the presence of ovaries, or in the complete absence of any gonads, morphological, neural, and, later, behavioral development follows a female pathway.
Gonadal steroid hormones have organizational (or programming) effects upon brain and behavior (Phoenix, Goy, Gerall, & Young, 1959). The organizing effects of steroid hormones are relatively constrained to the early stages of development. An asymmetry exists in the effects of testes and ovaries on the organization of behavior in mammals. Hormone exposure early in life has organizational effects on subsequent rodent behavior; early steroid hormone treatment causes relatively irreversible and permanent masculinization of rodent behavior (mating and aggressive behaviors). These early hormone effects can be contrasted with the reversible behavioral influences of steroid hormones provided in adulthood, which are called activational effects. The activational effects of hormones on adult behavior are temporary and may wane soon after the hormone is metabolized. Thus, typical male behavior requires exposure to androgens during gestation (in humans) or immediately after birth (in rodents) to somewhat masculinize the brain, and also requires androgens during or after puberty to activate these neural circuits. Typical female behavior requires a lack of exposure to androgens early in life, which leads to feminization of the brain, and also requires estrogens to activate these neural circuits in adulthood. But this simple dichotomy, which works well with animals with very distinct sexual dimorphism in behavior, has many caveats when applied to people.
If you walk through any major toy store, then you will likely observe a couple of aisles filled with pink boxes and the complete absence of pink packaging of toys in adjacent aisles. Remarkably, you will also see a strong self-segregation of boys and girls in these aisles. It is rare to see boys in the “pink” aisles and vice versa. The toy manufacturers are often accused of making toys that are gender biased, but it seems more likely that boys and girls enjoy playing with specific types and colors of toys. Indeed, toy manufacturers would immediately double their sales if they could sell toys to both sexes. Boys generally prefer toys such as trucks and balls and girls generally prefer toys such as dolls. Although it is doubtful that there are genes that encode preferences for toy cars and trucks on the Y chromosome, it is possible that hormones might shape the development of a child’s brain to prefer certain types of toys or styles of play behavior. It is reasonable to believe that children learn which types of toys and which styles of play are appropriate to their gender. How can we understand and separate the contribution of physiological mechanisms from learning to understand sex differences in human behaviors? To untangle these issues, animal models are often used. Unlike the situation in humans, where sex differences are usually only a matter of degree (often slight), in some animals, members of only one sex may display a particular behavior. As noted, often only male songbirds sing. Studies of such strongly sex-biased behaviors are particularly valuable for understanding the interaction among behavior, hormones, and the nervous system.
A study of vervet monkeys calls into question the primacy of learning in the establishment of toy preferences (Alexander & Hines, 2002). Female vervet monkeys preferred girl-typical toys, such as dolls or cooking pots, whereas male vervet monkeys preferred boy-typical toys, such as cars or balls. There were no sex differences in preference for gender-neutral toys, such as picture books or stuffed animals. Presumably, monkeys have no prior concept of “boy” or “girl” toys. Young rhesus monkeys also show similar toy preferences.
What then underlies the sex difference in toy preference? It is possible that certain attributes of toys (or objects) appeal to either boys or girls. Toys that appeal to boys or male vervet or rhesus monkeys, in this case, a ball or toy car, are objects that can be moved actively through space, toys that can be incorporated into active, rough and tumble play. The appeal of toys that girls or female vervet monkeys prefer appears to be based on color. Pink and red (the colors of the doll and pot) may provoke attention to infants.
Society may reinforce such stereotypical responses to gender-typical toys. The sex differences in toy preferences emerge by 12 to 24 months of age and seem fixed by 36 months of age, but are sex differences in toy preference present during the first year of life? It is difficult to ask pre-verbal infants what they prefer, but in studies that measured how long babies looked at different toys, eye-tracking data indicate that infants as young as 3 months showed sex differences in toy preferences: girls looked longer at dolls, whereas boys looked longer at trucks. Another result that suggests, but does not prove, that hormones are involved in toy preferences is the observation that girls diagnosed with congenital adrenal hyperplasia (CAH), whose adrenal glands produce varying amounts of androgens early in life, played with masculine toys more often than girls without CAH. Further, a dose–response relationship was observed between the extent of the disorder (i.e., the degree of fetal androgen exposure) and the degree of masculinization of play behavior. Are the sex differences in toy preferences or play activity, for example, the inevitable consequences of the differential endocrine environments of boys and girls, or are these differences imposed by cultural practices and beliefs? Are these differences the result of receiving gender-specific toys from an early age, or are they some combination of endocrine and cultural factors? Again, these are difficult questions to unravel in people.
Even when behavioral sex differences appear early in development, there seems to be some question regarding the influences of societal expectations. One example is the pattern of human play behavior in which males are more physical; this pattern is seen in a number of other species including nonhuman primates, rats, and dogs. Is the difference in the frequency of rough-and-tumble play between boys and girls due to biological factors associated with being male or female, or is it due to cultural expectations and learning? If there is a combination of biological and cultural influences mediating the frequency of rough-and-tumble play, then what proportion of the variation between the sexes is due to biological factors and what proportion is due to social influences? Importantly, is it appropriate to talk about “normal” sex differences when these traits virtually always arrange themselves along a continuum rather than in discrete categories?
Sex differences are common in humans and in nonhuman animals. Because males and females differ in the ratio of androgenic and estrogenic steroid hormone concentrations, behavioral endocrinologists have been particularly interested in the extent to which behavioral sex differences are mediated by hormones. The process of becoming female or male is called sexual differentiation. The primary step in sexual differentiation occurs at fertilization. In mammals, the ovum (which always contains an X chromosome) can be fertilized by a sperm bearing either a Y or an X chromosome; this process is called sex determination. The chromosomal sex of homogametic mammals (XX) is female; the chromosomal sex of heterogametic mammals (XY) is male. Chromosomal sex determines gonadal sex. Virtually all subsequent sexual differentiation is typically the result of differential exposure to gonadal steroid hormones. Thus, gonadal sex determines hormonal sex, which regulates morphological sex. Morphological differences in the central nervous system, as well as in some effector organs, such as muscles, lead to behavioral sex differences. The process of sexual differentiation is complicated, and the potential for errors is present. Perinatal exposure to androgens is the most common cause of anomalous sexual differentiation among females. The source of androgen may be internal (e.g., secreted by the adrenal glands) or external (e.g., exposure to environmental estrogens). Turner syndrome results when the second X chromosome is missing or damaged; these individuals possess dysgenic ovaries and are not exposed to steroid hormones until puberty. Interestingly, women with Turner syndrome often have impaired spatial memory.
Female mammals are considered the “neutral” sex; additional physiological steps are required for male differentiation, and more steps bring more possibilities for errors in differentiation. Some examples of male anomalous sexual differentiation include 5α-reductase deficiency (in which XY individuals are born with ambiguous genitalia because of a lack of dihydrotestosterone and are reared as females, but in whom masculinization occurs during puberty) and androgen insensitivity syndrome, or testicular feminization mutation (TFM; in which XY individuals lack functional receptors for androgens and develop as females). By studying individuals who do not neatly fall into the dichotomous categories of female or male and for whom the process of sexual differentiation is atypical, behavioral endocrinologists glean hints about the process of typical sexual differentiation.
We may ultimately want to know how hormones mediate sex differences in the human brain and behavior (to the extent that these differences occur). To understand the mechanisms underlying sex differences in the brain and behavior, we return to the birdsong example. Birds provide the best evidence that behavioral sex differences are the result of hormonally induced structural changes in the brain (Goodson, Saldanha, Hahn, & Soma, 2005). In contrast to mammals, in which structural differences in neural tissues have not been directly linked to behavior, structural differences in avian brains have been directly linked to a sexually dimorphic behavior: birdsong.
Several brain regions in songbirds display significant sex differences in size. Two major brain circuit pathways, (1) the song production motor pathway and (2) the auditory transmission pathway, have been implicated in the learning and production of birdsong. Some parts of the song production pathway of male zebra finches are 3 to 6 times larger than those of female conspecifics. The larger size of these brain areas reflects that neurons in these nuclei are larger, more numerous, and farther apart. Although castration of adult male birds reduces singing, it does not reduce the size of the brain nuclei controlling song production. Similarly, androgen treatment of adult female zebra finches does not induce changes either in singing or in the size of the song control regions. Thus, activational effects of steroid hormones do not account for the sex differences in singing behavior or brain nucleus size in zebra finches. The sex differences in these structures are organized or programmed in the egg by estradiol (masculinizes) or the lack of steroids (feminizes).
Taken together, these findings indicate that estrogens are necessary to activate the neural machinery underlying the song system in birds. The testes of birds primarily produce androgens, which enter the circulation. The androgens then enter neurons containing aromatase, which converts them to estrogens. Indeed, the brain is the primary source of the estrogens that activate masculine behaviors in many bird species.
Sex differences in human brain size have been reported for years. More recently, sex differences in specific brain structures have been discovered (Figure 2). Sex differences in a number of cognitive functions have also been reported. Females are generally more sensitive to auditory information, whereas males are more sensitive to visual information. Females are also typically more sensitive than males to taste and olfactory input. Women display less lateralization of cognitive functions than men. On average, females generally excel in verbal, perceptual, and fine motor skills, whereas males outperform females on quantitative and visuospatial tasks, including map reading and direction finding. Although reliable sex differences can be documented, these differences in ability are slight. It is important to note that there is more variation within each sex than between the sexes for most cognitive abilities (Figure 3).
Aggressive Behaviors
The possibility for aggressive behavior exists whenever the interests of two or more individuals are in conflict (Nelson, 2006). Conflicts are most likely to arise over limited resources such as territories, food, and mates. A social interaction decides which animal gains access to the contested resource. In many cases, a submissive posture or gesture on the part of one animal avoids the necessity of actual combat over a resource. Animals may also participate in threat displays or ritualized combat in which dominance is determined but no physical damage is inflicted.
There is overwhelming circumstantial evidence that androgenic steroid hormones mediate aggressive behavior across many species. First, seasonal variations in blood plasma concentrations of testosterone and seasonal variations in aggression coincide. For instance, the incidence of aggressive behavior peaks for male deer in autumn, when they are secreting high levels of testosterone. Second, aggressive behaviors increase at the time of puberty, when the testes become active and blood concentrations of androgens rise. Juvenile deer do not participate in the fighting during the mating season. Third, in any given species, males are generally more aggressive than females. This is certainly true of deer; relative to stags, female deer rarely display aggressive behavior, and their rare aggressive acts are qualitatively different from male aggressive behavior. Finally, castration typically reduces aggression in males, and testosterone replacement therapy restores aggression to pre-castration levels. There are some interesting exceptions to these general observations that are outside the scope of this module.
As mentioned, males are generally more aggressive than females. Certainly, human males are much more aggressive than females. Many more men than women are convicted of violent crimes in North America. The sex differences in human aggressiveness appear very early. At every age throughout the school years, many more boys than girls initiate physical assaults. Almost everyone will acknowledge the existence of this sex difference, but assigning a cause to behavioral sex differences in humans always elicits much debate. It is possible that boys are more aggressive than girls because androgens promote aggressive behavior and boys have higher blood concentrations of androgens than girls. It is possible that boys and girls differ in their aggressiveness because the brains of boys are exposed to androgens prenatally and the “wiring” of their brains is thus organized in a way that facilitates the expression of aggression. It is also possible that boys are encouraged and girls are discouraged by family, peers, or others from acting in an aggressive manner. These three hypotheses are not mutually exclusive, but it is extremely difficult to discriminate among them to account for sex differences in human aggressiveness.
What kinds of studies would be necessary to assess these hypotheses? It is usually difficult to separate out the influences of environment and physiology on the development of behavior in humans. For example, boys and girls differ in their rough-and-tumble play at a very young age, which suggests an early physiological influence on aggression. However, parents interact with their male and female offspring differently; they usually play more roughly with male infants than with females, which suggests that the sex difference in aggressiveness is partially learned. This difference in parental interaction style is evident by the first week of life. Because of these complexities in the factors influencing human behavior, the study of hormonal effects on sex-differentiated behavior has been pursued in nonhuman animals, for which environmental influences can be held relatively constant. Animal models for which sexual differentiation occurs postnatally are often used so that this process can be easily manipulated experimentally.
Again, with the appropriate animal model, we can address the questions posed above: Is the sex difference in aggression due to higher adult blood concentrations of androgens in males than in females, or are males more aggressive than females because their brains are organized differently by perinatal hormones? Are males usually more aggressive than females because of an interaction of early and current blood androgen concentrations? If male mice are castrated prior to their sixth day of life, then treated with testosterone propionate in adulthood, they show low levels of aggression. Similarly, females ovariectomized prior to their sixth day but given androgens in adulthood do not express male-like levels of aggression. Treatment of perinatally gonadectomized males or females with testosterone prior to their sixth day of life and also in adulthood results in a level of aggression similar to that observed in typical male mice. Thus, in mice, the proclivity for males to act more aggressively than females is organized perinatally by androgens but also requires the presence of androgens after puberty in order to be fully expressed. In other words, aggression in male mice is both organized and activated by androgens. Testosterone exposure in adulthood without prior organization of the brain by steroid hormones does not evoke typical male levels of aggression. The hormonal control of aggressive behavior in house mice is thus similar to the hormonal mediation of heterosexual male mating behavior in other rodent species. Aggressive behavior is both organized and activated by androgens in many species, including rats, hamsters, voles, dogs, and possibly some primate species.
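The logic of these gonadectomy-and-replacement experiments reduces to a two-by-two design: perinatal androgens (present or absent) crossed with adult androgens (present or absent). As a minimal sketch, the short Python snippet below enumerates the four groups; the outcome labels paraphrase the findings described above and are not data.

# Minimal sketch of the 2 x 2 logic of the mouse castration/replacement
# experiments described above. Outcome labels paraphrase the text; they
# are not data.

def predicted_aggression(perinatal_androgens, adult_androgens):
    """Male-typical aggression is expected only when androgens are present
    both perinatally (organizational) and in adulthood (activational)."""
    if perinatal_androgens and adult_androgens:
        return "high (male-typical aggression)"
    return "low"

for perinatal in (False, True):
    for adult in (False, True):
        print(f"perinatal androgens={perinatal!s:5}  adult androgens={adult!s:5}"
              f"  ->  {predicted_aggression(perinatal, adult)}")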
Parental Behaviors
Parental behavior can be considered to be any behavior that contributes directly to the survival of fertilized eggs or offspring that have left the body of the female. There are many patterns of mammalian parental care. The developmental status of the newborn is an important factor driving the type and quality of parental care in a species. Maternal care is much more common than paternal care. The vast majority of research on the hormonal correlates of mammalian parental behavior has been conducted on rats. Rats bear altricial young, and mothers perform a cluster of stereotyped maternal behaviors, including nest building, crouching over the pups to allow nursing and to provide warmth, pup retrieval, and increased aggression directed at intruders. If you expose nonpregnant female rats (or males) to pups, their most common reaction is to huddle far away from them. Rats avoid new things (a tendency called neophobia). However, if you expose adult rats to pups every day, they soon begin to behave maternally. This process is called concaveation or sensitization, and it appears to serve to reduce the adult rats’ fear of pups.
Of course, a new mother needs to behave maternally as soon as her offspring arrive—not in a week. The onset of maternal behavior in rats is mediated by hormones. Several methods of study, such as hormone removal and replacement therapy, have been used to determine the hormonal correlates of rat maternal behavior. A fast decline of blood concentrations of progesterone in late pregnancy, after sustained high concentrations of this hormone, in combination with high concentrations of estradiol and probably prolactin and oxytocin, induces female rats to behave maternally almost immediately in the presence of pups. This pattern of hormones at parturition overrides the usual fear response of adult rats toward pups, and it permits the onset of maternal behavior. Thus, the so-called maternal “instinct” requires hormones to increase the approach tendency and lower the avoidance tendency. Laboratory strains of mice and rats are usually docile, but mothers can be quite aggressive toward animals that venture too close to their litter. Progesterone appears to be the primary hormone that induces this maternal aggression in rodents, but species differences exist. The role of maternal aggression in women’s behavior has not been adequately described or tested.
In a series of elegant experiments, Alison Fleming and her collaborators studied the endocrine correlates of the behavior of human mothers, as well as the endocrine correlates of maternal attitudes as expressed in self-report questionnaires. Responses such as patting, cuddling, or kissing the baby were called affectionate behaviors; talking, singing, or cooing to the baby were considered vocal behaviors. Both affectionate and vocal behaviors were considered approach behaviors. Basic caregiving activities, such as changing diapers and burping the infants, were also recorded. In these studies, no relationship between hormone concentrations and maternal responsiveness, as measured by attitude questionnaires, was found. For example, most women showed an increasingly positive self-image during early pregnancy that dipped during the second half of pregnancy, but recovered after parturition. A related dip in feelings of maternal engagement occurred during late pregnancy, but rebounded substantially after birth in most women. However, when behavior, rather than questionnaire responses, was compared with hormone concentrations, a different story emerged. Blood plasma concentrations of cortisol were positively associated with approach behaviors. In other words, women who had high concentrations of blood cortisol, in samples obtained immediately before or after nursing, engaged in more physically affectionate behaviors and talked more often to their babies than mothers with low cortisol concentrations. Additional analyses from this study revealed that the correlation was even greater for mothers who had reported positive maternal regard (feelings and attitudes) during gestation. Indeed, nearly half of the variation in maternal behavior among women could be accounted for by cortisol concentrations and positive maternal attitudes during pregnancy.
Presumably, cortisol does not induce maternal behaviors directly, but it may act indirectly on the quality of maternal care by evoking an increase in the mother’s general level of arousal, thus increasing her responsiveness to infant-generated cues. New mothers with high cortisol concentrations were also more attracted to their infant’s odors, were superior in identifying their infants, and generally found cues from infants highly appealing (Fleming, Steiner, & Corter, 1997).
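To make the statistical claim concrete: “accounting for nearly half of the variation” is a statement about a squared correlation (r² ≈ 0.5). The sketch below shows how such a correlation would be computed in Python; all numbers are invented for illustration and are not Fleming’s data.

# Hypothetical illustration of correlating maternal cortisol with approach
# behavior. All numbers are invented for demonstration; see Fleming et al.
# (1997) for the actual study.
import numpy as np

cortisol = np.array([4.1, 5.3, 6.0, 6.8, 7.5, 8.9, 9.4, 10.2])  # arbitrary units
approach = np.array([12, 18, 14, 22, 19, 27, 24, 30])           # behaviors per hour

r = np.corrcoef(cortisol, approach)[0, 1]
print(f"Pearson r = {r:.2f}, r^2 = {r**2:.2f}")
# In the study described above, cortisol plus prenatal maternal attitudes
# accounted for roughly half the variation (an r^2 near 0.5); these toy
# numbers are cleaner than real behavioral data would be.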
The medial preoptic area is critical for the expression of rat maternal behavior. The amygdala appears to tonically inhibit the expression of maternal behavior. Adult rats are fearful of pups, a response that is apparently mediated by chemosensory information. Lesions of the amygdala or of afferent sensory pathways from the vomeronasal organ to the amygdala disinhibit the expression of maternal behavior. Hormones or sensitization likely act to disinhibit the amygdala, thus permitting the occurrence of maternal behavior. Although correlations have been established, direct evidence of comparable structural changes in the brains of human mothers has not yet been established (Fleming & Gonzalez, 2009).
In sum, there are many examples of hormones influencing behavior and of behavior feeding back to influence hormone secretion. More and more examples of hormone–behavior interactions are being discovered, including roles for hormones in the mediation of food and fluid intake, social interactions, salt balance, learning and memory, and stress coping, as well as in psychopathology, including depression, anxiety disorders, eating disorders, postpartum depression, and seasonal depression. Additional research should reveal how these hormone–behavior interactions are mediated.
Check Your Knowledge
To help you with your studying, we’ve included some practice questions for this module. These questions do not necessarily address all content in this module. They are intended as practice, and you are responsible for all of the content in this module even if there is no associated practice question. To promote deeper engagement with the material, we encourage you to create some questions of your own for your practice. You can then also return to these self-generated questions later in the course to test yourself.
Vocabulary
- 5α-reductase
- An enzyme required to convert testosterone to 5α-dihydrotestosterone.
- Aggression
- A form of social interaction that includes threat, attack, and fighting.
- Aromatase
- An enzyme that converts androgens into estrogens.
- Chromosomal sex
- The sex of an individual as determined by the sex chromosomes (typically XX or XY) received at the time of fertilization.
- Defeminization
- The removal of the potential for female traits.
- Demasculinization
- The removal of the potential for male traits.
- Dihydrotestosterone (DHT)
- A primary androgen that is an androgenic steroid product of testosterone and binds strongly to androgen receptors.
- Endocrine gland
- A ductless gland from which hormones are released into the blood system in response to specific biological signals.
- Estrogen
- Any of the C18 class of steroid hormones, so named because of the estrus-generating properties in females. Biologically important estrogens include estradiol and estriol.
- Feminization
- The induction of female traits.
- Gonadal sex
- The sex of an individual as determined by the possession of either ovaries or testes. Females have ovaries, whereas males have testes.
- Hormone
- An organic chemical messenger released from endocrine cells that travels through the blood to interact with target cells at some distance to cause a biological response.
- Masculinization
- The induction of male traits.
- Maternal behavior
- Parental behavior performed by the mother or other female.
- Neurotransmitter
- A chemical messenger that travels between neurons to provide communication. Some neurotransmitters, such as norepinephrine, can leak into the blood system and act as hormones.
- Oxytocin
- A peptide hormone secreted by the posterior pituitary gland that triggers milk ejection (letdown) during nursing and is involved in social bonding.
- Parental behavior
- Behaviors performed in relation to one’s offspring that contribute directly to the survival of those offspring.
- Paternal behavior
- Parental behavior performed by the father or other male.
- Progesterone
- A primary progestin that is involved in pregnancy and mating behaviors.
- Progestin
- A class of C21 steroid hormones named for their progestational (pregnancy-supporting) effects. Progesterone is a common progestin.
- Prohormone
- A molecule that can act as a hormone itself or be converted into another hormone with different properties. For example, testosterone can serve as a hormone or as a prohormone for either dihydrotestosterone or estradiol.
- Prolactin
- A protein hormone that is highly conserved throughout the animal kingdom. It has many biological functions associated with reproduction and synergistic actions with steroid hormones.
- Receptor
- A chemical structure on the cell surface or inside of a cell that has an affinity for a specific chemical configuration of a hormone, neurotransmitter, or other compound.
- Sex determination
- The point at which an individual begins to develop as either a male or a female. In animals that have sex chromosomes, this occurs at fertilization. Females are XX and males are XY. All eggs bear X chromosomes, whereas sperm can either bear X or Y chromosomes. Thus, it is the males that determine the sex of the offspring.
- Sex differentiation
- The process by which individuals develop the characteristics associated with being male or female. Differential exposure to gonadal steroids during early development causes sexual differentiation of several structures including the brain.
- Target cell
- A cell that has receptors for a specific chemical messenger (hormone or neurotransmitter).
- Testosterone
- The primary androgen secreted by the testes of most vertebrate animals, including men.
References
- Alexander, G. M. & Hines, M. (2002). Sex differences in response to children’s toys in nonhuman primates (Cercopithecus aethiops sabaeus). Evolution and Human Behavior, 23, 467–479.
- Berenbaum, S. A., Martin, C. L., Hanish, L. D., Briggs, P. T., & Fabes, R. A. (2008). Sex differences in children’s play. In J. B. Becker, K. J. Berkley, N. Geary, E. Hampson, J. Herman, & E. Young (Eds.), Sex differences in the brain: From genes to behavior. New York: Oxford University Press.
- Dabbs, J. M. (2000). Heroes, rogues, and lovers: Testosterone and behavior. Columbus, OH: McGraw-Hill.
- Fleming, A. S., & Gonzalez, A. (2009). Neurobiology of human maternal care. In P. T. Ellison & P. B. Gray (Eds.), Endocrinology of social relationships (pp. 294–318). Cambridge, MA: Harvard University Press.
- Fleming, A. S., Steiner, M., & Corter, C. (1997). Cortisol, hedonics, and maternal responsiveness in human mothers. Hormones and Behavior, 32, 85–98.
- Goodson, J. L., Saldanha, C. J., Hahn, T. P., & Soma, K. K. (2005). Recent advances in behavioral neuroendocrinology: Insights from studies on birds. Hormones and Behavior, 48, 461–473.
- Kidd, K. A., Blanchfield, P. J., Mills, K. H., Palace, V. P., Evans, R. E., Lazorchak, J. M., & Flick, R. (2007). Collapse of a fish population following exposure to a synthetic estrogen. Proceedings of the National Academy of Sciences, 104, 8897–8901.
- Nelson, R. J. (Ed.) (2006). Biology of aggression. New York: Oxford University Press.
- Nelson, R. J. (2011). An introduction to behavioral endocrinology (4th ed.). Sunderland, MA: Sinauer Associates.
- Phoenix, C. H., Goy, R. W., Gerall, A. A., & Young, W. C. (1959). Organizing action of prenatally administered testosterone propionate on the tissues mediating mating behavior in the female guinea pig. Endocrinology, 65, 369–382.
- van Anders, S., Hamilton, L., Schmidt, N., & Watson, N. (2007). Associations between testosterone secretion and sexual activity in women. Hormones and Behavior, 51, 477–482.
How to cite this Chapter using APA Style:
Nelson, R. J. (2019). Hormones & behavior. Adapted for use by Queen's University. Original chapter in R. Biswas-Diener & E. Diener (Eds), Noba textbook series: Psychology. Champaign, IL: DEF publishers. Retrieved from http://noba.to/c6gvwu9m
Copyright and Acknowledgment:
This material is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit: http://creativecommons.org/licenses/by-nc-sa/4.0/deed.en_US.
This material is attributed to the Diener Education Fund (copyright © 2018) and can be accessed via this link: http://noba.to/c6gvwu9m.
Additional information about the Diener Education Fund (DEF) can be accessed here.
The Brain
Original chapter by Diane Beck and Evelina Tapia adapted by the Queen’s University Psychology Department
This Open Access chapter was originally written for the NOBA project. Information on the NOBA project can be found below.
The human brain is responsible for all behaviors, thoughts, and experiences described in this textbook. This module provides an introductory overview of the brain, including some basic neuroanatomy, and brief descriptions of the neuroscience methods used to study it.
Learning Objectives
- Name and describe the basic function of the brain stem, cerebellum, and cerebral hemispheres.
- Name and describe the basic function of the four cerebral lobes: occipital, temporal, parietal, and frontal cortex.
- Describe a split-brain patient and at least two important aspects of brain function that these patients reveal.
- Distinguish between gray and white matter of the cerebral hemispheres.
- Name and describe the most common approaches to studying the human brain.
- Distinguish among four neuroimaging methods: PET, fMRI, EEG, and DOI.
- Describe the difference between spatial and temporal resolution with regard to brain function.
Introduction
Any textbook on psychology would be incomplete without reference to the brain. Every behavior, thought, or experience described in the other modules must be implemented in the brain. A detailed understanding of the human brain can help us make sense of human experience and behavior. For example, one well-established fact about human cognition is that it is limited. We cannot do two complex tasks at once: We cannot read and carry on a conversation at the same time, text and drive, or surf the Internet while listening to a lecture, at least not successfully or safely. We cannot even pat our head and rub our stomach at the same time (with exceptions, see “A Brain Divided”). Why is this? Many people have suggested that such limitations reflect the fact that the behaviors draw on the same resource; if one behavior uses up most of the resource there is not enough resource left for the other. But what might this limited resource be in the brain?
The brain uses oxygen and glucose, delivered via the blood. The brain is a large consumer of these metabolites, using 20% of the oxygen and calories we consume despite being only 2% of our total weight. However, as long as we are not oxygen-deprived or malnourished, we have more than enough oxygen and glucose to fuel the brain. Thus, insufficient “brain fuel” cannot explain our limited capacity. Nor is it likely that our limitations reflect too few neurons. The average human brain contains on the order of 100 billion neurons. It is also not the case that we use only 10% of our brain, a myth that was likely started to imply we had untapped potential. Modern neuroimaging (see “Studying the Human Brain”) has shown that we use all parts of the brain, just at different times, and certainly more than 10% at any one time.
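The disproportion is worth spelling out: an organ that takes 20% of the body’s energy at 2% of its mass runs at ten times the body-wide average metabolic rate per gram. A one-line check, using only the figures quoted above:

# Back-of-envelope check of the brain's metabolic disproportion, using the
# figures quoted in the text.
energy_share = 0.20   # brain's share of the body's oxygen and calories
mass_share = 0.02     # brain's share of total body weight

print(f"Energy per gram relative to the body average: {energy_share / mass_share:.0f}x")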
If we have an abundance of brain fuel and neurons, how can we explain our limited cognitive abilities? Why can’t we do more at once? The most likely explanation is the way these neurons are wired up. We know, for instance, that many neurons in the visual cortex (the part of the brain responsible for processing visual information) are hooked up in such a way as to inhibit each other (Beck & Kastner, 2009). When one neuron fires, it suppresses the firing of other nearby neurons. If two neurons that are hooked up in an inhibitory way both fire, then neither neuron can fire as vigorously as it would otherwise. This competitive behavior among neurons limits how much visual information the brain can respond to at the same time. Similar kinds of competitive wiring among neurons may underlie many of our limitations. Thus, although talking about limited resources provides an intuitive description of our limited capacity behavior, a detailed understanding of the brain suggests that our limitations more likely reflect the complex way in which neurons talk to each other rather than the depletion of any specific resource.
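A toy simulation can make the competition idea concrete. The sketch below is a deliberate caricature, not a biophysical model; the inhibitory weight and update rule are arbitrary choices. Two mutually inhibitory units are allowed to settle: with one input, the driven unit responds fully, but with two inputs, each suppresses the other and neither responds as vigorously as it would alone.

# Toy sketch of mutual inhibition between two units (a caricature of biased
# competition, not a biophysical model). Each unit is driven toward its
# input minus inhibition from the other unit; weights are arbitrary.

def settle(input_a, input_b, inhibition=0.8, steps=500, rate=0.05):
    a = b = 0.0
    for _ in range(steps):
        a += rate * (max(input_a - inhibition * b, 0.0) - a)
        b += rate * (max(input_b - inhibition * a, 0.0) - b)
    return round(a, 2), round(b, 2)

print(settle(1.0, 0.0))   # one stimulus: (1.0, 0.0) -- a full response
print(settle(1.0, 1.0))   # two stimuli: both units settle near 0.56,
                          # weaker than either would respond alone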
The Anatomy of the Brain
There are many ways to subdivide the mammalian brain, resulting in some inconsistent and ambiguous nomenclature over the history of neuroanatomy (Swanson, 2000). For simplicity, we will divide the brain into three basic parts: the brain stem, cerebellum, and cerebral hemispheres (see Figure 1). In Figure 2, however, we depict other prominent groupings (Swanson, 2000) of the six major subdivisions of the brain (Kandel, Schwartz, & Jessell, 2000).
Brain Stem
The brain stem is sometimes referred to as the “trunk” of the brain. It is responsible for many of the neural functions that keep us alive, including regulating our respiration (breathing), heart rate, and digestion. In keeping with its function, if a patient sustains severe damage to the brain stem, he or she will require “life support” (i.e., machines are used to keep him or her alive). Because of its vital role in survival, in many countries a person who has lost brain stem function is said to be “brain dead,” although other countries require significant tissue loss in the cortex (of the cerebral hemispheres), which is responsible for our conscious experience, for the same diagnosis. The brain stem includes the medulla, pons, midbrain, and diencephalon (which consists of the thalamus and hypothalamus). Collectively, these regions are also involved in our sleep–wake cycle, some sensory and motor function, as well as growth and other hormonal behaviors.
Cerebellum
The cerebellum is the distinctive structure at the back of the brain. The Greek philosopher and scientist Aristotle aptly referred to it as the “small brain” (“parencephalon” in Greek, “cerebellum” in Latin) in order to distinguish it from the “large brain” (“encephalon” in Greek, “cerebrum” in Latin). The cerebellum is critical for coordinated movement and posture. More recently, neuroimaging studies (see “Studying the Human Brain”) have implicated it in a range of cognitive abilities, including language. It is perhaps not surprising that the cerebellum’s influence extends beyond that of movement and posture, given that it contains the greatest number of neurons of any structure in the brain. However, the exact role it plays in these higher functions is still a matter of further study.
Cerebral Hemispheres
The cerebral hemispheres are responsible for our cognitive abilities and conscious experience. They consist of the cerebral cortex and accompanying white matter (“cerebrum” in Latin) as well as the subcortical structures of the basal ganglia, amygdala, and hippocampal formation. The cerebral cortex is the largest and most visible part of the brain, retaining the Latin name (cerebrum) for “large brain” that Aristotle coined. It consists of two hemispheres (literally two half spheres) and gives the brain its characteristic gray and convoluted appearance; the folds and grooves of the cortex are called gyri and sulci (gyrus and sulcus if referring to just one), respectively.
The two cerebral hemispheres can be further subdivided into four lobes: the occipital, temporal, parietal, and frontal lobes. The occipital lobe is responsible for vision, as is much of the temporal lobe. The temporal lobe is also involved in auditory processing, memory, and multisensory integration (e.g., the convergence of vision and audition). The parietal lobe houses the somatosensory (body sensations) cortex and structures involved in visual attention, as well as multisensory convergence zones. The frontal lobe houses the motor cortex and structures involved in motor planning, language, judgment, and decision-making. Not surprisingly then, the frontal lobe is proportionally larger in humans than in any other animal.
The subcortical structures are so named because they reside beneath the cortex. The basal ganglia are critical to voluntary movement and as such make contact with the cortex, the thalamus, and the brain stem. The amygdala and hippocampal formation are part of the limbic system, which also includes some cortical structures. The limbic system plays an important role in emotion and, in particular, in aversion and gratification.
A Brain Divided
The two cerebral hemispheres are connected by a dense bundle of white matter tracts called the corpus callosum. Some functions are replicated in the two hemispheres. For example, both hemispheres are responsible for sensory and motor function, although the sensory and motor cortices have a contralateral (or opposite-side) representation; that is, the left cerebral hemisphere is responsible for movements and sensations on the right side of the body and the right cerebral hemisphere is responsible for movements and sensations on the left side of the body. Other functions are lateralized; that is, they reside primarily in one hemisphere or the other. For example, for right-handed and the majority of left-handed individuals, the left hemisphere is most responsible for language.

There are some people whose two hemispheres are not connected, either because the corpus callosum was surgically severed (callosotomy) or due to a genetic abnormality. These split-brain patients have helped us understand the functioning of the two hemispheres. First, because of the contralateral representation of sensory information, if an object is placed in only the left or only the right visual hemifield, then only the right or left hemisphere, respectively, of the split-brain patient will see it. In essence, it is as though the person has two brains in his or her head, each seeing half the world. Interestingly, because language is very often localized in the left hemisphere, if we show the right hemisphere a picture and ask the patient what she saw, she will say she didn’t see anything (because only the left hemisphere can speak, and it didn’t see anything). However, we know that the right hemisphere sees the picture, because if the patient is asked to press a button whenever she sees the image, the left hand (which is controlled by the right hemisphere) will respond despite the left hemisphere’s denial that anything was there. There are also some advantages to having disconnected hemispheres. Unlike those with a fully functional corpus callosum, a split-brain patient can simultaneously search for something in his right and left visual fields (Luck, Hillyard, Mangun, & Gazzaniga, 1989) and can do the equivalent of rubbing his stomach and patting his head at the same time (Franz, Eliassen, Ivry, & Gazzaniga, 1996). In other words, split-brain patients exhibit less competition between the hemispheres.
Gray Versus White Matter
The cerebral hemispheres contain both gray and white matter, so called because they appear grayish and whitish in dissections or in an MRI (magnetic resonance imaging; see “Studying the Human Brain”). The gray matter is composed of the neuronal cell bodies (see module, “Neurons”). The cell bodies (or soma) contain the genes of the cell and are responsible for metabolism (keeping the cell alive) and synthesizing proteins. In this way, the cell body is the workhorse of the cell. The white matter is composed of the axons of the neurons, and, in particular, axons that are covered with a sheath of myelin (fatty support cells that are whitish in color). Axons conduct the electrical signals from the cell and are, therefore, critical to cell communication. People use the expression “use your gray matter” when they want a person to think harder. The “gray matter” in this expression is probably a reference to the cerebral hemispheres more generally, the gray cortical sheet (the convoluted surface of the cortex) being the most visible part. However, both the gray matter and the white matter are critical to proper functioning of the mind. Losses of either result in deficits in language, memory, reasoning, and other mental functions. See Figure 3 for MRI slices showing both the gray cortical sheet and the inner white matter that connects its cell bodies.
Studying the Human Brain
How do we know what the brain does? We have gathered knowledge about the functions of the brain from many different methods. Each method is useful for answering distinct types of questions, but the strongest evidence for a specific role or function of a particular brain area is converging evidence; that is, similar findings reported from multiple studies using different methods. One of the first organized attempts to study the functions of the brain was phrenology, a popular field of study in the first half of the 19th century. Phrenologists assumed that various features of the brain, such as its uneven surface, are reflected on the skull; therefore, they attempted to correlate bumps and indentations of the skull with specific functions of the brain. For example, they would claim that a very artistic person has ridges on the head that vary in size and location from those of someone who is very good at spatial reasoning. Although the assumption that the skull reflects the underlying brain structure has been proven wrong, phrenology nonetheless significantly influenced current-day neuroscience and its thinking about the functions of the brain: namely, the idea that different parts of the brain are devoted to very specific functions that can be identified through scientific inquiry.
Neuroanatomy
Dissection of the brain, in either animals or cadavers, has been a critical tool of neuroscientists since 340 BC, when Aristotle first published his dissections. Since then this method has advanced considerably with the discovery of various staining techniques that can highlight particular cells. Because the brain can be sliced very thinly, examined under the microscope, and particular cells highlighted, this method is especially useful for studying specific groups of neurons or small brain structures; that is, it has a very high spatial resolution. Dissections allow scientists to study changes in the brain that occur due to various diseases or experiences (e.g., exposure to drugs or brain injuries).

Virtual dissection studies with living humans are also conducted. Here, the brain is imaged using computerized axial tomography (CAT) or MRI scanners; they reveal with very high precision the various structures in the brain and can help detect changes in gray or white matter. These changes in the brain can then be correlated with behavior, such as performance on memory tests, and, therefore, implicate specific brain areas in certain cognitive functions.
Changing the Brain
Some researchers induce lesions or ablate (i.e., remove) parts of the brain in animals. If the animal’s behavior changes after the lesion, we can infer that the removed structure is important for that behavior. Lesions of human brains are studied in patient populations only; that is, patients who have lost a brain region due to a stroke or other injury, or who have had surgical removal of a structure to treat a particular disease (e.g., a callosotomy to control epilepsy, as in split-brain patients). From such case studies, we can infer brain function by measuring changes in the behavior of the patients before and after the lesion.

Because the brain works by generating electrical signals, it is also possible to change brain function with electrical stimulation. Transcranial magnetic stimulation (TMS) refers to a technique whereby a brief magnetic pulse is applied to the head that temporarily induces a weak electrical current in the brain. Although the effects of TMS are sometimes referred to as temporary virtual lesions, it is more appropriate to describe the induced electricity as interference with neurons’ normal communication with each other. TMS allows very precise study of when events in the brain happen, so it has good temporal resolution, but its application is limited to the surface of the cortex and cannot extend to deep areas of the brain. Transcranial direct current stimulation (tDCS) is similar to TMS except that it uses electrical current directly, rather than inducing it with magnetic pulses, by placing small electrodes on the skull. A brain area is stimulated by a low current (equivalent to an AA battery) for a more extended period of time than in TMS. When used in combination with cognitive training, tDCS has been shown to improve performance on many cognitive tasks, including mathematical ability, memory, attention, and coordination (e.g., Brasil-Neto, 2012; Feng, Bowden, & Kautz, 2013; Kuo & Nitsche, 2012).
Neuroimaging
Neuroimaging tools are used to study the brain in action; that is, when it is engaged in a specific task. Positron emission tomography (PET) records blood flow in the brain. The PET scanner detects a radioactive substance that is injected into the bloodstream of the participant just before or while he or she is performing some task (e.g., adding numbers). Because active neuron populations require metabolites, more blood and hence more radioactive substance flows into those regions. PET scanners detect the injected radioactive substance in specific brain regions, allowing researchers to infer that those areas were active during the task. Functional magnetic resonance imaging (fMRI) also relies on blood flow in the brain. This method, however, measures the changes in oxygen levels in the blood and does not require any substance to be injected into the participant. Both of these tools have good spatial resolution (although not as precise as dissection studies), but because it takes at least several seconds for the blood to arrive at the active areas of the brain, PET and fMRI have poor temporal resolution; that is, they do not tell us very precisely when the activity occurred.
Electroencephalography (EEG), on the other hand, measures the electrical activity of the brain, and therefore it has a much greater temporal resolution (millisecond precision rather than seconds) than PET or fMRI. As in tDCS, electrodes are placed on the participant’s head while he or she is performing a task. In this case, however, many more electrodes are used, and they measure rather than produce activity. Because the electrical activity picked up at any particular electrode can be coming from anywhere in the brain, EEG has poor spatial resolution; that is, we have only a rough idea of which part of the brain generates the measured activity.
Diffuse optical imaging (DOI) can give researchers the best of both worlds: high spatial and temporal resolution, depending on how it is used. Here, one shines infrared light into the brain, and measures the light that comes back out. DOI relies on the fact that the properties of the light change when it passes through oxygenated blood, or when it encounters active neurons. Researchers can then infer from the properties of the collected light what regions in the brain were engaged by the task. When DOI is set up to detect changes in blood oxygen levels, the temporal resolution is low and comparable to PET or fMRI. However, when DOI is set up to directly detect active neurons, it has both high spatial and temporal resolution.
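The temporal-resolution contrast can be illustrated with a short simulation. In the sketch below, a brief neural event is convolved with a slow, gamma-shaped kernel standing in for the hemodynamic response (the kernel shape is illustrative, not a calibrated model); the blood-based signal peaks seconds after an event that EEG would register almost instantly.

# Sketch of why blood-flow methods blur timing: convolve a brief neural
# event with a slow, gamma-shaped kernel standing in for the hemodynamic
# response (the kernel shape is illustrative, not a calibrated HRF).
import numpy as np

dt = 0.1                                # seconds per sample
t = np.arange(0, 20, dt)

neural = np.zeros_like(t)
neural[int(round(1.0 / dt))] = 1.0      # a single brief neural event at t = 1 s

kernel = t**5 * np.exp(-t)              # peaks about 5 s after onset
kernel /= kernel.sum()

blood_signal = np.convolve(neural, kernel)[: len(t)]
peak = t[blood_signal.argmax()]
print(f"Neural event at 1.0 s; blood-based signal peaks at {peak:.1f} s")
# An EEG electrode, sampling at millisecond resolution, would register the
# event essentially as it happens.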
Because the spatial and temporal resolution of each tool varies, the strongest evidence for the role a certain brain area serves comes from converging evidence. For example, we are more likely to believe that the hippocampal formation is involved in memory if multiple studies using a variety of tasks and different neuroimaging tools provide evidence for this hypothesis. The brain is a complex system, and only advances in brain research will show whether the brain can ever really understand itself.
Unpacking Left Brain vs Right Brain
The concept of lateralization of function is more complicated than pop media presents. In this video, shared by Society for Neuroscience, Michael Colacci, a medical school student at Northwestern University Feinberg School of Medicine, unpacks some of the misunderstandings associated with lateralization of function.
Check Your Knowledge
To help you with your studying, we’ve included some practice questions for this module. These questions do not necessarily address all content in this module. They are intended as practice, and you are responsible for all of the content in this module even if there is no associated practice question. To promote deeper engagement with the material, we encourage you to create some questions of your own for your practice. You can then also return to these self-generated questions later in the course to test yourself.
Vocabulary
- Ablation
- Surgical removal of brain tissue.
- Axial plane
- See “horizontal plane.”
- Basal ganglia
- Subcortical structures of the cerebral hemispheres involved in voluntary movement.
- Brain stem
- The “trunk” of the brain comprised of the medulla, pons, midbrain, and diencephalon.
- Callosotomy
- Surgical procedure in which the corpus callosum is severed (used to control severe epilepsy).
- Case study
- A thorough study of a patient (or a few patients) with naturally occurring lesions.
- Cerebellum
- The distinctive structure at the back of the brain, Latin for “small brain.”
- Cerebral cortex
- The outermost gray matter of the cerebrum; the distinctive convolutions characteristic of the mammalian brain.
- Cerebral hemispheres
- The cerebral cortex, underlying white matter, and subcortical structures.
- Cerebrum
- Usually refers to the cerebral cortex and associated white matter, but in some texts includes the subcortical structures.
- Contralateral
- Literally “opposite side”; used to refer to the fact that the two hemispheres of the brain process sensory information and motor commands for the opposite side of the body (e.g., the left hemisphere controls the right side of the body).
- Converging evidence
- Similar findings reported from multiple studies using different methods.
- Coronal plane
- A slice that runs from head to foot; brain slices in this plane are similar to slices of a loaf of bread, with the eyes being the front of the loaf.
- Diffuse optical imaging (DOI)
- A neuroimaging technique that infers brain activity by measuring changes in light as it is passed through the skull and surface of the brain.
- Electroencephalography (EEG)
- A neuroimaging technique that measures electrical brain activity via multiple electrodes on the scalp.
- Frontal lobe
- The front most (anterior) part of the cerebrum; anterior to the central sulcus and responsible for motor output and planning, language, judgment, and decision-making.
- Functional magnetic resonance imaging (fMRI)
- A neuroimaging technique that infers brain activity by measuring changes in oxygen levels in the blood.
- Gray matter
- The outer grayish regions of the brain comprised of the neurons’ cell bodies.
- Gyri
- (plural) Folds between sulci in the cortex.
- Gyrus
- A fold between sulci in the cortex.
- Horizontal plane
- A slice that runs horizontally through a standing person (i.e., parallel to the floor); slices of brain in this plane divide the top and bottom parts of the brain; this plane is similar to slicing a hamburger bun.
- Lateralized
- To the side; used to refer to the fact that specific functions may reside primarily in one hemisphere or the other (e.g., for the majority of individuals, the left hemisphere is most responsible for language).
- Lesion
- A region in the brain that suffered damage through injury, disease, or medical intervention.
- Limbic system
- Includes the subcortical structures of the amygdala and hippocampal formation as well as some cortical structures; responsible for aversion and gratification.
- Metabolite
- A substance necessary for a living organism to maintain life.
- Motor cortex
- Region of the frontal lobe responsible for voluntary movement; the motor cortex has a contralateral representation of the human body.
- Myelin
- Fatty tissue, produced by glial cells (see module, “Neurons”) that insulates the axons of the neurons; myelin is necessary for normal conduction of electrical impulses among neurons.
- Nomenclature
- Naming conventions.
- Occipital lobe
- The back most (posterior) part of the cerebrum; involved in vision.
- Parietal lobe
- The part of the cerebrum between the frontal and occipital lobes; involved in bodily sensations, visual attention, and integrating the senses.
- Phrenology
- A now-discredited field of brain study, popular in the first half of the 19th century, that correlated bumps and indentations of the skull with specific functions of the brain.
- Positron emission tomography (PET)
- A neuroimaging technique that measures brain activity by detecting the presence of a radioactive substance in the brain that is initially injected into the bloodstream and then pulled in by active brain tissue.
- Sagittal plane
- A slice that runs vertically from front to back; slices of brain in this plane divide the left and right side of the brain; this plane is similar to slicing a baked potato lengthwise.
- Somatosensory (body sensations) cortex
- The region of the parietal lobe responsible for bodily sensations; the somatosensory cortex has a contralateral representation of the human body.
- Spatial resolution
- A term that refers to how small the elements of an image are; high spatial resolution means the device or technique can resolve very small elements; in neuroscience it describes how small of a structure in the brain can be imaged.
- Split-brain patient
- A patient who has had most or all of his or her corpus callosum severed.
- Subcortical
- Structures that lie beneath the cerebral cortex, but above the brain stem.
- Sulci
- (plural) Grooves separating folds of the cortex.
- Sulcus
- A groove separating folds of the cortex.
- Temporal lobe
- The part of the cerebrum in front of (anterior to) the occipital lobe and below the lateral fissure; involved in vision, auditory processing, memory, and integrating vision and audition.
- Temporal resolution
- A term that refers to how small a unit of time can be measured; high temporal resolution means capable of resolving very small units of time; in neuroscience it describes how precisely in time a process can be measured in the brain.
- Transcranial direct current stimulation (tDCS)
- A neuroscience technique that passes mild electrical current directly through a brain area by placing small electrodes on the skull.
- Transcranial magnetic stimulation (TMS)
- A neuroscience technique whereby a brief magnetic pulse is applied to the head that temporarily induces a weak electrical current that interferes with ongoing activity.
- Transverse plane
- See “horizontal plane.”
- Visual hemifield
- The half of visual space (what we see) on one side of fixation (where we are looking); the left hemisphere is responsible for the right visual hemifield, and the right hemisphere is responsible for the left visual hemifield.
- White matter
- The inner whitish regions of the cerebrum comprised of the myelinated axons of neurons in the cerebral cortex.
References
- Beck, D. M., & Kastner, S. (2009). Top-down and bottom-up mechanisms in biasing competition in the human brain. Vision Research, 49, 1154–1165.
- Brasil-Neto, J. P. (2012). Learning, memory, and transcranial direct current stimulation. Frontiers in Psychiatry, 3(80). doi: 10.3389/fpsyt.2012.00080.
- Feng, W. W., Bowden, M. G., & Kautz, S. (2013). Review of transcranial direct current stimulation in poststroke recovery. Topics in Stroke Rehabilitation, 20, 68–77.
- Franz, E. A., Eliassen, J. C., Ivry, R. B., & Gazzaniga, M. S. (1996). Dissociation of spatial and temporal coupling in the bimanual movements of callosotomy patients. Psychological Science, 7, 306–310.
- Kandel, E. R., Schwartz, J. H., & Jessell, T. M. (Eds.). (2000). Principles of neural science (4th ed.). New York, NY: McGraw-Hill.
- Kuo, M. F., & Nitsche, M. A. (2012). Effects of transcranial electrical stimulation on cognition. Clinical EEG and Neuroscience, 43, 192–199.
- Luck, S. J., Hillyard, S. A., Mangun, G. R., & Gazzaniga, M. S. (1989). Independent hemispheric attentional systems mediate visual search in split-brain patients. Nature, 342, 543–545.
- Swanson, L. (2000). What is the brain? Trends in Neurosciences, 23, 519–527.
How to cite this Chapter using APA Style:
Beck, D. & Tapia, E. (2019). The brain. Adapted for use by Queen's University. Original chapter in R. Biswas-Diener & E. Diener (Eds), Noba textbook series: Psychology. Champaign, IL: DEF publishers. Retrieved from http://noba.to/jx7268sd
Copyright and Acknowledgment:
This material is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit: http://creativecommons.org/licenses/by-nc-sa/4.0/deed.en_US.
This material is attributed to the Diener Education Fund (copyright © 2018) and can be accessed via this link: http://noba.to/jx7268sd.
Additional information about the Diener Education Fund (DEF) can be accessed here.
Original chapter by Beth Chance and Allan Rossman adapted by the Queen's University Psychology Department
This Open Access chapter was originally written for the NOBA project. Information on the NOBA project can be found below.
As our society increasingly calls for evidence-based decision making, it is important to consider how and when we can draw valid inferences from data. This module will use four recent research studies to highlight key elements of a statistical investigation.
Learning Objectives
- Define basic elements of a statistical investigation.
- Describe the role of p-values and confidence intervals in statistical inference.
- Describe the role of random sampling in generalizing conclusions from a sample to a population.
- Describe the role of random assignment in drawing cause-and-effect conclusions.
- Critique statistical studies.
Introduction
Does drinking coffee actually increase your life expectancy? A recent study (Freedman, Park, Abnet, Hollenbeck, & Sinha, 2012) found that men who drank at least six cups of coffee a day had a 10% lower chance of dying (women 15% lower) than those who drank none. Does this mean you should pick up or increase your own coffee habit?
Modern society has become awash in studies such as this; you can read about several such studies in the news every day. Moreover, data abound everywhere in modern life. Conducting such a study well, and interpreting the results of such studies well for making informed decisions or setting policies, requires understanding basic ideas of statistics, the science of gaining insight from data. Rather than relying on anecdote and intuition, statistics allows us to systematically study phenomena of interest.
Key components to a statistical investigation are:
- Planning the study: Start by asking a testable research question and deciding how to collect data. For example, how long was the study period of the coffee study? How many people were recruited for the study, how were they recruited, and from where? How old were they? What other variables were recorded about the individuals, such as smoking habits, on the comprehensive lifestyle questionnaires? Were changes made to the participants’ coffee habits during the course of the study?
- Examining the data: What are appropriate ways to examine the data? What graphs are relevant, and what do they reveal? What descriptive statistics can be calculated to summarize relevant aspects of the data, and what do they reveal? What patterns do you see in the data? Are there any individual observations that deviate from the overall pattern, and what do they reveal? For example, in the coffee study, did the proportions differ when we compared the smokers to the non-smokers?
- Inferring from the data: What are valid statistical methods for drawing inferences “beyond” the data you collected? In the coffee study, is the 10%–15% reduction in risk of death something that could have happened just by chance?
- Drawing conclusions: Based on what you learned from your data, what conclusions can you draw? Who do you think these conclusions apply to? (Were the people in the coffee study older? Healthy? Living in cities?) Can you draw a cause-and-effect conclusion about your treatments? (Are scientists now saying that the coffee drinking is the cause of the decreased risk of death?)
Notice that the numerical analysis (“crunching numbers” on the computer) comprises only a small part of the overall statistical investigation. In this module, you will see how we can answer some of these questions and what questions you should be asking about any statistical investigation you read about.
Distributional Thinking
When data are collected to address a particular question, an important first step is to think of meaningful ways to organize and examine the data. The most fundamental principle of statistics is that data vary. The pattern of that variation is crucial to capture and to understand. Often, careful presentation of the data will address many of the research questions without requiring more sophisticated analyses. It may, however, point to additional questions that need to be examined in more detail.
Example 1: Researchers investigated whether cancer pamphlets are written at an appropriate level to be read and understood by cancer patients (Short, Moriarty, & Cooley, 1995). Tests of reading ability were given to 63 patients. In addition, readability level was determined for a sample of 30 pamphlets, based on characteristics such as the lengths of words and sentences in the pamphlet. The results, reported in terms of grade levels, are displayed in Table 1.
These two variables reveal two fundamental aspects of statistical thinking:
- Data vary. More specifically, values of a variable (such as reading level of a cancer patient or readability level of a cancer pamphlet) vary.
- Analyzing the pattern of variation, called the distribution of the variable, often reveals insights.
Addressing the research question of whether the cancer pamphlets are written at appropriate levels for the cancer patients requires comparing the two distributions. A naïve comparison might focus only on the centers of the distributions. Both medians turn out to be ninth grade, but considering only medians ignores the variability and the overall distributions of these data. A more illuminating approach is to compare the entire distributions, for example with a graph, as in Figure 1.
Figure 1 makes clear that the two distributions are not well aligned at all. The most glaring discrepancy is that many patients (17/63, or 27%, to be precise) have a reading level below that of the most readable pamphlet. These patients will need help to understand the information provided in the cancer pamphlets. Notice that this conclusion follows from considering the distributions as a whole, not simply measures of center or variability, and that the graph contrasts those distributions more immediately than the frequency tables.
Statistical Significance
Even when we find patterns in data, often there is still uncertainty in various aspects of the data. For example, there may be potential for measurement errors (even your own body temperature can fluctuate by almost 1 °F over the course of the day). Or we may only have a “snapshot” of observations from a more long-term process or only a small subset of individuals from the population of interest. In such cases, how can we determine whether the patterns we see in our small set of data are convincing evidence of a systematic phenomenon in the larger process or population?
Example 2: In a study reported in the November 2007 issue of Nature, researchers investigated whether pre-verbal infants take into account an individual’s actions toward others in evaluating that individual as appealing or aversive (Hamlin, Wynn, & Bloom, 2007). In one component of the study, 10-month-old infants were shown a “climber” character (a piece of wood with “googly” eyes glued onto it) that could not make it up a hill in two tries. Then the infants were shown two scenarios for the climber’s next try, one where the climber was pushed to the top of the hill by another character (“helper”), and one where the climber was pushed back down the hill by another character (“hinderer”). The infant was alternately shown these two scenarios several times. Then the infant was presented with two pieces of wood (representing the helper and the hinderer characters) and asked to pick one to play with. The researchers found that of the 16 infants who made a clear choice, 14 chose to play with the helper toy.
One possible explanation for this clear majority result is that the helping behavior of the one toy increases the infants’ likelihood of choosing that toy. But are there other possible explanations? What about the color of the toy? Well, prior to collecting the data, the researchers arranged it so that each color and shape (red square and blue circle) would be seen by the same number of infants. Or maybe the infants had right-handed tendencies and so picked whichever toy was closer to their right hand? Well, prior to collecting the data, the researchers arranged it so half the infants saw the helper toy on the right and half on the left. Or maybe the shapes of these wooden characters (square, triangle, circle) had an effect? Perhaps, but again, the researchers controlled for this by rotating which shape was the helper toy, the hinderer toy, and the climber. When designing experiments, it is important to control for as many of the variables that might affect the responses as possible.
It is beginning to appear that the researchers accounted for all the other plausible explanations. But there is one more important consideration that cannot be controlled—if we did the study again with these 16 infants, they might not make the same choices. In other words, there is some randomness inherent in their selection process. Maybe each infant had no genuine preference at all, and it was simply “random luck” that led to 14 infants picking the helper toy. Although this random component cannot be controlled, we can apply a probability model to investigate the pattern of results that would occur in the long run if random chance were the only factor.
If the infants were equally likely to pick between the two toys, then each infant had a 50% chance of picking the helper toy. It’s like each infant tossed a coin, and if it landed heads, the infant picked the helper toy. So if we tossed a coin 16 times, could it land heads 14 times? Sure, it’s possible, but it turns out to be very unlikely. Getting 14 (or more) heads in 16 tosses is about as likely as tossing a coin and getting 9 heads in a row. This probability is referred to as a p-value. The p-value tells you how often a random process would give a result at least as extreme as what was found in the actual study, assuming there was nothing other than random chance at play. So, if we assume that each infant was choosing equally, then the probability that 14 or more out of 16 infants would choose the helper toy is found to be 0.0021. We have only two logical possibilities: either the infants have a genuine preference for the helper toy, or the infants have no preference (50/50) and an outcome that would occur only 2 times in 1,000 iterations happened in this study. Because this p-value of 0.0021 is quite small, we conclude that the study provides very strong evidence that these infants have a genuine preference for the helper toy. We often compare the p-value to some cut-off value (called the level of significance, typically around 0.05). If the p-value is smaller than that cut-off value, then we reject the hypothesis that only random chance was at play here. In this case, these researchers would conclude that significantly more than half of the infants in the study chose the helper toy, giving strong evidence of a genuine preference for the toy with the helping behavior.
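If you want to verify this p-value yourself, the binomial calculation is short. Below is a minimal sketch in Python (our choice of language; the chapter itself prescribes no software) that counts the coin-toss outcomes with 14 or more heads out of 16 under the equal-preference model.

```python
# Exact binomial p-value for the helper-toy study: the chance of getting
# 14 or more "heads" in 16 fair coin tosses. Standard library only.
from math import comb

n, k = 16, 14  # 16 infants; 14 chose the helper toy

# Count outcomes with k or more heads, divided by all 2**n outcomes
p_value = sum(comb(n, x) for x in range(k, n + 1)) / 2 ** n
print(round(p_value, 4))  # 0.0021, matching the value in the text
```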
Generalizability
One limitation to the previous study is that the conclusion only applies to the 16 infants in the study. We don’t know much about how those 16 infants were selected. Suppose we want to select a subset of individuals (a sample) from a much larger group of individuals (the population) in such a way that conclusions from the sample can be generalized to the larger population. This is the question faced by pollsters every day.
Example 3: The General Social Survey (GSS) is a survey on societal trends conducted every other year in the United States. Based on a sample of about 2,000 adult Americans, researchers make claims about what percentage of the U.S. population consider themselves to be “liberal,” what percentage consider themselves “happy,” what percentage feel “rushed” in their daily lives, and many other issues. The key to making these claims about the larger population of all American adults lies in how the sample is selected. The goal is to select a sample that is representative of the population, and a common way to achieve this goal is to select a random sample that gives every member of the population an equal chance of being selected for the sample. In its simplest form, random sampling involves numbering every member of the population and then using a computer to randomly select the subset to be surveyed. Most polls don’t operate exactly like this, but they do use probability-based sampling methods to select individuals from nationally representative panels.
In 2004, the GSS reported that 817 of 977 respondents (or 83.6%) indicated that they always or sometimes feel rushed. This is a clear majority, but we again need to consider variation due to random sampling. Fortunately, we can use the same probability model we did in the previous example to investigate the probable size of this error. (Note, we can use the coin-tossing model when the actual population size is much, much larger than the sample size, as then we can still consider the probability to be the same for every individual in the sample.) This probability model predicts that the sample result will be within 3 percentage points of the population value (roughly 1 over the square root of the sample size, the margin of error). A statistician would conclude, with 95% confidence, that between 80.6% and 86.6% of all adult Americans in 2004 would have responded that they sometimes or always feel rushed.
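As a rough illustration of that arithmetic, here is a short Python sketch (ours, not the GSS’s actual procedure) applying the one-over-the-square-root-of-n rule to the 2004 result. Note that 1/√977 is about 3.2 percentage points; the text rounds it to 3, which is why its interval (80.6% to 86.6%) is slightly narrower than the one printed below.

```python
# Approximate 95% margin of error for a sample proportion, using the
# rough 1/sqrt(n) rule described in the text.
from math import sqrt

n = 977
p_hat = 817 / n            # sample proportion: about 0.836
margin = 1 / sqrt(n)       # about 0.032, i.e., 3.2 percentage points

print(f"{p_hat:.1%} +/- {margin:.1%}")
print(f"interval: ({p_hat - margin:.1%}, {p_hat + margin:.1%})")
```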
The key to the margin of error is that when we use a probability sampling method, we can make claims about how often (in the long run, with repeated random sampling) the sample result would fall within a certain distance from the unknown population value by chance (meaning by random sampling variation) alone. Conversely, non-random samples are often susceptible to bias, meaning the sampling method systematically over-represents some segments of the population and under-represents others. We also still need to consider other sources of bias, such as individuals not responding honestly. These sources of error are not measured by the margin of error.
Cause and Effect Conclusions
In many research studies, the primary question of interest concerns differences between groups. The question then becomes how the groups were formed (e.g., by selecting people who already drink coffee vs. those who don’t). In some studies, the researchers actively form the groups themselves. But then we have a similar question—could any differences we observe between the groups be an artifact of that group-formation process? Or is the difference we observe between the groups so large that we can discount a “fluke” in the group-formation process as a reasonable explanation for what we find?
Example 4: A psychology study investigated whether people tend to display more creativity when they are thinking about intrinsic or extrinsic motivations (Ramsey & Schafer, 2002, based on a study by Amabile, 1985). The subjects were 47 people with extensive experience with creative writing. Subjects began by answering survey questions about either intrinsic motivations for writing (such as the pleasure of self-expression) or extrinsic motivations (such as public recognition). Then all subjects were instructed to write a haiku, and those poems were evaluated for creativity by a panel of judges. The researchers conjectured beforehand that subjects who were thinking about intrinsic motivations would display more creativity than subjects who were thinking about extrinsic motivations. The creativity scores from the 47 subjects in this study are displayed in Figure 2, where higher scores indicate more creativity.
In this example, the key question is whether the type of motivation affects creativity scores. In particular, do subjects who were asked about intrinsic motivations tend to have higher creativity scores than subjects who were asked about extrinsic motivations?
Figure 2 reveals that creativity scores varied considerably within each motivation group, and that the scores of the two groups overlap considerably. In other words, it’s certainly not always the case that those with intrinsic motivations have higher creativity than those with extrinsic motivations, but there may still be a statistical tendency in this direction. (Psychologist Keith Stanovich (2013) refers to people’s difficulties with thinking about such probabilistic tendencies as “the Achilles heel of human cognition.”)
The mean creativity score is 19.88 for the intrinsic group, compared to 15.74 for the extrinsic group, which supports the researchers’ conjecture. Yet comparing only the means of the two groups fails to consider the variability of creativity scores within the groups. We can measure variability with, for instance, the standard deviation: 5.25 for the extrinsic group and 4.40 for the intrinsic group. The standard deviations tell us that most of the creativity scores are within about 5 points of the mean score in each group. We see that the mean score for the intrinsic group lies within one standard deviation of the mean score for the extrinsic group. So, although there is a tendency for the creativity scores to be higher in the intrinsic group, on average, the difference is not extremely large.
We again want to consider possible explanations for this difference. The study only involved individuals with extensive creative writing experience. Although this limits the population to which we can generalize, it does not explain why the mean creativity score was a bit larger for the intrinsic group than for the extrinsic group. Maybe women tend to receive higher creativity scores? Here is where we need to focus on how the individuals were assigned to the motivation groups. If only women were in the intrinsic motivation group and only men in the extrinsic group, then this would present a problem because we wouldn’t know if the intrinsic group did better because of the different type of motivation or because they were women. However, the researchers guarded against such a problem by randomly assigning the individuals to the motivation groups. Like flipping a coin, each individual was just as likely to be assigned to either type of motivation. Why is this helpful? Because this random assignment tends to balance out all the variables related to creativity we can think of, and even those we don’t think of in advance, between the two groups. So we should have a similar male/female split between the two groups; we should have a similar age distribution between the two groups; we should have a similar distribution of educational background between the two groups; and so on. Random assignment should produce groups that are as similar as possible except for the type of motivation, which presumably eliminates all those other variables as possible explanations for the observed tendency for higher scores in the intrinsic group.
But does this always work? No. By “luck of the draw,” the groups may still be a little different prior to answering the motivation survey. So the question is: is it possible that an unlucky random assignment is responsible for the observed difference in creativity scores between the groups? In other words, suppose each individual’s poem was going to get the same creativity score no matter which group they were assigned to, so that the type of motivation in no way impacted their score. Then how often would the random-assignment process alone lead to a difference in mean creativity scores as large as (or larger than) 19.88 – 15.74 = 4.14 points?
We again want to apply a probability model to approximate a p-value, but this time the model will be a bit different. Think of writing everyone’s creativity scores on index cards, shuffling the cards, dealing out 23 to the extrinsic motivation group and 24 to the intrinsic motivation group, and then finding the difference in the group means. We (better yet, the computer) can repeat this process over and over to see how often, when the scores themselves don’t change, random assignment alone leads to a difference in means at least as large as 4.14. Figure 3 shows the results from 1,000 such hypothetical random assignments for these scores.
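For the curious, here is a minimal Python sketch of that card-shuffling simulation. The chapter does not reproduce the 47 raw creativity scores, so the demo below generates made-up stand-in scores; only the group sizes (23 and 24) and the observed difference of 4.14 points come from the study.

```python
# A permutation (re-randomization) test: shuffle the "index cards" and
# count how often chance alone produces a group difference at least as
# large as the observed one.
import random

def permutation_p_value(scores, n_extrinsic, observed_diff, reps=1000):
    scores = list(scores)
    extreme = 0
    for _ in range(reps):
        random.shuffle(scores)                 # re-deal the cards
        extrinsic = scores[:n_extrinsic]
        intrinsic = scores[n_extrinsic:]
        diff = sum(intrinsic) / len(intrinsic) - sum(extrinsic) / len(extrinsic)
        if diff >= observed_diff:              # at least as large as observed
            extreme += 1
    return extreme / reps

# Demo with hypothetical scores (the study's raw data are not given here):
random.seed(1)
fake_scores = [random.gauss(18, 5) for _ in range(47)]
print(permutation_p_value(fake_scores, n_extrinsic=23, observed_diff=4.14))
```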
Only 2 of the 1,000 simulated random assignments produced a difference in group means of 4.14 or larger. In other words, the approximate p-value is 2/1,000 = 0.002. This small p-value indicates that it would be very surprising for the random assignment process alone to produce such a large difference in group means. Therefore, as with Example 2, we have strong evidence that focusing on intrinsic motivations tends to increase creativity scores, as compared to thinking about extrinsic motivations.
Notice that the previous statement implies a cause-and-effect relationship between motivation and creativity score; is such a strong conclusion justified? Yes, because of the random assignment used in the study. That should have balanced out any other variables between the two groups, so now that the small p-value convinces us that the higher mean in the intrinsic group wasn’t just a coincidence, the only reasonable explanation left is the difference in the type of motivation. Can we generalize this conclusion to everyone? Not necessarily—we could cautiously generalize this conclusion to individuals with extensive experience in creative writing similar to the individuals in this study, but we would still want to know more about how these individuals were selected to participate.
Conclusion
Statistical thinking involves the careful design of a study to collect meaningful data to answer a focused research question, detailed analysis of patterns in the data, and drawing conclusions that go beyond the observed data. Random sampling is paramount to generalizing results from our sample to a larger population, and random assignment is key to drawing cause-and-effect conclusions. With both kinds of randomness, probability models help us assess how much random variation we can expect in our results, in order to determine whether our results could happen by chance alone and to estimate a margin of error.
So where does this leave us with regard to the coffee study mentioned at the beginning of this module? We can answer many of the questions:
- This was a 14-year study conducted by researchers at the National Cancer Institute.
- The results were published in the June issue of the New England Journal of Medicine, a respected, peer-reviewed journal.
- The study reviewed coffee habits of more than 402,000 people ages 50 to 71 from six states and two metropolitan areas. Those with cancer, heart disease, and stroke were excluded at the start of the study. Coffee consumption was assessed once at the start of the study.
- About 52,000 people died during the course of the study.
- People who drank between two and five cups of coffee daily showed a lower risk as well, but the amount of reduction increased for those drinking six or more cups.
- The sample sizes were fairly large and so the p-values are quite small, even though percent reduction in risk was not extremely large (dropping from a 12% chance to about 10%–11%).
- Whether coffee was caffeinated or decaffeinated did not appear to affect the results.
- This was an observational study, so no cause-and-effect conclusions can be drawn between coffee drinking and increased longevity, contrary to the impression conveyed by many news headlines about this study. In particular, it’s possible that those with chronic diseases don’t tend to drink coffee.
This study needs to be reviewed in the larger context of similar studies and the consistency of results across studies, with the constant caution that this was not a randomized experiment. Although a statistical analysis can “adjust” for other potential confounding variables, we are not yet convinced that researchers have identified them all or completely isolated why this decrease in death risk is evident. Researchers can now take the findings of this study and develop more focused studies that address new questions.
Check Your Knowledge
To help you with your studying, we’ve included some practice questions for this module. These questions do not necessarily address all content in this module. They are intended as practice, and you are responsible for all of the content in this module even if there is no associated practice question. To promote deeper engagement with the material, we encourage you to create some questions of your own for your practice. You can then also return to these self-generated questions later in the course to test yourself.
Vocabulary
- Cause-and-effect
- Related to whether we say one variable is causing changes in the other variable, versus other variables that may be related to these two variables.
- Confidence interval
- An interval of plausible values for a population parameter; the interval of values within the margin of error of a statistic.
- Distribution
- The pattern of variation in data.
- Generalizability
- Related to whether the results from the sample can be generalized to a larger population.
- Margin of error
- The expected amount of random variation in a statistic; often defined for 95% confidence level.
- Parameter
- A numerical result summarizing a population (e.g., mean, proportion).
- Population
- A larger collection of individuals that we would like to generalize our results to.
- P-value
- The probability of observing an outcome at least as extreme as the one in the sample, under a conjecture about the larger population or process.
- Random assignment
- Using a probability-based method to divide a sample into treatment groups.
- Random sampling
- Using a probability-based method to select a subset of individuals for the sample from the population.
- Sample
- The collection of individuals on which we collect data.
- Statistic
- A numerical result computed from a sample (e.g., mean, proportion).
- Statistical significance
- A result is statistically significant if it is unlikely to arise by chance alone.
References
- Amabile, T. (1985). Motivation and creativity: Effects of motivational orientation on creative writers. Journal of Personality and Social Psychology, 48(2), 393–399.
- Freedman, N. D., Park, Y., Abnet, C. C., Hollenbeck, A. R., & Sinha, R. (2012). Association of coffee drinking with total and cause-specific mortality. New England Journal of Medicine, 366, 1891–1904.
- Hamlin, J. K., Wynn, K., & Bloom, P. (2007). Social evaluation by preverbal infants. Nature, 450, 557–559.
- Ramsey, F., & Schafer, D. (2002). The statistical sleuth: A course in methods of data analysis. Belmont, CA: Duxbury.
- Short, T., Moriarty, H., & Cooley, M. E. (1995). Readability of educational materials for patients with cancer. Journal of Statistics Education, 3(2).
- Stanovich, K. (2013). How to think straight about psychology (10th ed.). Upper Saddle River, NJ: Pearson.
How to cite this Chapter using APA Style:
Chance, B. & Rossman, A. (2019). Statistical thinking. Adapted for use by Queen's University. Original chapter in R. Biswas-Diener & E. Diener (Eds), Noba textbook series: Psychology. Champaign, IL: DEF publishers. Retrieved from http://noba.to/ruaz6wjs
Copyright and Acknowledgment:
This material is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit: http://creativecommons.org/licenses/by-nc-sa/4.0/deed.en_US.
This material is attributed to the Diener Education Fund (copyright © 2018) and can be accessed via this link: http://noba.to/ruaz6wjs.
Additional information about the Diener Education Fund (DEF) can be accessed here.
Original chapter by Christie Napa Scollon adapted by the Queen's University Psychology Department
This Open Access chapter was originally written for the NOBA project. Information on the NOBA project can be found below.
Psychologists test research questions using a variety of methods. Most research relies on either correlations or experiments. With correlations, researchers measure variables as they naturally occur in people and compute the degree to which two variables go together. With experiments, researchers actively make changes in one variable and watch for changes in another variable. Experiments allow researchers to make causal inferences. Other types of methods include longitudinal and quasi-experimental designs. Many factors, including practical constraints, determine the type of methods researchers use. Often researchers survey people even though it would be better, but more expensive and time-consuming, to track them longitudinally.
Learning Objectives
- Articulate the difference between correlational and experimental designs.
- Understand how to interpret correlations.
- Understand how experiments help us to infer causality.
- Understand how surveys relate to correlational and experimental research.
- Explain what a longitudinal study is.
- List a strength and weakness of different research designs.
Research Designs
In the early 1970s, a man named Uri Geller tricked the world: he convinced hundreds of thousands of people that he could bend spoons and slow watches using only the power of his mind. In fact, if you were in the audience, you would have likely believed he had psychic powers. Everything looked authentic—this man had to have paranormal abilities! So, why have you probably never heard of him before? Because when Uri was asked to perform his miracles in line with scientific experimentation, he was no longer able to do them. That is, even though it seemed like he was doing the impossible, when he was tested by science, he proved to be nothing more than a clever magician.
When we look at dinosaur bones to make educated guesses about extinct life, or systematically chart the heavens to learn about the relationships between stars and planets, or study magicians to figure out how they perform their tricks, we are forming observations—the foundation of science. Although we are all familiar with the saying “seeing is believing,” conducting science is more than just what your eyes perceive. Science is the result of systematic and intentional study of the natural world. And psychology is no different. In the movie Jerry Maguire, Cuba Gooding, Jr. became famous for using the phrase, “Show me the money!” In psychology, as in all sciences, we might say, “Show me the data!”
One of the important steps in scientific inquiry is to test our research questions, otherwise known as hypotheses. However, there are many ways to test hypotheses in psychological research. Which method you choose will depend on the type of questions you are asking, as well as what resources are available to you. All methods have limitations, which is why the best research uses a variety of methods.
Most psychological research can be divided into two types: experimental and correlational research.
Experimental Research
If somebody gave you $20 that absolutely had to be spent today, how would you choose to spend it? Would you spend it on an item you’ve been eyeing for weeks, or would you donate the money to charity? Which option do you think would bring you the most happiness? If you’re like most people, you’d choose to spend the money on yourself (duh, right?). Our intuition is that we’d be happier if we spent the money on ourselves.
Knowing that our intuition can sometimes be wrong, Professor Elizabeth Dunn (2008) at the University of British Columbia set out to conduct an experiment on spending and happiness. She gave each of the participants in her experiment $20 and then told them they had to spend the money by the end of the day. Some of the participants were told they must spend the money on themselves, and some were told they must spend the money on others (either charity or a gift for someone). At the end of the day she measured participants’ levels of happiness using a self-report questionnaire. (But wait, how do you measure something like happiness when you can’t really see it? Psychologists measure many abstract concepts, such as happiness and intelligence, by beginning with operational definitions of the concepts. See the Noba modules on Intelligence [http://noba.to/ncb2h79v] and Happiness [http://noba.to/qnw7g32t], respectively, for more information on specific measurement strategies.)
In an experiment, researchers manipulate, or cause changes in, the independent variable, and observe or measure any impact of those changes on the dependent variable. The independent variable is the one under the experimenter’s control, or the variable that is intentionally altered between groups. In the case of Dunn’s experiment, the independent variable was whether participants spent the money on themselves or on others. The dependent variable is the variable that is not manipulated at all, or the one where the effect happens. One way to help remember this is that the dependent variable “depends” on what happens to the independent variable. In our example, the participants’ happiness (the dependent variable in this experiment) depends on how the participants spend their money (the independent variable). Thus, any observed changes or group differences in happiness can be attributed to whom the money was spent on. What Dunn and her colleagues found was that, after all the spending had been done, the people who had spent the money on others were happier than those who had spent the money on themselves. In other words, spending on others causes us to be happier than spending on ourselves. Do you find this surprising?
But wait! Doesn’t happiness depend on a lot of different factors—for instance, a person’s upbringing or life circumstances? What if some people had happy childhoods and that’s why they’re happier? Or what if some people dropped their toast that morning and it fell jam-side down and ruined their whole day? It is correct to recognize that these factors and many more can easily affect a person’s level of happiness. So how can we accurately conclude that spending money on others causes happiness, as in the case of Dunn’s experiment?
The most important thing about experiments is random assignment. Participants don’t get to pick which condition they are in (e.g., participants didn’t choose whether they were supposed to spend the money on themselves versus others). The experimenter assigns them to a particular condition based on the flip of a coin or the roll of a die or any other random method. Why do researchers do this? With Dunn’s study, there is the obvious reason: you can imagine which condition most people would choose to be in, if given the choice. But another equally important reason is that random assignment makes it so the groups, on average, are similar on all characteristics except what the experimenter manipulates.
By randomly assigning people to conditions (self-spending versus other-spending), some people with happy childhoods should end up in each condition. Likewise, some people who had dropped their toast that morning (or experienced some other disappointment) should end up in each condition. As a result, the distribution of all these factors will generally be consistent across the two groups, and this means that on average the two groups will be relatively equivalent on all these factors. Random assignment is critical to experimentation because if the only difference between the two groups is the independent variable, we can infer that the independent variable is the cause of any observable difference (e.g., in the amount of happiness they feel at the end of the day).
Here’s another example of the importance of random assignment: Let’s say your class is going to form two basketball teams, and you get to be the captain of one team. The class is to be divided evenly between the two teams. If you get to pick the players for your team first, whom will you pick? You’ll probably pick the tallest members of the class or the most athletic. You probably won’t pick the short, uncoordinated people, unless there are no other options. As a result, your team will be taller and more athletic than the other team. But what if we want the teams to be fair? How can we do this when we have people of varying height and ability? All we have to do is randomly assign players to the two teams. Most likely, some tall and some short people will end up on your team, and some tall and some short people will end up on the other team. The average height of the teams will be approximately the same. That is the power of random assignment!
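To see that balancing effect in action, here is a tiny Python sketch of the basketball example. The heights are invented stand-ins; the point is simply that a random split tends to produce teams with similar averages.

```python
# Randomly split a class of 20 players into two teams and compare the
# teams' average heights. Run it a few times: the averages stay close.
import random

random.seed(42)  # fixed seed so the demo is reproducible
heights_cm = [round(random.gauss(175, 10)) for _ in range(20)]  # hypothetical class

random.shuffle(heights_cm)                      # random assignment
team_a, team_b = heights_cm[:10], heights_cm[10:]
print("Team A average:", sum(team_a) / 10)
print("Team B average:", sum(team_b) / 10)
```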
Other considerations
In addition to using random assignment, you should avoid introducing confounds into your experiments. Confounds are things that could undermine your ability to draw causal inferences. For example, if you wanted to test whether a new happy pill makes people happier, you could randomly assign participants to take the happy pill or not (the independent variable) and compare these two groups on their self-reported happiness (the dependent variable). However, if some participants know they are getting the happy pill, they might develop expectations that influence their self-reported happiness. This is sometimes known as a placebo effect: sometimes a person simply knowing that he or she is receiving special treatment or something new is enough to actually cause changes in behavior or perception. In other words, even if the participants in the happy pill condition were to report being happier, we wouldn’t know if the pill was actually making them happier or if it was the placebo effect—an example of a confound. A related idea is participant demand. This occurs when participants try to behave in a way they think the experimenter wants them to behave. Placebo effects and participant demand often occur unintentionally. Even experimenter expectations can influence the outcome of a study. For example, if the experimenter knows who took the happy pill and who did not, and the dependent variable is the experimenter’s observations of people’s happiness, then the experimenter might perceive improvements in the happy pill group that are not really there.
One way to prevent these confounds from affecting the results of a study is to use a double-blind procedure. In a double-blind procedure, neither the participant nor the experimenter knows which condition the participant is in. For example, when participants are given the happy pill or the fake pill, they don’t know which one they are receiving. This way the participants shouldn’t experience the placebo effect, and will be unable to behave as the researcher expects (participant demand). Likewise, the researcher doesn’t know which pill each participant is taking (at least in the beginning—later, the researcher will get the results for data-analysis purposes), which means the researcher’s expectations can’t influence his or her observations. Therefore, because both parties are “blind” to the condition, neither will be able to behave in a way that introduces a confound. At the end of the day, the only difference between groups will be which pills the participants received, allowing the researcher to determine if the happy pill actually caused people to be happier.
Correlational Designs
When scientists passively observe and measure phenomena it is called correlational research. Here, we do not intervene and change behavior, as we do in experiments. In correlational research, we identify patterns of relationships, but we usually cannot infer what causes what. Importantly, each correlation describes the relationship between exactly two variables at a time, no more and no less.
So, what if you wanted to test whether spending on others is related to happiness, but you don’t have $20 to give to each participant? You could use a correlational design—which is exactly what Professor Dunn did, too. She asked people how much of their income they spent on others or donated to charity, and later she asked them how happy they were. Do you think these two variables were related? Yes, they were! The more money people reported spending on others, the happier they were.
More details about the correlation
To find out how well two variables correspond, we can plot the relation between the two scores on what is known as a scatterplot (Figure 1). In the scatterplot, each dot represents a data point. (In this case it’s individuals, but it could be some other unit.) Importantly, each dot provides us with two pieces of information—in this case, how positively the person rated the past month (x-axis) and how happy the person felt in the past month (y-axis). Which variable is plotted on which axis does not matter.
The association between two variables can be summarized statistically using the correlation coefficient (abbreviated as r). A correlation coefficient provides information about the direction and strength of the association between two variables. For the example above, the direction of the association is positive. This means that people who perceived the past month as being good reported feeling happier, whereas people who perceived the month as being bad reported feeling less happy.
With a positive correlation, the two variables go up or down together. In a scatterplot, the dots form a pattern that extends from the bottom left to the upper right (just as they do in Figure 1). The r value for a positive correlation is indicated by a positive number (although, the positive sign is usually omitted). Here, the r value is .81.
A negative correlation is one in which the two variables move in opposite directions. That is, as one variable goes up, the other goes down. Figure 2 shows the association between the average height of males in a country (y-axis) and the pathogen prevalence (or commonness of disease; x-axis) of that country. In this scatterplot, each dot represents a country. Notice how the dots extend from the top left to the bottom right. What does this mean in real-world terms? It means that people are shorter in parts of the world where there is more disease. The r value for a negative correlation is indicated by a negative number—that is, it has a minus (–) sign in front of it. Here, it is –.83.
The strength of a correlation has to do with how well the two variables align. Recall that in Professor Dunn’s correlational study, spending on others positively correlated with happiness: the more money people reported spending on others, the happier they reported being. At this point you may be thinking to yourself, I know a very generous person who gave away lots of money to other people but is miserable! Or maybe you know of a very stingy person who is happy as can be. Yes, there might be exceptions. If an association has many exceptions, it is considered a weak correlation. If an association has few or no exceptions, it is considered a strong correlation. A strong correlation is one in which the two variables always, or almost always, go together. In the example of happiness and how good the month has been, the association is strong. The stronger a correlation is, the tighter the dots in the scatterplot will be arranged along a sloped line.
The r value of a strong correlation will have a high absolute value. In other words, you disregard whether there is a negative sign in front of the r value, and just consider the size of the numerical value itself. If the absolute value is large, it is a strong correlation. A weak correlation is one in which the two variables correspond some of the time, but not most of the time. Figure 3 shows the relation between valuing happiness and grade point average (GPA). People who valued happiness more tended to earn slightly lower grades, but there were lots of exceptions to this. The r value for a weak correlation will have a low absolute value. If two variables are so weakly related as to be unrelated, we say they are uncorrelated, and the r value will be zero or very close to zero. In the previous example, is the correlation between height and pathogen prevalence strong? Compared to Figure 3, the dots in Figure 2 are tighter and less dispersed. The absolute value of –.83 is large. Therefore, it is a strong negative correlation.
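To make the r value concrete, here is a small Python sketch that computes Pearson’s r from scratch for a handful of paired ratings. The numbers are invented, purely for illustration; the formula is the standard one.

```python
# Pearson's correlation coefficient r for paired data:
# covariance of the deviations divided by the product of the spreads.
from math import sqrt

x = [2, 4, 5, 7, 9]   # e.g., how good the month was (hypothetical ratings)
y = [1, 3, 4, 8, 9]   # e.g., happiness ratings (hypothetical)

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))

r = cov / (sd_x * sd_y)
print(round(r, 2))  # close to +1, i.e., a strong positive correlation
```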
Can you guess the strength and direction of the correlation between age and year of birth? If you said this is a strong negative correlation, you are correct! Older people always have earlier years of birth than younger people (e.g., 1950 vs. 1995), but at the same time, the older people will have a higher age (e.g., 65 vs. 20). In fact, this is a perfect correlation because there are no exceptions to this pattern. I challenge you to find a 10-year-old born before 2003! You can’t.
Problems with the correlation
If generosity and happiness are positively correlated, should we conclude that being generous causes happiness? Similarly, if height and pathogen prevalence are negatively correlated, should we conclude that disease causes shortness? From a correlation alone, we can’t be certain. For example, in the first case it may be that happiness causes generosity, or that generosity causes happiness. Or, a third variable might cause both happiness and generosity, creating the illusion of a direct link between the two. For example, wealth could be the third variable that causes both greater happiness and greater generosity. This is why correlation does not mean causation—an often repeated phrase among psychologists.
Qualitative Designs
Just as correlational research allows us to study topics we can’t experimentally manipulate (e.g., whether you have a large or small income), there are other types of research designs that allow us to investigate these harder-to-study topics. Qualitative designs, including participant observation, case studies, and narrative analysis, are examples of such methodologies. Although something as simple as “observation” may seem like it would be a part of all research methods, participant observation is a distinct methodology that involves the researcher embedding him- or herself into a group in order to study its dynamics. For example, Festinger, Riecken, and Schachter (1956) were very interested in the psychology of a particular cult. However, this cult was very secretive and wouldn’t grant interviews to outside members. So, in order to study these people, Festinger and his colleagues pretended to be cult members, allowing them access to the behavior and psychology of the cult. Despite this example, it should be noted that the people being observed in a participant observation study usually know that the researcher is there to study them.
Another qualitative method for research is the case study, which involves an intensive examination of specific individuals or specific contexts. Sigmund Freud, the father of psychoanalysis, was famous for using this type of methodology; however, more current examples of case studies usually involve brain injuries. For instance, imagine that researchers want to know how a very specific brain injury affects people’s experience of happiness. Obviously, the researchers can’t conduct experimental research that involves inflicting this type of injury on people. At the same time, there are too few people who have this type of injury to conduct correlational research. In such an instance, the researcher may examine only one person with this brain injury, but in doing so, the researcher will put the participant through a very extensive round of tests. Hopefully what is learned from this one person can be applied to others; however, even with thorough tests, there is the chance that something unique about this individual (other than the brain injury) will affect his or her happiness. But with such a limited number of possible participants, a case study is really the only type of methodology suitable for researching this brain injury.
The final qualitative method to be discussed in this section is narrative analysis. Narrative analysis centers around the study of stories and personal accounts of people, groups, or cultures. In this methodology, rather than engaging with participants directly, or quantifying their responses or behaviors, researchers will analyze the themes, structure, and dialogue of each person’s narrative. That is, a researcher will examine people’s personal testimonies in order to learn more about the psychology of those individuals or groups. These stories may be written, audio-recorded, or video-recorded, and allow the researcher not only to study what the participant says but how he or she says it. Every person has a unique perspective on the world, and studying the way he or she conveys a story can provide insight into that perspective.
Quasi-Experimental Designs
What if you want to study the effects of marriage on a variable? For example, does marriage make people happier? Can you randomly assign some people to get married and others to remain single? Of course not. So how can you study these important variables? You can use a quasi-experimental design.
A quasi-experimental design is similar to experimental research, except that random assignment to conditions is not used. Instead, we rely on existing group memberships (e.g., married vs. single). We treat these as the independent variables, even though we don’t assign people to the conditions and don’t manipulate the variables. As a result, with quasi-experimental designs causal inference is more difficult. For example, married people might differ on a variety of characteristics from unmarried people. If we find that married participants are happier than single participants, it will be hard to say that marriage causes happiness, because the people who got married might have already been happier than the people who have remained single.
Because experimental and quasi-experimental designs can seem pretty similar, let’s take another example to distinguish them. Imagine you want to know who is a better professor: Dr. Smith or Dr. Khan. To judge their ability, you’re going to look at their students’ final grades. Here, the independent variable is the professor (Dr. Smith vs. Dr. Khan) and the dependent variable is the students’ grades. In an experimental design, you would randomly assign students to one of the two professors and then compare the students’ final grades. However, in real life, researchers can’t randomly force students to take one professor over the other; instead, the researchers would just have to use the preexisting classes and study them as-is (a quasi-experimental design). Again, the key difference is random assignment to the conditions of the independent variable. Although the quasi-experimental design (where the students choose which professor they want) may seem random, it’s most likely not. For example, maybe students heard Dr. Smith sets low expectations, so slackers prefer this class, whereas Dr. Khan sets higher expectations, so smarter students prefer that one. This now introduces a confounding variable (student intelligence) that will almost certainly have an effect on students’ final grades, regardless of how skilled the professor is. So, even though a quasi-experimental design is similar to an experimental design (i.e., it compares groups that differ on an independent variable), because there’s no random assignment, you can’t reasonably draw the same conclusions that you would with an experimental design.
Longitudinal Studies
Another powerful research design is the longitudinal study. Longitudinal studies track the same people over time. Some longitudinal studies last a few weeks, some a few months, some a year or more. Some studies that have contributed a lot to psychology followed the same people over decades. For example, one study followed more than 20,000 Germans for two decades. From these longitudinal data, psychologist Rich Lucas (2003) was able to determine that people who end up getting married indeed start off a bit happier than their peers who never marry. Longitudinal studies like this provide valuable evidence for testing many theories in psychology, but they can be quite costly to conduct, especially if they follow many people for many years.
Surveys
A survey is a way of gathering information, using old-fashioned questionnaires or the Internet. Compared to a study conducted in a psychology laboratory, surveys can reach a larger number of participants at a much lower cost. Although surveys are typically used for correlational research, this is not always the case. An experiment can be carried out using surveys as well. For example, King and Napa (1998) presented participants with different types of stimuli on paper: either a survey completed by a happy person or a survey completed by an unhappy person. They wanted to see whether happy people were judged as more likely to get into heaven compared to unhappy people. Can you figure out the independent and dependent variables in this study? Can you guess what the results were? Happy people (vs. unhappy people; the independent variable) were indeed judged as more likely to go to heaven (the dependent variable)!
Likewise, correlational research can be conducted without the use of surveys. For instance, psychologists LeeAnn Harker and Dacher Keltner (2001) examined the smile intensity of women’s college yearbook photos. Smiling in the photos was correlated with being married 10 years later!
Tradeoffs in Research
Even though there are serious limitations to correlational and quasi-experimental research, they are not poor cousins to experiments and longitudinal designs. In addition to selecting a method that is appropriate to the question, many practical concerns may influence the decision to use one method over another. One of these factors is simply resource availability—how much time and money do you have to invest in the research? (Tip: If you’re doing a senior honors thesis, do not embark on a lengthy longitudinal study unless you are prepared to delay graduation!) Often, we survey people even though it would be more precise—but much more difficult—to track them longitudinally. Especially in the case of exploratory research, it may make sense to opt for a cheaper and faster method first. Then, if results from the initial study are promising, the researcher can follow up with a more intensive method.
Beyond these practical concerns, another consideration in selecting a research design is the ethics of the study. For example, in cases of brain injury or other neurological abnormalities, it would be unethical for researchers to inflict these impairments on healthy participants. Nonetheless, studying people with these injuries can provide great insight into human psychology (e.g., if we learn that damage to a particular region of the brain interferes with emotions, we may be able to develop treatments for emotional irregularities). In addition to brain injuries, there are numerous other areas of research that could be useful in understanding the human mind but which pose challenges to a true experimental design—such as the experiences of war, long-term isolation, abusive parenting, or prolonged drug use. However, none of these are conditions we could ethically experimentally manipulate and randomly assign people to. Therefore, ethical considerations are another crucial factor in determining an appropriate research design.
Research Methods: Why You Need Them
Research Matters
This video is an advertisement, so the scientific details are not at the threshold we would expect for this class. That said, this video shows the importance of curiosity and the drive to solve problems in research. If you are motivated to understand and solve problems, research is a tool you can use!
Check Your Knowledge
To help you with your studying, we’ve included some practice questions for this module. These questions do not necessarily address all content in this module. They are intended as practice, and you are responsible for all of the content in this module even if there is no associated practice question. To promote deeper engagement with the material, we encourage you to create some questions of your own for your practice. You can then also return to these self-generated questions later in the course to test yourself.
Vocabulary
- Confounds
- Factors that undermine the ability to draw causal inferences from an experiment.
- Correlation
- Measures the association between two variables, or how they go together.
- Dependent variable
- The variable the researcher measures but does not manipulate in an experiment.
- Experimenter expectations
- When the experimenter’s expectations influence the outcome of a study.
- Independent variable
- The variable the researcher manipulates and controls in an experiment.
- Longitudinal study
- A study that follows the same group of individuals over time.
- Operational definitions
- How researchers specifically measure a concept.
- Participant demand
- When participants behave in a way that they think the experimenter wants them to behave.
- Placebo effect
- When receiving special treatment or something new affects human behavior.
- Quasi-experimental design
- An experiment that does not require random assignment to conditions.
- Random assignment
- Assigning participants to receive different conditions of an experiment by chance.
References
- Chiao, J. (2009). Culture–gene coevolution of individualism–collectivism and the serotonin transporter gene. Proceedings of the Royal Society B, 277, 529–537. doi: 10.1098/rspb.2009.1650
- Dunn, E. W., Aknin, L. B., & Norton, M. I. (2008). Spending money on others promotes happiness. Science, 319(5870), 1687–1688. doi:10.1126/science.1150952
- Festinger, L., Riecken, H.W., & Schachter, S. (1956). When prophecy fails. Minneapolis, MN: University of Minnesota Press.
- Harker, L. A., & Keltner, D. (2001). Expressions of positive emotion in women\'s college yearbook pictures and their relationship to personality and life outcomes across adulthood. Journal of Personality and Social Psychology, 80, 112–124.
- King, L. A., & Napa, C. K. (1998). What makes a life good? Journal of Personality and Social Psychology, 75, 156–165.
- Lucas, R. E., Clark, A. E., Georgellis, Y., & Diener, E. (2003). Re-examining adaptation and the setpoint model of happiness: Reactions to changes in marital status. Journal of Personality and Social Psychology, 84, 527–539.
How to cite this Chapter using APA Style:
Scollon, C. N. (2019). Research designs. Adapted for use by Queen's University. Original chapter in R. Biswas-Diener & E. Diener (Eds), Noba textbook series: Psychology. Champaign, IL: DEF publishers. Retrieved from http://noba.to/acxb2thy
Copyright and Acknowledgment:
This material is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit: http://creativecommons.org/licenses/by-nc-sa/4.0/deed.en_US.
This material is attributed to the Diener Education Fund (copyright © 2018) and can be accessed via this link: http://noba.to/acxb2thy.
Additional information about the Diener Education Fund (DEF) can be accessed here.
Original chapter by Matthias R. Mehl adapted by the Queen's University Psychology Department
This Open Access chapter was originally written for the NOBA project. Information on the NOBA project can be found below.
Because of its ability to determine cause-and-effect relationships, the laboratory experiment is traditionally considered the method of choice for psychological science. One downside, however, is that as it carefully controls conditions and their effects, it can yield findings that are out of touch with reality and have limited use when trying to understand real-world behavior. This module highlights the importance of also conducting research outside the psychology laboratory, within participants’ natural, everyday environments, and reviews existing methodologies for studying daily life
Learning Objectives
- Identify limitations of the traditional laboratory experiment.
- Explain ways in which daily life research can further psychological science.
- Know what methods exist for conducting psychological research in the real world.
Introduction
The laboratory experiment is traditionally considered the “gold standard” in psychology research. This is because only laboratory experiments can clearly separate cause from effect and therefore establish causality. Despite this unique strength, it is also clear that a scientific field that is mainly based on controlled laboratory studies ends up lopsided. Specifically, it accumulates a lot of knowledge on what can happen—under carefully isolated and controlled circumstances—but it has little to say about what actually does happen under the circumstances that people actually encounter in their daily lives.
For example, imagine you are a participant in an experiment that looks at the effect of being in a good mood on generosity, a topic that may have a good deal of practical application. Researchers create an internally-valid, carefully-controlled experiment where they randomly assign you to watch either a happy movie or a neutral movie, and then you are given the opportunity to help the researcher out by staying longer and participating in another study. If people in a good mood are more willing to stay and help out, the researchers can feel confident that – since everything else was held constant – your positive mood led you to be more helpful. However, what does this tell us about helping behaviors in the real world? Does it generalize to other kinds of helping, such as donating money to a charitable cause? Would all kinds of happy movies produce this behavior, or only this one? What about other positive experiences that might boost mood, like receiving a compliment or a good grade? And what if you were watching the movie with friends, in a crowded theatre, rather than in a sterile research lab? Taking research out into the real world can help answer some of these sorts of important questions.
As one of the founding fathers of social psychology remarked, “Experimentation in the laboratory occurs, socially speaking, on an island quite isolated from the life of society” (Lewin, 1944, p. 286). This module highlights the importance of going beyond experimentation and also conducting research outside the laboratory (Reis & Gosling, 2010), directly within participants’ natural environments, and reviews existing methodologies for studying daily life.
Rationale for Conducting Psychology Research in the Real World
One important challenge researchers face when designing a study is to find the right balance between ensuring Internal Validity, or the degree to which a study allows unambiguous causal inferences, and External Validity, or the degree to which a study ensures that potential findings apply to settings and samples other than the ones being studied (Brewer, 2000). Unfortunately, these two kinds of validity tend to be difficult to achieve at the same time, in one study. This is because creating a controlled setting, in which all potentially influential factors (other than the experimentally-manipulated variable) are controlled, is bound to create an environment that is quite different from what people naturally encounter (e.g., using a happy movie clip to promote helpful behavior). However, it is the degree to which an experimental situation is comparable to the corresponding real-world situation of interest that determines how generalizable potential findings will be. In other words, if an experiment is very far-off from what a person might normally experience in everyday life, you might reasonably question just how useful its findings are.
Because of the incompatibility of the two types of validity, one is often—by design—prioritized over the other. Due to the importance of identifying true causal relationships, psychology has traditionally emphasized internal over external validity. However, in order to make claims about human behavior that apply across populations and environments, researchers complement traditional laboratory research, where participants are brought into the lab, with field research where, in essence, the psychological laboratory is brought to participants. Field studies allow for the important test of how psychological variables and processes of interest “behave” under real-world circumstances (i.e., what actually does happen rather than what can happen). They can also facilitate “downstream” operationalizations of constructs that measure life outcomes of interest directly rather than indirectly.
Take, for example, the fascinating field of psychoneuroimmunology, where the goal is to understand the interplay of psychological factors, such as personality traits or one’s stress level, and the immune system. Highly sophisticated and carefully controlled experiments offer ways to isolate the variety of neural, hormonal, and cellular mechanisms that link psychological variables such as chronic stress to biological outcomes such as immunosuppression (a state of impaired immune functioning; Sapolsky, 2004). Although these studies demonstrate impressively how psychological factors can affect health-relevant biological processes, they—because of their research design—remain mute about the degree to which these factors actually do undermine people’s everyday health in real life. It is certainly important to show that laboratory stress can alter the number of natural killer cells in the blood. But it is equally important to test to what extent the levels of stress that people experience on a day-to-day basis result in them catching a cold more often or taking longer to recover from one. The goal for researchers, therefore, must be to complement traditional laboratory experiments with less controlled studies under real-world circumstances. The term ecological validity is used to refer to the degree to which an effect has been obtained under conditions that are typical for what happens in everyday life (Brewer, 2000). In this example, then, people might keep a careful daily log of how much stress they are under as well as noting physical symptoms such as headaches or nausea. Although many factors beyond stress level may be responsible for these symptoms, this more correlational approach can shed light on how the relationship between stress and health plays out outside of the laboratory.
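To make that correlational logic concrete, here is a minimal sketch with entirely invented daily-log numbers and made-up variable names, correlating a participant’s self-rated stress with a count of reported physical symptoms across ten days:

```python
from statistics import correlation  # available in Python 3.10+

# Invented daily-log data for one participant over ten days:
# self-rated stress (1-10) and number of physical symptoms reported.
stress = [3, 5, 2, 7, 6, 4, 8, 5, 6, 3]
symptoms = [0, 1, 0, 2, 2, 1, 3, 1, 2, 0]

# A positive correlation is consistent with, but does not by itself prove,
# a link between everyday stress and health complaints.
r = correlation(stress, symptoms)
print(f"stress-symptom correlation: r = {r:.2f}")
```

Because nothing here was manipulated, such an analysis can describe how stress and symptoms co-occur in daily life, but it cannot on its own establish that stress causes the symptoms.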
An Overview of Research Methods for Studying Daily Life
Capturing “life as it is lived” has long been a goal for some researchers. Wilhelm and his colleagues recently published a comprehensive review of early attempts to systematically document daily life (Wilhelm, Perrez, & Pawlik, 2012). Building on these original methods, researchers have, over the past decades, developed a broad toolbox for measuring experiences, behavior, and physiology directly in participants’ daily lives (Mehl & Conner, 2012). Figure 1 provides a schematic overview of the methodologies described below.
Studying Daily Experiences
Starting in the mid-1970s, motivated by a growing skepticism toward highly-controlled laboratory studies, a few groups of researchers developed a set of new methods that are now commonly known as the experience-sampling method (Hektner, Schmidt, & Csikszentmihalyi, 2007), ecological momentary assessment (Stone & Shiffman, 1994), or the diary method (Bolger, Davis, & Rafaeli, 2003). Although variations within this set of methods exist, the basic idea behind all of them is to collect in-the-moment (or, close-to-the-moment) self-report data directly from people as they go about their daily lives. This is typically accomplished by asking participants repeatedly (e.g., five times per day) over a period of time (e.g., a week) to report on their current thoughts and feelings. The momentary questionnaires often ask about their location (e.g., “Where are you now?”), social environment (e.g., “With whom are you now?”), activity (e.g., “What are you currently doing?”), and experiences (e.g., “How are you feeling?”). That way, researchers get a snapshot of what was going on in participants’ lives at the time at which they were asked to report.
Technology has made this sort of research possible, and recent technological advances have greatly expanded the tools researchers can easily use. Initially, participants wore electronic wristwatches that beeped at preprogrammed but seemingly random times, at which they completed one of a stack of provided paper questionnaires. With the mobile computing revolution, both the prompting and the questionnaire completion were gradually replaced by handheld devices such as smartphones. Being able to collect the momentary questionnaires digitally and time-stamped (i.e., having a record of exactly when participants responded) had major methodological and practical advantages and contributed to experience sampling going mainstream (Conner, Tennen, Fleeson, & Barrett, 2009).
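As an illustration of this kind of signal-contingent sampling, here is a minimal sketch, with all parameters hypothetical, of how a study app might draw a day’s worth of quasi-random prompt times within waking hours, much as the early beeper watches and today’s smartphone apps do:

```python
import random
from datetime import datetime, timedelta

def prompt_schedule(n_prompts=5, start_hour=9, end_hour=21, min_gap_min=45):
    """Draw quasi-random prompt times in a waking-hours window, enforcing
    a minimum gap so beeps do not cluster too closely together."""
    midnight = datetime.now().replace(hour=0, minute=0, second=0, microsecond=0)
    window = (end_hour - start_hour) * 60  # window length in minutes
    while True:
        minutes = sorted(random.sample(range(window), n_prompts))
        if all(b - a >= min_gap_min for a, b in zip(minutes, minutes[1:])):
            return [midnight + timedelta(hours=start_hour, minutes=m) for m in minutes]

for beep in prompt_schedule():
    print(beep.strftime("%H:%M"))  # e.g., five prompts between 09:00 and 21:00
```

Enforcing a minimum gap keeps prompts from clustering, while the random draw keeps participants from anticipating exactly when the next beep will come.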
Over time, experience sampling and related momentary self-report methods have become very popular, and, by now, they are effectively the gold standard for studying daily life. They have helped make progress in almost all areas of psychology (Mehl & Conner, 2012). These methods yield vast numbers of measurements from many participants, which has further inspired the development of novel statistical methods (Bolger & Laurenceau, 2013). Finally, and maybe most importantly, they accomplished what they set out to accomplish: to bring attention to what psychology ultimately wants and needs to know about, namely “what people actually do, think, and feel in the various contexts of their lives” (Funder, 2001, p. 213). In short, these approaches have allowed researchers to do research that is more externally valid, or more generalizable to real life, than the traditional laboratory experiment.
To illustrate these techniques, consider a classic study by Stone, Reed, and Neale (1987), who tracked positive and negative experiences surrounding a respiratory infection using daily experience sampling. They found that undesirable experiences peaked and desirable ones dipped about four to five days prior to participants coming down with a cold. More recently, Killingsworth and Gilbert (2010) collected momentary self-reports from more than 2,000 participants via a smartphone app. They found that participants were less happy when their mind was in an idling, mind-wandering state, such as surfing the Internet or multitasking at work, than when it was in an engaged, task-focused one, such as working diligently on a paper. These are just two examples that illustrate how experience-sampling studies have yielded findings that could not be obtained with traditional laboratory methods.
Recently, the day reconstruction method (DRM) (Kahneman, Krueger, Schkade, Schwarz, & Stone, 2004) has been developed to obtain information about a person’s daily experiences without going through the burden of collecting momentary experience-sampling data. In the DRM, participants report their experiences of a given day retrospectively after engaging in a systematic, experiential reconstruction of the day on the following day. As a participant in this type of study, you might look back on yesterday, divide it up into a series of episodes such as “made breakfast,” “drove to work,” “had a meeting,” etc. You might then report who you were with in each episode and how you felt in each. This approach has shed light on what situations lead to moments of positive and negative mood throughout the course of a normal day.
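A day’s worth of DRM data is naturally represented as a list of episodes. The sketch below, in which the episode labels and ratings are invented, shows one simple way such data could be structured and summarized:

```python
from dataclasses import dataclass

@dataclass
class Episode:
    activity: str    # e.g., "made breakfast"
    companions: str  # e.g., "alone", "coworkers", "friends"
    mood: int        # hypothetical 1-7 rating of how good the episode felt

yesterday = [
    Episode("made breakfast", "alone", 5),
    Episode("drove to work", "alone", 3),
    Episode("had a meeting", "coworkers", 4),
    Episode("dinner with friends", "friends", 7),
]

# Compare average mood for episodes spent with others versus alone.
social = [e.mood for e in yesterday if e.companions != "alone"]
solo = [e.mood for e in yesterday if e.companions == "alone"]
print(f"with others: {sum(social) / len(social):.1f}, alone: {sum(solo) / len(solo):.1f}")
```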
Studying Daily Behavior
Experience sampling is often used to study everyday behavior (i.e., daily social interactions and activities). In the laboratory, behavior is best studied using direct behavioral observation (e.g., video recordings). In the real world, this is, of course, much more difficult. As Funder put it, it seems it would require a “detective’s report [that] would specify in exact detail everything the participant said and did, and with whom, in all of the contexts of the participant’s life” (Funder, 2007, p. 41).
As difficult as this may seem, Mehl and colleagues have developed a naturalistic observation methodology that is similar in spirit. Rather than following participants—like a detective—with a video camera (see Craik, 2000), they equip participants with a portable audio recorder that is programmed to periodically record brief snippets of ambient sounds (e.g., 30 seconds every 12 minutes). Participants carry the recorder (originally a microcassette recorder, now a smartphone app) on them as they go about their days and return it at the end of the study. The recorder provides researchers with a series of sound bites that, together, amount to an acoustic diary of participants’ days as they naturally unfold—and that constitute a representative sample of their daily activities and social encounters. Because it is somewhat similar to having the researcher’s ear at the participant’s lapel, they called their method the electronically activated recorder, or EAR (Mehl, Pennebaker, Crow, Dabbs, & Price, 2001). The ambient sound recordings can be coded for many things, including participants’ locations (e.g., at school, in a coffee shop), activities (e.g., watching TV, eating), interactions (e.g., in a group, on the phone), and emotional expressions (e.g., laughing, sighing). As unnatural or intrusive as it might seem, participants report that they quickly grow accustomed to the EAR and say they soon find themselves behaving as they normally would.
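The EAR’s intermittent recording is, in effect, a fixed duty cycle. Using the example parameters mentioned above (30 seconds every 12 minutes) and an assumed 16-hour waking day, a quick calculation shows how much audio such sampling actually yields:

```python
snippet_s = 30         # seconds recorded per cycle (example from the text)
cycle_s = 12 * 60      # one recording cycle every 12 minutes
waking_hours = 16      # assumed length of the waking day

duty_cycle = snippet_s / cycle_s                # fraction of time sampled
audio_min = duty_cycle * waking_hours * 60      # minutes of audio per day
print(f"{duty_cycle:.1%} of the day sampled, about {audio_min:.0f} min of audio")
# -> 4.2% of the day sampled, about 40 min of audio
```

Sampling only a few percent of the day is part of what makes the method tolerable for participants while still producing a representative acoustic diary.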
In a cross-cultural study, Ramírez-Esparza and her colleagues used the EAR method to study sociability in the United States and Mexico. Interestingly, they found that although American participants rated themselves significantly higher than Mexicans on the question, “I see myself as a person who is talkative,” they actually spent almost 10 percent less time talking than Mexicans did (Ramírez-Esparza, Mehl, Álvarez Bermúdez, & Pennebaker, 2009). In a similar way, Mehl and his colleagues used the EAR method to debunk the long-standing myth that women are considerably more talkative than men. Using data from six different studies, they showed that both sexes use on average about 16,000 words per day. The estimated sex difference of 546 words was trivial compared to the immense range of more than 46,000 words between the least and most talkative individual (695 versus 47,016 words; Mehl, Vazire, Ramírez-Esparza, Slatcher, & Pennebaker, 2007). Together, these studies demonstrate how naturalistic observation can be used to study objective aspects of daily behavior and how it can yield findings quite different from what other methods yield (Mehl, Robbins, & Deters, 2012).
A series of other creative methods for assessing behavior directly and unobtrusively in the real world is described in a seminal book on real-world, subtle measures (Webb, Campbell, Schwartz, Sechrest, & Grove, 1981). For example, researchers have used time-lapse photography to study the flow of people and the use of space in urban public places (Whyte, 1980). More recently, they have observed people’s personal (e.g., dorm rooms) and professional (e.g., offices) spaces to understand how personality is expressed and detected in everyday environments (Gosling, Ko, Mannarelli, & Morris, 2002). They have even systematically collected and analyzed people’s garbage to measure what people actually consume (e.g., empty alcohol bottles or cigarette boxes) rather than what they say they consume (Rathje & Murphy, 2001). Because people often cannot, and sometimes do not want to, accurately report what they do, the direct—and ideally nonreactive—assessment of real-world behavior is of high importance for psychological research (Baumeister, Vohs, & Funder, 2007).
Studying Daily Physiology
In addition to studying how people think, feel, and behave in the real world, researchers are also interested in how our bodies respond to the fluctuating demands of our lives. What are the daily experiences that make our “blood boil”? How do our neurotransmitters and hormones respond to the stressors we encounter in our lives? What physiological reactions do we show to being loved—or getting ostracized? You can see how studying these powerful experiences in real life, as they actually happen, may provide richer and more informative data than one might obtain in an artificial laboratory setting that merely mimics these experiences.
Also, in pursuing these questions, it is important to keep in mind that what is stressful, engaging, or boring for one person might not be so for another. It is, in part, for this reason that researchers have found only limited correspondence between how people respond physiologically to a standardized laboratory stressor (e.g., giving a speech) and how they respond to stressful experiences in their lives. To give an example, Wilhelm and Grossman (2010) describe a participant who showed rather minimal heart rate increases in response to a laboratory stressor (about five to 10 beats per minute) but quite dramatic increases (almost 50 beats per minute) later in the afternoon while watching a soccer game. Of course, the reverse pattern can happen as well, such as when patients have high blood pressure in the doctor’s office but not in their home environment—the so-called white coat hypertension (White, Schulman, McCabe, & Dey, 1989).
Ambulatory physiological monitoring, that is, monitoring physiological reactions as people go about their daily lives, has a long history in biomedical research, and an array of monitoring devices exists (Fahrenberg & Myrtek, 1996). Among the biological signals that can now be measured in daily life with portable signal recording devices are the electrocardiogram (ECG), blood pressure, electrodermal activity (or “sweat response”), body temperature, and even the electroencephalogram (EEG) (Wilhelm & Grossman, 2010). Most recently, researchers have added ambulatory assessment of hormones (e.g., cortisol) and other biomarkers (e.g., immune markers) to the list (Schlotz, 2012). The development of ever more sophisticated ways to track what goes on underneath our skins as we go about our lives is a fascinating and rapidly advancing field.
In a recent study, Lane, Zareba, Reis, Peterson, and Moss (2011) used experience sampling combined with ambulatory electrocardiography (a so-called Holter monitor) to study how emotional experiences can alter cardiac function in patients with a congenital heart abnormality (e.g., long QT syndrome). Consistent with the idea that emotions may, in some cases, be able to trigger a cardiac event, they found that typical, in most cases even relatively low-intensity, daily emotions had a measurable effect on ventricular repolarization, an important cardiac indicator that, in these patients, is linked to risk of a cardiac event. In another study, Smyth and colleagues (1998) combined experience sampling with momentary assessment of cortisol, a stress hormone. They found that momentary reports of current or even anticipated stress predicted increased cortisol secretion 20 minutes later. Further, and independent of that, the experience of other kinds of negative affect (e.g., anger, frustration) also predicted higher levels of cortisol and the experience of positive affect (e.g., happy, joyful) predicted lower levels of this important stress hormone. Taken together, these studies illustrate how researchers can use ambulatory physiological monitoring to study how the little—and seemingly trivial or inconsequential—experiences in our lives leave objective, measurable traces in our bodily systems.
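The key analytic move in the Smyth et al. design is pairing each momentary report with the physiological sample taken a fixed interval later. A minimal sketch of that lagged pairing, with all numbers invented, looks like this:

```python
# Invented within-day series, sampled every 20 minutes:
stress = [2, 6, 3, 7, 4, 8, 5, 2]                     # momentary stress reports
cortisol = [3.9, 4.2, 7.6, 5.0, 8.8, 5.5, 10.1, 6.4]  # salivary cortisol levels

# Pair each stress report with the cortisol sample one step (20 min) later;
# these lagged pairs are what would then be correlated or modeled.
lagged_pairs = list(zip(stress[:-1], cortisol[1:]))
print(lagged_pairs[:3])  # [(2, 4.2), (6, 7.6), (3, 5.0)]
```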
Studying Online Behavior
Another domain of daily life that has only recently emerged is virtual daily behavior, or how people act and interact with others on the Internet. Irrespective of whether social media will turn out to be humanity’s blessing or curse (both scientists and laypeople are currently divided over this question), the fact is that people are spending an ever-increasing amount of time online. In light of that, researchers are beginning to take virtual behavior as seriously as “actual” behavior and to make it a legitimate target of their investigations (Gosling & Johnson, 2010).
One way to study virtual behavior is to make use of the fact that most of what people do on the Web—emailing, chatting, tweeting, blogging, posting— leaves direct (and permanent) verbal traces. For example, differences in the ways in which people use words (e.g., subtle preferences in word choice) have been found to carry a lot of psychological information (Pennebaker, Mehl, & Niederhoffer, 2003). Therefore, a good way to study virtual social behavior is to study virtual language behavior. Researchers can download people’s—often public—verbal expressions and communications and analyze them using modern text analysis programs (e.g., Pennebaker, Booth, & Francis, 2007).
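Programs of this kind work by counting how often words from predefined psychological categories appear in a text. The toy sketch below uses tiny, made-up category lists; real tools such as LIWC rely on far larger, carefully validated dictionaries:

```python
import re

# Toy category dictionaries; real programs such as LIWC use far larger,
# carefully validated word lists.
categories = {
    "positive_emotion": {"happy", "good", "love", "hope"},
    "negative_emotion": {"sad", "angry", "afraid", "hate"},
    "cognitive": {"think", "question", "because", "know"},
}

def category_rates(text):
    """Return each category's share of the total word count."""
    words = re.findall(r"[a-z']+", text.lower())
    return {name: sum(word in wordlist for word in words) / len(words)
            for name, wordlist in categories.items()}

print(category_rates("I think I know why we hope things will be good again."))
```

Dividing each count by the total number of words turns raw frequencies into rates, so that long and short texts can be compared.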
For example, Cohn, Mehl, and Pennebaker (2004) downloaded blogs of more than a thousand users of livejournal.com, one of the first Internet blogging sites, to study how people responded socially and emotionally to the attacks of September 11, 2001. In going “the online route,” they could bypass a critical limitation of coping research, the inability to obtain baseline information; that is, how people were doing before the traumatic event occurred. Through access to the database of public blogs, they downloaded entries from two months prior to two months after the attacks. Their linguistic analyses revealed that in the first days after the attacks, participants, as expected, expressed more negative emotions and were more cognitively and socially engaged, asking questions and sending messages of support. After only two weeks, though, their moods and social engagement returned to baseline, and, interestingly, their use of cognitive-analytic words (e.g., “think,” “question”) even dropped below their normal level. Over the next six weeks, their mood hovered around their pre-9/11 baseline, but both their social engagement and cognitive-analytic processing stayed remarkably low. This suggests a social and cognitive weariness in the aftermath of the attacks. In using virtual verbal behavior as a marker of psychological functioning, this study was able to draw a fine timeline of how humans cope with disasters.
Reflecting the rapidly growing real-world importance of social networking sites, researchers are now beginning to investigate behavior on sites such as Facebook (Wilson, Gosling, & Graham, 2012). Most research looks at psychological correlates of online behavior such as personality traits and the quality of one’s social life but, importantly, there are also first attempts to export traditional experimental research designs into an online setting. In a pioneering study of online social influence, Bond and colleagues (2012) experimentally tested the effects that peer feedback has on voting behavior. Remarkably, their sample consisted of 61 million (!) Facebook users. They found that online political-mobilization messages (e.g., “I voted” accompanied by selected pictures of their Facebook friends) influenced real-world voting behavior. This was true not just for users who saw the messages but also for their friends and friends of their friends. Although the intervention effect on a single user was very small, through the enormous number of users and indirect social contagion effects, it resulted cumulatively in an estimated 340,000 additional votes—enough to tilt a close election. In short, although still in its infancy, research on virtual daily behavior is bound to change social science, and it has already helped us better understand both virtual and “actual” behavior.
“Smartphone Psychology”?
A review of research methods for studying daily life would not be complete without a vision of “what’s next.” Given how common they have become, it is safe to predict that smartphones will not just remain devices for everyday online communication but will also become devices for scientific data collection and intervention (Kaplan & Stone, 2013; Yarkoni, 2012). These devices automatically store vast amounts of real-world user interaction data, and, in addition, they are equipped with sensors to track the physical (e.g., location, position) and social (e.g., wireless connections around the phone) context of these interactions. Miller (2012, p. 234) states, “The question is not whether smartphones will revolutionize psychology but how, when, and where the revolution will happen.” Obviously, their immense potential for data collection also brings with it big new challenges for researchers (e.g., privacy protection, data analysis, and synthesis). Yet it is clear that many of the methods described in this module—and many yet-to-be-developed ways of collecting real-world data—will, in the future, become integrated into the devices that people naturally and happily carry with them from the moment they get up in the morning to the moment they go to bed.
Conclusion
This module sought to make a case for psychology research conducted outside the lab. If the ultimate goal of the social and behavioral sciences is to explain human behavior, then researchers must also—in addition to conducting carefully controlled lab studies—deal with the “messy” real world and find ways to capture life as it naturally happens.
Mortensen and Cialdini (2010) refer to the dynamic give-and-take between laboratory and field research as “full-cycle psychology”. Going full cycle, they suggest, means that “researchers use naturalistic observation to determine an effect’s presence in the real world, theory to determine what processes underlie the effect, experimentation to verify the effect and its underlying processes, and a return to the natural environment to corroborate the experimental findings” (Mortensen & Cialdini, 2010, p. 53). To accomplish this, researchers have access to a toolbox of research methods for studying daily life that is now more diverse and more versatile than it has ever been before. So, all it takes is to go ahead and—literally—bring science to life.
Check Your Knowledge
To help you with your studying, we’ve included some practice questions for this module. These questions do not necessarily address all content in this module. They are intended as practice, and you are responsible for all of the content in this module even if there is no associated practice question. To promote deeper engagement with the material, we encourage you to create some questions of your own for your practice. You can then also return to these self-generated questions later in the course to test yourself.
Vocabulary
Ambulatory assessment
An overarching term to describe methodologies that assess the behavior, physiology, experience, and environments of humans in naturalistic settings.
Daily Diary method
A methodology where participants complete a questionnaire about their thoughts, feelings, and behavior of the day at the end of the day.
Day reconstruction method (DRM)
A methodology where participants describe their experiences and behavior of a given day retrospectively upon a systematic reconstruction on the following day.
Ecological momentary assessment
An overarching term to describe methodologies that repeatedly sample participants’ real-world experiences, behavior, and physiology in real time.
Ecological validity
The degree to which a study finding has been obtained under conditions that are typical for what happens in everyday life.
Electronically activated recorder, or EAR
A methodology where participants wear a small, portable audio recorder that intermittently records snippets of ambient sounds around them.
Experience-sampling method
A methodology where participants report on their momentary thoughts, feelings, and behaviors at different points in time over the course of a day.
External validity
The degree to which a finding generalizes from the specific sample and context of a study to some larger population and broader settings.
Full-cycle psychology
A scientific approach whereby researchers start with an observational field study to identify an effect in the real world, follow up with laboratory experimentation to verify the effect and isolate the causal mechanisms, and return to field research to corroborate their experimental findings.
Generalize
Generalizing, in science, refers to the ability to arrive at broad conclusions based on a smaller sample of observations. For these conclusions to be true the sample should accurately represent the larger population from which it is drawn.
Internal validity
The degree to which a cause-effect relationship between two variables has been unambiguously established.
Linguistic inquiry and word count
A quantitative text analysis methodology that automatically extracts grammatical and psychological information from a text by counting word frequencies.
Lived day analysis
A methodology where a research team follows an individual around with a video camera to objectively document a person’s daily life as it is lived.
White coat hypertension
A phenomenon in which patients exhibit elevated blood pressure in the hospital or doctor’s office but not in their everyday lives.
References
- Baumeister, R. F., Vohs, K. D., & Funder, D. C. (2007). Psychology as the science of self-reports and finger movements: Whatever happened to actual behavior? Perspectives on Psychological Science, 2, 396–403.
- Bolger, N., & Laurenceau, J-P. (2013). Intensive longitudinal methods: An introduction to diary and experience sampling research. New York, NY: Guilford Press.
- Bolger, N., Davis, A., & Rafaeli, E. (2003). Diary methods: Capturing life as it is lived. Annual Review of Psychology, 54, 579–616.
- Bond, R. M., Jones, J. J., Kramer, A. D., Marlow, C., Settle, J. E., & Fowler, J. H. (2012). A 61 million-person experiment in social influence and political mobilization. Nature, 489, 295–298.
- Brewer, M. B. (2000). Research design and issues of validity. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social psychology (pp. 3–16). New York, NY: Cambridge University Press.
- Cohn, M. A., Mehl, M. R., & Pennebaker, J. W. (2004). Linguistic indicators of psychological change after September 11, 2001. Psychological Science, 15, 687–693.
- Conner, T. S., Tennen, H., Fleeson, W., & Barrett, L. F. (2009). Experience sampling methods: A modern idiographic approach to personality research. Social and Personality Psychology Compass, 3, 292–313.
- Craik, K. H. (2000). The lived day of an individual: A person-environment perspective. In W. B. Walsh, K. H. Craik, & R. H. Price (Eds.), Person-environment psychology: New directions and perspectives (pp. 233–266). Mahwah, NJ: Lawrence Erlbaum Associates.
- Fahrenberg, J., &. Myrtek, M. (Eds.) (1996). Ambulatory assessment: Computer-assisted psychological and psychophysiological methods in monitoring and field studies. Seattle, WA: Hogrefe & Huber.
- Funder, D. C. (2007). The personality puzzle. New York, NY: W. W. Norton & Co.
- Funder, D. C. (2001). Personality. Annual Review of Psychology, 52, 197–221.
- Gosling, S. D., & Johnson, J. A. (2010). Advanced methods for conducting online behavioral research. Washington, DC: American Psychological Association.
- Gosling, S. D., Ko, S. J., Mannarelli, T., & Morris, M. E. (2002). A room with a cue: Personality judgments based on offices and bedrooms. Journal of Personality and Social Psychology, 82, 379–398.
- Hektner, J. M., Schmidt, J. A., & Csikszentmihalyi, M. (2007). Experience sampling method: Measuring the quality of everyday life. Thousand Oaks, CA: Sage.
- Kahneman, D., Krueger, A., Schkade, D., Schwarz, N., & Stone, A. (2004). A survey method for characterizing daily life experience: The Day Reconstruction Method. Science, 306, 1776–1780.
- Kaplan, R. M., & Stone A. A. (2013). Bringing the laboratory and clinic to the community: Mobile technologies for health promotion and disease prevention. Annual Review of Psychology, 64, 471-498.
- Killingsworth, M. A., & Gilbert, D. T. (2010). A wandering mind is an unhappy mind. Science, 330, 932.
- Lane, R. D., Zareba, W., Reis, H., Peterson, D., & Moss, A. (2011). Changes in ventricular repolarization duration during typical daily emotion in patients with Long QT Syndrome. Psychosomatic Medicine, 73, 98–105.
- Lewin, K. (1944) Constructs in psychology and psychological ecology. University of Iowa Studies in Child Welfare, 20, 23–27.
- Mehl, M. R., & Conner, T. S. (Eds.) (2012). Handbook of research methods for studying daily life. New York, NY: Guilford Press.
- Mehl, M. R., Pennebaker, J. W., Crow, M., Dabbs, J., & Price, J. (2001). The electronically activated recorder (EAR): A device for sampling naturalistic daily activities and conversations. Behavior Research Methods, Instruments, and Computers, 33, 517–523.
- Mehl, M. R., Robbins, M. L., & Deters, G. F. (2012). Naturalistic observation of health-relevant social processes: The electronically activated recorder (EAR) methodology in psychosomatics. Psychosomatic Medicine, 74, 410–417.
- Mehl, M. R., Vazire, S., Ramírez-Esparza, N., Slatcher, R. B., & Pennebaker, J. W. (2007). Are women really more talkative than men? Science, 317, 82.
- Miller, G. (2012). The smartphone psychology manifesto. Perspectives on Psychological Science, 7, 221–237.
- Mortensen, C. R., & Cialdini, R. B. (2010). Full-cycle social psychology for theory and application. Social and Personality Psychology Compass, 4, 53–63.
- Pennebaker, J. W., Mehl, M. R., & Niederhoffer, K. (2003). Psychological aspects of natural language use: Our words, our selves. Annual Review of Psychology, 54, 547–577.
- Ramírez-Esparza, N., Mehl, M. R., Álvarez Bermúdez, J., & Pennebaker, J. W. (2009). Are Mexicans more or less sociable than Americans? Insights from a naturalistic observation study. Journal of Research in Personality, 43, 1–7.
- Rathje, W., & Murphy, C. (2001). Rubbish! The archaeology of garbage. New York, NY: Harper Collins.
- Reis, H. T., & Gosling, S. D. (2010). Social psychological methods outside the laboratory. In S. T. Fiske, D. T. Gilbert, & G. Lindzey, (Eds.), Handbook of social psychology (5th ed., Vol. 1, pp. 82–114). New York, NY: Wiley.
- Sapolsky, R. (2004). Why zebras don’t get ulcers: A guide to stress, stress-related diseases and coping. New York, NY: Henry Holt and Co.
- Schlotz, W. (2012). Ambulatory psychoneuroendocrinology: Assessing salivary cortisol and other hormones in daily life. In M.R. Mehl & T.S. Conner (Eds.), Handbook of research methods for studying daily life (pp. 193–209). New York, NY: Guilford Press.
- Smyth, J., Ockenfels, M. C., Porter, L., Kirschbaum, C., Hellhammer, D. H., & Stone, A. A. (1998). Stressors and mood measured on a momentary basis are associated with salivary cortisol secretion. Psychoneuroendocrinology, 23, 353–370.
- Stone, A. A., & Shiffman, S. (1994). Ecological momentary assessment (EMA) in behavioral medicine. Annals of Behavioral Medicine, 16, 199–202.
- Stone, A. A., Reed, B. R., & Neale, J. M. (1987). Changes in daily event frequency precede episodes of physical symptoms. Journal of Human Stress, 13, 70–74.
- Webb, E. J., Campbell, D. T., Schwartz, R. D., Sechrest, L., & Grove, J. B. (1981). Nonreactive measures in the social sciences. Boston, MA: Houghton Mifflin Co.
- White, W. B., Schulman, P., McCabe, E. J., & Dey, H. M. (1989). Average daily blood pressure, not office blood pressure, determines cardiac function in patients with hypertension. Journal of the American Medical Association, 261, 873–877.
- Whyte, W. H. (1980). The social life of small urban spaces. Washington, DC: The Conservation Foundation.
- Wilhelm, F.H., & Grossman, P. (2010). Emotions beyond the laboratory: Theoretical fundaments, study design, and analytic strategies for advanced ambulatory assessment. Biological Psychology, 84, 552–569.
- Wilhelm, P., Perrez, M., & Pawlik, K. (2012). Conducting research in daily life: A historical review. In M. R. Mehl & T. S. Conner (Eds.), Handbook of research methods for studying daily life. New York, NY: Guilford Press.
- Wilson, R., Gosling, S. D., & Graham, L. (2012). A review of Facebook research in the social sciences. Perspectives on Psychological Science, 7, 203–220.
- Yarkoni, T. (2012). Psychoinformatics: New horizons at the interface of the psychological and computing sciences. Current Directions in Psychological Science, 21, 391–397.
How to cite this Chapter using APA Style:
Mehl, M. R. (2019). Conducting psychology research in the real world. Adapted for use by Queen's University. Original chapter in R. Biswas-Diener & E. Diener (Eds), Noba textbook series: Psychology. Champaign, IL: DEF publishers. Retrieved from http://noba.to/hsfe5k3d
Copyright and Acknowledgment:
This material is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit: http://creativecommons.org/licenses/by-nc-sa/4.0/deed.en_US.
This material is attributed to the Diener Education Fund (copyright © 2018) and can be accessed via this link: http://noba.to/hsfe5k3d.
Additional information about the Diener Education Fund (DEF) can be accessed here.
Original chapter by Edward Diener adapted by the Queen's University Psychology Department
This Open Access chapter was originally written for the NOBA project. Information on the NOBA project can be found below.
Scientific research has been one of the great drivers of progress in human history, and the dramatic changes we have seen during the past century are due primarily to scientific findings—modern medicine, electronics, automobiles and jets, birth control, and a host of other helpful inventions. Psychologists believe that scientific methods can be used in the behavioral domain to understand and improve the world. Although psychology trails the biological and physical sciences in terms of progress, we are optimistic based on discoveries to date that scientific psychology will make many important discoveries that can benefit humanity. This module outlines the characteristics of the science, and the promises it holds for understanding behavior. The ethics that guide psychological research are briefly described. It concludes with the reasons you should learn about scientific psychology.
Learning Objectives
- Describe how scientific research has changed the world.
- Describe the key characteristics of the scientific approach.
- Discuss a few of the benefits, as well as problems that have been created by science.
- Describe several ways that psychological science has improved the world.
- Describe a number of the ethical guidelines that psychologists follow.
Scientific Advances and World Progress
There are many people who have made positive contributions to humanity in modern times. Take a careful look at the names on the following list. Which of these individuals do you think has helped humanity the most?
- Mother Teresa
- Albert Schweitzer
- Edward Jenner
- Norman Borlaug
- Fritz Haber
The usual response to this question is “Who on earth are Jenner, Borlaug, and Haber?” Many people know that Mother Teresa helped thousands of people living in the slums of Kolkata (Calcutta). Others recall that Albert Schweitzer opened his famous hospital in Africa and went on to earn the Nobel Peace Prize. The other three historical figures, on the other hand, are far less well known. Jenner, Borlaug, and Haber were scientists whose research discoveries saved millions, and even billions, of lives. Dr. Edward Jenner is often considered the “father of immunology” because he was among the first to conceive of and test vaccinations. His pioneering work led directly to the eradication of smallpox. Many other diseases have been greatly reduced because of vaccines discovered using science—measles, pertussis, diphtheria, tetanus, typhoid, cholera, polio, hepatitis—and all are the legacy of Jenner. Fritz Haber and Norman Borlaug saved more than a billion human lives. They created the “Green Revolution” by producing hybrid agricultural crops and synthetic fertilizer. Humanity can now produce food for the seven billion people on the planet, and the starvation that does occur is related to political and economic factors rather than our collective ability to produce food.
If you examine major social and technological changes over the past century, most of them can be directly attributed to science. The world in 1914 was very different from the one we see today (Easterbrook, 2003). There were few cars and most people traveled by foot, horseback, or carriage. There were no radios, televisions, birth control pills, artificial hearts or antibiotics. Only a small portion of the world had telephones, refrigeration or electricity. These days we find that 80% of all households have television and 84% have electricity. It is estimated that three quarters of the world’s population has access to a mobile phone! Life expectancy was 47 years in 1900 and 79 years in 2010. The percentage of hungry and malnourished people in the world has dropped substantially across the globe. Even average levels of I.Q. have risen dramatically over the past century due to better nutrition and schooling.
What Is Science?
What is this process we call “science,” which has so dramatically changed the world? Ancient people were more likely to believe in magical and supernatural explanations for natural phenomena such as solar eclipses or thunderstorms. By contrast, scientifically minded people try to figure out the natural world through testing and observation. Specifically, science is the use of systematic observation in order to acquire knowledge. For example, children in a science class might combine vinegar and baking soda to observe the bubbly chemical reaction. These empirical methods are wonderful ways to learn about the physical and biological world. Science is not magic—it will not solve all human problems, and might not answer all our questions about behavior. Nevertheless, it appears to be the most powerful method we have for acquiring knowledge about the observable world. The essential elements of science are as follows:
- Systematic observation is the core of science. Scientists observe the world in a very organized way. We often measure the phenomenon we are observing. We record our observations so that memory biases are less likely to enter into our conclusions. We are systematic in that we try to observe under controlled conditions, and also systematically vary the conditions of our observations so that we can see variations in the phenomena and understand when they occur and do not occur.
- Observation leads to hypotheses we can test. When we develop hypotheses and theories, we state them in a way that can be tested. For example, you might make the claim that candles made of paraffin wax burn more slowly than do candles of the exact same size and shape made from beeswax. This claim can be readily tested by timing the burning speed of candles made from these materials (see the sketch following this list).
- Science is democratic. People in ancient times may have been willing to accept the views of their kings or pharaohs as absolute truth. These days, however, people are more likely to want to be able to form their own opinions and debate conclusions. Scientists are skeptical and have open discussions about their observations and theories. These debates often occur as scientists publish competing findings with the idea that the best data will win the argument.
- Science is cumulative. We can learn the important truths discovered by earlier scientists and build on them. Any physics student today knows more about physics than Sir Isaac Newton did even though Newton was possibly the most brilliant physicist of all time. A crucial aspect of scientific progress is that after we learn of earlier advances, we can build upon them and move farther along the path of knowledge.
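To see how the candle claim from the list above becomes a concrete test, here is a minimal sketch with invented burn times; a full analysis would also ask whether the observed difference is larger than chance alone would produce:

```python
from statistics import mean, stdev

# Invented burn times (in minutes) for identically sized candles.
paraffin = [182, 175, 190, 178, 185]
beeswax = [204, 198, 210, 201, 195]

print(f"paraffin: {mean(paraffin):.0f} min (sd {stdev(paraffin):.0f})")
print(f"beeswax:  {mean(beeswax):.0f} min (sd {stdev(beeswax):.0f})")
print(f"observed difference: {mean(beeswax) - mean(paraffin):.0f} min")
```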
Psychology as a Science
Even in modern times many people are skeptical that psychology is really a science. To some degree this doubt stems from the fact that many psychological phenomena such as depression, intelligence, and prejudice do not seem to be directly observable in the same way that we can observe the changes in ocean tides or the speed of light. Because thoughts and feelings are invisible, many early psychological researchers chose to focus on behavior. You might have noticed that some people act in a friendly and outgoing way while others appear to be shy and withdrawn. If you have made these types of observations then you are acting just like early psychologists who used behavior to draw inferences about various types of personality. By using behavioral measures and rating scales it is possible to measure thoughts and feelings. This is similar to how other researchers explore “invisible” phenomena such as the way that educators measure academic performance or economists measure quality of life.
One important pioneering researcher was Francis Galton, a cousin of Charles Darwin who lived in England during the late 1800s. Galton used patches of color to test people’s ability to distinguish between them. He also invented the self-report questionnaire, in which people offered their own expressed judgments or opinions on various matters. Galton was able to use self-reports to examine—among other things—people’s differing ability to accurately judge distances.
Although he lacked a modern understanding of genetics, Galton also had the idea that scientists could look at the behaviors of identical and fraternal twins to estimate the degree to which genetic and social factors contribute to personality, a puzzling issue we currently refer to as the “nature-nurture question.”
In modern times psychology has become more sophisticated. Researchers now use better measures, more sophisticated study designs and better statistical analyses to explore human nature. Simply take the example of studying the emotion of happiness. How would you go about studying happiness? One straightforward method is to simply ask people about their happiness and to have them use a numbered scale to indicate their feelings. There are, of course, several problems with this. People might lie about their happiness, might not be able to accurately report on their own happiness, or might not use the numerical scale in the same way. With these limitations in mind, modern psychologists employ a wide range of methods to assess happiness.
They use, for instance, “peer report measures” in which they ask close friends and family members about the happiness of a target individual. Researchers can then compare these ratings to the self-report ratings and check for discrepancies. Researchers also use memory measures, with the idea that dispositionally positive people have an easier time recalling pleasant events and negative people have an easier time recalling unpleasant events. Modern psychologists even use biological measures such as saliva cortisol samples (cortisol is a stress related hormone) or fMRI images of brain activation (the left pre-frontal cortex is one area of brain activity associated with good moods).
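One simple way to check how well self-reports and peer reports converge is to correlate the two sets of ratings and flag large discrepancies. The sketch below uses invented happiness ratings for eight people:

```python
from statistics import correlation  # available in Python 3.10+

# Invented 1-10 happiness ratings for eight people.
self_report = [7, 4, 8, 6, 9, 3, 5, 7]
peer_report = [6, 5, 8, 4, 9, 4, 5, 6]

print(f"self-peer agreement: r = {correlation(self_report, peer_report):.2f}")

# Flag people whose self-view and reputation diverge by 2 or more points.
flagged = [i for i, (s, p) in enumerate(zip(self_report, peer_report))
           if abs(s - p) >= 2]
print("large discrepancies at indices:", flagged)
```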
Despite our various methodological advances, it is true that psychology is still a very young science. While physics and chemistry are hundreds of years old, psychology is barely a hundred and fifty years old, and most of our major findings have occurred only in the last 60 years. There are legitimate limits to psychological science, but it is a science nonetheless.
Psychological Science is Useful
Psychological science is useful for creating interventions that help people live better lives. A growing body of research is concerned with determining which therapies are the most and least effective for the treatment of psychological disorders.
For example, many studies have shown that cognitive behavioral therapy can help many people suffering from depression and anxiety disorders (Butler, Chapman, Forman, & Beck, 2006; Hoffman & Smits, 2008). In contrast, research reveals that some types of therapies actually might be harmful on average (Lilienfeld, 2007).
In organizational psychology, a number of psychological interventions have been found by researchers to produce greater productivity and satisfaction in the workplace (e.g., Guzzo, Jette, & Katzell, 1985). Human factors engineers have greatly increased the safety and utility of the products we use. For example, the human factors psychologist Alphonse Chapanis and other researchers redesigned the cockpit controls of aircraft to make them less confusing and easier to respond to, and this led to a decrease in pilot errors and crashes.
Forensic sciences have made courtroom decisions more valid. We all know of the famous cases of imprisoned persons who have been exonerated because of DNA evidence. Equally dramatic cases hinge on psychological findings. For instance, psychologist Elizabeth Loftus has conducted research demonstrating the limits and unreliability of eyewitness testimony and memory. Thus, psychological findings are having practical importance in the world outside the laboratory. Psychological science has experienced enough success to demonstrate that it works, but there remains a huge amount yet to be learned.
Ethics of Scientific Psychology
Psychology differs somewhat from the natural sciences such as chemistry in that researchers conduct studies with human research participants. Because of this there is a natural tendency to want to guard research participants against potential psychological harm. For example, it might be interesting to see how people handle ridicule but it might not be advisable to ridicule research participants.
Scientific psychologists follow a specific set of guidelines for research known as a code of ethics. There are extensive ethical guidelines for how human participants should be treated in psychological research (Diener & Crandall, 1978; Sales & Folkman, 2000). Following are a few highlights:
- Informed consent. In general, people should know when they are involved in research, and understand what will happen to them during the study. They should then be given a free choice as to whether to participate.
- Confidentiality. Information that researchers learn about individual participants should not be made public without the consent of the individual.
- Privacy. Researchers should not make observations of people in private places such as their bedrooms without their knowledge and consent. Researchers should not seek confidential information from others, such as school authorities, without consent of the participant or their guardian.
- Benefits. Researchers should consider the benefits of their proposed research and weigh these against potential risks to the participants. People who participate in psychological studies should be exposed to risk only if they fully understand these risks and only if the likely benefits clearly outweigh the risks.
- Deception. Some researchers need to deceive participants in order to hide the true nature of the study. This is typically done to prevent participants from modifying their behavior in unnatural ways. Researchers are required to “debrief” their participants after they have completed the study. Debriefing is an opportunity to educate participants about the true nature of the study.
Why Learn About Scientific Psychology?
I once had a psychology professor who asked my class why we were taking a psychology course. Our responses captured the range of reasons people want to learn about psychology:
- To understand ourselves
- To understand other people and groups
- To be better able to influence others, for example, in socializing children or motivating employees
- To learn how to better help others and improve the world, for example, by doing effective psychotherapy
- To learn a skill that will lead to a profession such as being a social worker or a professor
- To learn how to evaluate the research claims you hear or read about
- Because it is interesting, challenging, and fun! People want to learn about psychology because this is exciting in itself, regardless of other positive outcomes it might have. Why do we see movies? Because they are fun and exciting, and we need no other reason. Thus, one good reason to study psychology is that it can be rewarding in itself.
Conclusions
The science of psychology is an exciting adventure. Whether you will become a scientific psychologist, an applied psychologist, or an educated person who knows about psychological research, this field can influence your life and provide fun, rewards, and understanding. My hope is that you learn a lot from the modules in this e-text, and also that you enjoy the experience! I love learning about psychology and neuroscience, and hope you will too!
Check Your Knowledge
To help you with your studying, we’ve included some practice questions for this module. These questions do not necessarily address all content in this module. They are intended as practice, and you are responsible for all of the content in this module even if there is no associated practice question. To promote deeper engagement with the material, we encourage you to create some questions of your own for your practice. You can then also return to these self-generated questions later in the course to test yourself.
Vocabulary
Empirical methods
Approaches to inquiry that are tied to actual measurement and observation.
Ethics
Professional guidelines that offer researchers a template for making decisions that protect research participants from potential harm and that help steer scientists away from conflicts of interest or other situations that might compromise the integrity of their research.
Hypotheses
A logical idea that can be tested.
Systematic observation
The careful observation of the natural world with the aim of better understanding it. Observations provide the basic data that allow scientists to track, tally, or otherwise organize information about the natural world.
Theories
Groups of closely related phenomena or observations.
How to cite this Chapter using APA Style:
Diener, E. (2019). Why science? Adapted for use by Queen's University. Original chapter in R. Biswas-Diener & E. Diener (Eds), Noba textbook series: Psychology. Champaign, IL: DEF publishers. Retrieved from http://noba.to/qu4abpzy
Copyright and Acknowledgment:
This material is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit: http://creativecommons.org/licenses/by-nc-sa/4.0/deed.en_US.
This material is attributed to the Diener Education Fund (copyright © 2018) and can be accessed via this link: http://noba.to/qu4abpzy.
Additional information about the Diener Education Fund (DEF) can be accessed here.
Original chapter by Erin I. Smith adapted by the Queen's University Psychology Department
This Open Access chapter was originally written for the NOBA project. Information on the NOBA project can be found below.
We are bombarded every day with claims about how the world works, claims that have a direct impact on how we think about and solve problems in society and our personal lives. This module explores important considerations for evaluating the trustworthiness of such claims by contrasting scientific thinking with everyday observations (also known as “anecdotal evidence”).
Learning Objectives
- Compare and contrast conclusions based on scientific and everyday inductive reasoning.
- Understand why scientific conclusions and theories are trustworthy, even though they cannot be proven with certainty.
- Articulate what it means to think like a psychological scientist, considering qualities of good scientific explanations and theories.
- Discuss science as a social activity, comparing and contrasting facts and values.
Introduction
Why are some people so much happier than others? Is it harmful for children to have imaginary companions? How might students study more effectively?
Even if you’ve never considered these questions before, you probably have some guesses about their answers. Maybe you think getting rich or falling in love leads to happiness. Perhaps you view imaginary friends as expressions of a dangerous lack of realism. What’s more, if you were to ask your friends, they would probably also have opinions about these questions—opinions that may even differ from your own.
A quick internet search would yield even more answers. We live in the “Information Age,” with people having access to more explanations and answers than at any other time in history. But, although the quantity of information is continually increasing, it’s always good practice to consider the quality of what you read or watch: Not all information is equally trustworthy. The trustworthiness of information is especially important in an era when “fake news,” urban myths, misleading “click-bait,” and conspiracy theories compete for our attention alongside well-informed conclusions grounded in evidence. Determining which information is well-founded is a crucial concern and a central task of science. Science is a way of using observable data to help explain and understand the world around us in a trustworthy way.
In this module, you will learn about scientific thinking. You will come to understand how scientific research informs our knowledge and helps us create theories. You will also come to appreciate how scientific reasoning is different from the types of reasoning people often use to form personal opinions.
Scientific Versus Everyday Reasoning
Each day, people offer statements as if they are facts, such as, “It looks like rain today,” or, “Dogs are very loyal.” These conclusions represent hypotheses about the world: best guesses as to how the world works. Scientists also draw conclusions, claiming things like, “There is an 80% chance of rain today,” or, “Dogs tend to protect their human companions.” You’ll notice that the two examples of scientific claims use less certain language and are more likely to be associated with probabilities. Understanding the similarities and differences between scientific and everyday (non-scientific) statements is essential to our ability to accurately evaluate the trustworthiness of various claims.
Scientific and everyday reasoning both employ induction: drawing general conclusions from specific observations. For example, a person’s opinion that cramming for a test increases performance may be based on her memory of passing an exam after pulling an all-night study session. Similarly, a researcher’s conclusion against cramming might be based on studies comparing the test performances of people who studied the material in different ways (e.g., cramming versus study sessions spaced out over time). In these scenarios, both scientific and everyday conclusions are drawn from a limited sample of potential observations.
Given such contradictory results, the process of induction alone does not seem sufficient to provide trustworthy information. What should a student who wants to perform well on exams do? One source of information encourages her to cram, while another suggests that spacing out her studying time is the best strategy. To make the best decision with the information at hand, we need to appreciate the differences between personal opinions and scientific statements, which requires an understanding of science and the nature of scientific reasoning.
There are generally agreed-upon features that distinguish scientific thinking (and the theories and data generated by it) from everyday thinking. A short list of some of the commonly cited features of scientific theories and data is shown in Table 1.
One additional feature of modern science not included in this list but prevalent in scientists’ thinking and theorizing is falsifiability, a feature that has so permeated scientific practice that it warrants additional clarification. In the early 20th century, Karl Popper (1902–1994) suggested that science can be distinguished from pseudoscience (or just everyday reasoning) because scientific claims are capable of being falsified. That is, a claim can conceivably be demonstrated to be untrue. For example, a person might claim that “all people are right-handed.” This claim can be tested and, ultimately, thrown out because it can be shown to be false: There are people who are left-handed. An easy rule of thumb is to not get confused by the term “falsifiable” but to understand that—more or less—it means testable.
On the other hand, some claims cannot be tested and falsified. Imagine, for instance, that a magician claims that he can teach people to move objects with their minds. The trick, he explains, is to truly believe in one’s ability for it to work. When his students fail to budge chairs with their minds, the magician scolds, “Obviously, you don’t truly believe.” The magician’s claim does not qualify as falsifiable because there is no way to disprove it. It is unscientific.
Popper was particularly irritated about nonscientific claims because he believed they were a threat to the science of psychology. Specifically, he was dissatisfied with Freud’s explanations for mental illness. Freud believed that when a person suffers a mental illness it is often due to problems stemming from childhood. For instance, imagine a person who grows up to be an obsessive perfectionist. If she were raised by messy, relaxed parents, Freud might argue that her adult perfectionism is a reaction to her early family experiences—an effort to maintain order and routine instead of chaos. Alternatively, imagine the same person being raised by harsh, orderly parents. In this case, Freud might argue that her adult tidiness is simply her internalizing her parents’ way of being. As you can see, according to Freud’s rationale, both opposing scenarios are possible; no matter what the disorder, Freud’s theory could explain its childhood origin—thus failing to meet the principle of falsifiability.
Popper argued against statements that could not be falsified. He claimed that they blocked scientific progress: There was no way to advance, refine, or refute knowledge based on such claims. Popper’s solution was a powerful one: If science showed all the possibilities that were not true, we would be left only with what is true. That is, we need to be able to articulate, beforehand, the kinds of evidence that will disprove our hypothesis and cause us to abandon it.
This may seem counterintuitive. For example, if a scientist wanted to establish a comprehensive understanding of why car accidents happen, she would systematically test all potential causes: alcohol consumption, speeding, using a cell phone, fiddling with the radio, wearing sandals, eating, chatting with a passenger, etc. A complete understanding could only be achieved once all possible explanations were explored and either falsified or not. After all the testing was concluded, the evidence would be evaluated against the criteria for falsification, and only the real causes of accidents would remain. The scientist could dismiss certain claims (e.g., sandals lead to car accidents) and keep only those supported by research (e.g., using a mobile phone while driving increases risk). It might seem absurd that a scientist would need to investigate so many alternative explanations, but it is exactly how we rule out bad claims. Of course, many explanations are complicated and involve multiple causes—as with car accidents, as well as psychological phenomena.
Test Yourself 1: Can It Be Falsified?
Which of the following hypotheses can be falsified? For each, be sure to consider what kind of data could be collected to demonstrate that a statement is not true.
A. Chocolate tastes better than pasta.
B. We live in the most violent time in history.
C. Time can run backward as well as forward.
D. There are planets other than Earth that have water on them.
[See answer at end of this module]
Although the idea of falsification remains central to scientific data and theory development, these days it’s not used strictly the way Popper originally envisioned it. To begin with, scientists aren’t solely interested in demonstrating what isn’t. Scientists are also interested in providing descriptions and explanations for the way things are. We want to describe different causes and the various conditions under which they occur. We want to discover when young children start speaking in complete sentences, for example, or whether people are happier on the weekend, or how exercise impacts depression. These explorations require us to draw conclusions from limited samples of data. In some cases, these data seem to fit with our hypotheses and in others they do not. This is where interpretation and probability come in.
The Interpretation of Research Results
Imagine a researcher wanting to examine the hypothesis—a specific prediction based on previous research or scientific theory—that caffeine enhances memory. She knows there are several published studies that suggest this might be the case, and she wants to further explore the possibility. She designs an experiment to test this hypothesis. She randomly assigns some participants a cup of fully caffeinated tea and some a cup of herbal tea. All the participants are instructed to drink up, study a list of words, then complete a memory test. There are three possible outcomes of this proposed study:
- The caffeine group performs better (support for the hypothesis).
- The no-caffeine group performs better (evidence against the hypothesis).
- There is no difference in the performance between the two groups (also evidence against the hypothesis).
Let’s look, from a scientific point of view, at how the researcher should interpret each of these three possibilities.
First, if the results of the memory test reveal that the caffeine group performs better, this is a piece of evidence in favor of the hypothesis: It appears, at least in this case, that caffeine is associated with better memory. It does not, however, prove that caffeine is associated with better memory. There are still many questions left unanswered. How long does the memory boost last? Does caffeine work the same way with people of all ages? Is there a difference in memory performance between people who drink caffeine regularly and those who never drink it? Could the results be a freak occurrence? Because of these uncertainties, we do not say that a study—especially a single study—proves a hypothesis. Instead, we say the results of the study offer evidence in support of the hypothesis. Even if we tested this across 10,000 or 100,000 people, we still could not use the word “proven” to describe this phenomenon. This is because inductive reasoning is based on probabilities. Probabilities are always a matter of degree; they may be extremely likely or unlikely. Science is better at shedding light on the likelihood—or probability—of something than at proving it. In this way, data are still highly useful even if they don’t fit Popper’s absolute standards.
The science of meteorology helps illustrate this point. You might look at your local weather forecast and see a high likelihood of rain. This is because the meteorologist has used inductive reasoning to create her forecast. She has taken current observations—lots of dense clouds coming toward your city—and compared them to historical weather patterns associated with rain, making a reasonable prediction of a high probability of rain. The meteorologist has not proven it will rain, however, by pointing out the oncoming clouds.
Proof is more associated with deductive reasoning. Deductive reasoning starts with general principles that are applied to specific instances (the reverse of inductive reasoning). When the general principles, or premises, are true, and the structure of the argument is valid, the conclusion is, by definition, proven; it must be so. A deductive truth must apply in all relevant circumstances. For example, all living cells contain DNA. From this, you can reason—deductively—that any specific living cell (of an elephant, or a person, or a snake) will therefore contain DNA. Given the complexity of psychological phenomena, which involve many contributing factors, it is nearly impossible to make these types of broad statements with certainty.
Test Yourself 2: Inductive or Deductive?
- The stove was on and the water in the pot was boiling over. The front door was standing open. These clues suggest the homeowner left unexpectedly and in a hurry.
- Gravity is associated with mass. Because the moon has a smaller mass than the Earth, it should have weaker gravity.
- Students don’t like to pay for high-priced textbooks. It is likely that many students in the class will opt not to purchase a book.
- To earn a college degree, students need 100 credits. Janine has 85 credits, so she cannot graduate.
[See answer at end of this module]
The second possible result from the caffeine-memory study is that the group who had no caffeine demonstrates better memory. This result is the opposite of what the researcher expects to find (her hypothesis). Here, the researcher must admit the evidence does not support her hypothesis. She must be careful, however, not to extend that interpretation to other claims. For example, finding increased memory in the no-caffeine group would not be evidence that caffeine harms memory. Again, there are too many unknowns. Is this finding a freak occurrence, perhaps based on an unusual sample? Is there a problem with the design of the study? The researcher doesn’t know. She simply knows that she was not able to observe support for her hypothesis.
There is at least one additional consideration: The researcher originally developed her caffeine-benefits-memory hypothesis based on conclusions drawn from previous research. That is, previous studies found results that suggested caffeine boosts memory. The researcher’s single study should not outweigh the conclusions of many studies. Perhaps the earlier research employed participants of different ages or who had different baseline levels of caffeine intake. This new study simply becomes a piece of fabric in the overall quilt of studies of the caffeine–memory relationship. It does not, on its own, definitively falsify the hypothesis.
Finally, it’s possible that the results show no difference in memory between the two groups. How should the researcher interpret this? How would you? In this case, the researcher once again has to admit that she has not found support for her hypothesis.
Interpreting the results of a study—regardless of outcome—rests on the quality of the observations from which those results are drawn. If you learn, say, that each group in a study included only four participants, or that they were all over 90 years old, you might have concerns. Specifically, you should be concerned that the observations, even if accurate, aren’t representative of the general population. This is one of the defining differences between conclusions drawn from personal anecdotes and those drawn from scientific observations. Anecdotal evidence, derived from personal experience and unsystematic observations (e.g., “common sense”), is limited by the quality and representativeness of observations, and by memory shortcomings. Well-designed research, on the other hand, relies on observations that are systematically recorded, of high quality, and representative of the population it claims to describe.
Why Should I Trust Science If It Can’t Prove Anything?
It’s worth delving a bit deeper into why we ought to trust the scientific inductive process, even when it relies on limited samples that don’t offer absolute “proof.” To do this, let’s examine a widespread practice in psychological science: null-hypothesis significance testing.
To understand this concept, let’s begin with another research example. Imagine, for instance, a researcher is curious about the ways maturity affects academic performance. She might have a hypothesis that mature students are more likely to be responsible about studying and completing homework and, therefore, will do better in their courses. To test this hypothesis, the researcher needs a measure of maturity and a measure of course performance. She might calculate the correlation—relationship—between student age (her measure of maturity) and points earned in a course (her measure of academic performance). Ultimately, the researcher is interested in the likelihood, or probability, that these two variables closely relate to one another. Null-hypothesis significance testing (NHST) assesses the probability that the collected data (the observations) would be the same if there were no relationship between the variables in the study. Using our example, the NHST would test the probability that the researcher would find a link between age and class performance if there were, in reality, no such link.
Now, here’s where it gets a little complicated. NHST involves a null hypothesis, a statement that two variables are not related (in this case, that student maturity and academic performance are not related in any meaningful way). NHST also involves an alternative hypothesis, a statement that two variables are related (in this case, that student maturity and academic performance go together). To evaluate these two hypotheses, the researcher collects data. The researcher then compares what she expects to find (probability) with what she actually finds (the collected data) to determine whether she can falsify, or reject, the null hypothesis in favor of the alternative hypothesis.
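For readers who want to see this procedure in action, here is a minimal sketch in Python, assuming the SciPy library; the student ages and course points below are invented for illustration and do not come from any actual study. It shows how a researcher might compute the correlation and the probability used to evaluate the null hypothesis:

```python
# Hypothetical sketch of the maturity-and-grades example.
# The data below are invented for illustration only.
from scipy.stats import pearsonr

ages = [18, 19, 19, 20, 21, 22, 23, 25, 28, 31]    # measure of maturity
points = [62, 70, 65, 74, 71, 78, 75, 80, 84, 88]  # measure of course performance

# pearsonr returns the correlation coefficient and a p-value: the
# probability of observing a correlation this strong if the null
# hypothesis (no relationship) were actually true.
r, p = pearsonr(ages, points)
print(f"correlation r = {r:.2f}, p = {p:.4f}")

alpha = 0.05  # conventional significance threshold
if p < alpha:
    print("Reject the null hypothesis in favor of the alternative.")
else:
    print("Fail to reject the null hypothesis.")
```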
How does she do this? By looking at the distribution of the data. The distribution is the spread of values—in our example, the numeric values of students’ scores in the course. The researcher will test her hypothesis by comparing the observed distribution of grades earned by older students to those earned by younger students, recognizing that some distributions are more or less likely. Your intuition tells you, for example, that the chances of every single person in the course getting a perfect score are lower than their scores being distributed across all levels of performance.
The researcher can use a probability table to assess the likelihood of any distribution she finds in her class. These tables reflect the work, over the past 200 years, of mathematicians and scientists from a variety of fields. You can see, in Table 2a, an example of an expected distribution if the grades were normally distributed (most are average, and relatively few are amazing or terrible). In Table 2b, you can see possible results of this imaginary study, and can clearly see how they differ from the expected distribution.
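To make the comparison of distributions concrete, the sketch below (again Python with SciPy; the grade counts are invented, in the spirit of Tables 2a and 2b) contrasts an observed distribution of grades with the roughly normal distribution expected by chance:

```python
# Hypothetical comparison of an observed grade distribution with an
# expected (roughly normal) one; all counts are invented for illustration.
from scipy.stats import chisquare

#            F  D   C   B   A
expected = [ 2, 8, 20,  8,  2]  # most grades average, few extreme
observed = [ 1, 3, 10, 14, 12]  # older students skewing toward high grades

# chisquare measures how unlikely the observed spread is, given the
# expected one. A small p-value means this distribution would be
# improbable if chance alone were operating.
stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p:.4f}")
```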
In the process of testing these hypotheses, there are four possible outcomes. These are determined by two factors: 1) reality, and 2) what the researcher finds (see Table 3). The best possible outcome is accurate detection. This means that the researcher’s conclusion mirrors reality. In our example, let’s pretend the more mature students do perform slightly better. If this is what the researcher finds in her data, her analysis qualifies as an accurate detection of reality. Another form of accurate detection is when a researcher finds no evidence for a phenomenon, but that phenomenon doesn’t actually exist anyway! Using this same example, let’s now pretend that maturity has nothing to do with academic performance. Perhaps academic performance is instead related to intelligence or study habits. If the researcher finds no evidence for a link between maturity and grades and none actually exists, she will have also achieved accurate detection.
There are a couple of ways that research conclusions might be wrong. One is referred to as a type I error: the researcher concludes there is a relationship between two variables when, in reality, there is not. Back to our example: Let’s now pretend there’s no relationship between maturity and grades, but the researcher still finds one. Why does this happen? It may be that her sample, by chance, includes older students who also happen to have better study habits and therefore perform better. The researcher has “found” a relationship (the data appear to show age as significantly correlated with academic performance), but the apparent relationship is purely coincidental: these specific older students in this particular sample have better-than-average study habits (the real cause of the relationship). They may have always had superior study habits, even when they were young.
Another possible outcome of NHST is a type II error, when the data fail to show a relationship between variables that actually exists. In our example, this time pretend that maturity is —in reality—associated with academic performance, but the researcher doesn’t find it in her sample. Perhaps it was just her bad luck that her older students are just having an off day, suffering from test anxiety, or were uncharacteristically careless with their homework: the peculiarities of her particular sample, by chance, prevent the researcher from identifying the real relationship between maturity and academic performance.
These types of errors might worry you: is there just no way to tell whether data are any good or not? Researchers share your concerns, and address them by using probability values (p-values) to set a threshold for type I or type II errors. When researchers write that a particular finding is “significant at a p < .05 level,” they’re saying that if the same study were repeated 100 times, we should expect this result to occur, by chance, fewer than five times. That is, in this case, a type I error is unlikely. Scholars sometimes argue over the exact threshold that should be used for probability. The most common in psychological science are .05 (5% chance), .01 (1% chance), and .001 (1/10th of 1% chance). Remember, psychological science doesn’t rely on definitive proof; it’s about the probability of seeing a specific result. This is also why it’s so important that scientific findings be replicated in additional studies.
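What a threshold like p < .05 means can also be demonstrated by simulation. The sketch below (Python with NumPy and SciPy; the population values are arbitrary assumptions) generates many studies in which the null hypothesis is true by construction and counts how often a test nevertheless comes out “significant,” that is, how often a type I error would occur:

```python
# Simulating the type I error rate: both groups are drawn from the SAME
# population, so any "significant" result is a false positive.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(seed=1)
n_studies = 10_000
false_positives = 0

for _ in range(n_studies):
    group_a = rng.normal(loc=70, scale=10, size=30)  # arbitrary population values
    group_b = rng.normal(loc=70, scale=10, size=30)  # same population: no real effect
    _, p = ttest_ind(group_a, group_b)
    if p < 0.05:
        false_positives += 1

# Expect roughly 5% of studies to cross the .05 threshold by chance alone.
print(f"observed type I error rate: {false_positives / n_studies:.3f}")
```

Running this shows that about 5% of the simulated studies produce a false positive, which is exactly the risk the .05 threshold accepts.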
It’s because of such methodologies that science is generally trustworthy. Not all claims and explanations are equal; some conclusions are better bets, so to speak. Scientific claims are more likely to be correct and predict real outcomes than “common sense” opinions and personal anecdotes. This is because researchers consider how to best prepare and measure their subjects, systematically collect data from large and—ideally—representative samples, and test their findings against probability.
Scientific Theories
The knowledge generated from research is organized according to scientific theories. A scientific theory is a comprehensive framework for making sense of evidence regarding a particular phenomenon. When scientists talk about a theory, they mean something different from how the term is used in everyday conversation. In common usage, a theory is an educated guess—as in, “I have a theory about which team will make the playoffs,” or, “I have a theory about why my sister is always running late for appointments.” Both of these beliefs are liable to be heavily influenced by many untrustworthy factors, such as personal opinions and memory biases. A scientific theory, however, enjoys support from many research studies, collectively providing evidence, including, but not limited to, that which has falsified competing explanations. A key component of good theories is that they describe, explain, and predict in a way that can be empirically tested and potentially falsified.
Theories are open to revision if new evidence comes to light that compels reexamination of the accumulated, relevant data. In ancient times, for instance, people thought the Sun traveled around the Earth. This seemed to make sense and fit with many observations. In the 16th century, however, astronomers began systematically charting visible objects in the sky, and, over a 50-year period, with repeated testing, critique, and refinement, they provided evidence for a revised theory: The Earth and other cosmic objects revolve around the Sun. In science, we believe what the best and most data tell us. If better data come along, we must be willing to change our views in accordance with the new evidence.
Is Science Objective?
Thomas Kuhn (2012), a historian of science, argued that science, as an activity conducted by humans, is a social activity. As such, it is—according to Kuhn—subject to the same psychological influences of all human activities. Specifically, Kuhn suggested that there is no such thing as objective theory or data; all of science is informed by values. Scientists cannot help but let personal/cultural values, experiences, and opinions influence the types of questions they ask and how they make sense of what they find in their research. Kuhn’s argument highlights a distinction between facts (information about the world), and values (beliefs about the way the world is or ought to be). This distinction is an important one, even if it is not always clear.
To illustrate the relationship between facts and values, consider the problem of global warming. A vast accumulation of evidence (facts) substantiates the adverse impact that human activity has on the levels of greenhouse gases in Earth’s atmosphere leading to changing weather patterns. There is also a set of beliefs (values), shared by many people, that influences their choices and behaviors in an attempt to address that impact (e.g., purchasing electric vehicles, recycling, bicycle commuting). Our values—in this case, that Earth as we know it is in danger and should be protected—influence how we engage with facts. People (including scientists) who strongly endorse this value, for example, might be more attentive to research on renewable energy.
The primary point of this illustration is that (contrary to the image of scientists as outside observers to the facts, gathering them neutrally and without bias from the natural world) all science—especially social sciences like psychology—involves values and interpretation. As a result, science functions best when people with diverse values and backgrounds work collectively to understand complex natural phenomena.
Indeed, science can benefit from multiple perspectives. One approach to achieving this is through levels of analysis. Levels of analysis is the idea that a single phenomenon may be explained at different levels simultaneously. Remember the question concerning cramming for a test versus studying over time? It can be answered at a number of different levels of analysis. At a low level, we might use brain scanning technologies to investigate whether biochemical processes differ between the two study strategies. At a higher level—the level of thinking—we might investigate processes of decision making (what to study) and ability to focus, as they relate to cramming versus spaced practice. At even higher levels, we might be interested in real world behaviors, such as how long people study using each of the strategies. Similarly, we might be interested in how the presence of others influences learning across these two strategies. The levels-of-analysis perspective suggests that one level is not more correct—or truer—than another; their appropriateness depends on the specifics of the question asked. Ultimately, it suggests that we cannot understand the world around us, including human psychology, by reducing phenomena to only the biochemistry of genes and the dynamics of neural networks. But neither can we understand humanity without considering the functions of the human nervous system.
Science in Context
There are many ways to interpret the world around us. People rely on common sense, personal experience, and faith, in combination and to varying degrees. All of these offer legitimate benefits to navigating one’s culture, and each offers a unique perspective, with specific uses and limitations. Science provides another important way of understanding the world and, while it has many crucial advantages, as with all methods of interpretation, it also has limitations. Understanding the limits of science—including its subjectivity and uncertainty—does not render it useless. Because it is systematic, using testable, reliable data, it can allow us to determine causality and can help us generalize our conclusions. By understanding how scientific conclusions are reached, we are better equipped to use science as a tool of knowledge.
Answer - Test Yourself 1: Can It Be Falsified?
Answer explained: There are four hypotheses presented. Basically, the question asks, “Which of these could be tested and demonstrated to be false?” We can eliminate answers A, B, and C. A is a matter of personal opinion. C is a concept for which there are currently no existing measures. B is a little trickier. A person could look at data on wars, assaults, and other forms of violence to draw a conclusion about which period is the most violent. The problem here is that we do not have data for all time periods, and there is no clear guide to which data should be used to address this hypothesis. The best answer is D, because we have the means to view other planets and to determine whether there is water on them (for example, Mars has ice).
Answer - Test Yourself 2: Inductive or Deductive
Answer explained: This question asks you to consider whether each of the four examples represents inductive or deductive reasoning. 1) Inductive—it is possible to draw the conclusion—the homeowner left in a hurry—from specific observations such as the stove being on and the door being open. 2) Deductive—starting with a general principle (gravity is associated with mass), we draw a conclusion about the moon having weaker gravity than does the Earth because it has smaller mass. 3) Deductive—starting with a general principle (students do not like to pay for textbooks) it is possible to make a prediction about likely student behavior (they will not purchase textbooks). Note that this is a case of prediction rather than using observations. 4) Deductive—starting with a general principle (students need 100 credits to graduate) it is possible to draw a conclusion about Janine (she cannot graduate because she has fewer than the 100 credits required).
Check Your Knowledge
To help you with your studying, we’ve included some practice questions for this module. These questions do not necessarily address all content in this module. They are intended as practice, and you are responsible for all of the content in this module even if there is no associated practice question. To promote deeper engagement with the material, we encourage you to create some questions of your own for your practice. You can then also return to these self-generated questions later in the course to test yourself.
Vocabulary
Anecdotal evidence
A piece of biased evidence, usually drawn from personal experience, used to support a conclusion that may or may not be correct.
Causality
In research, the determination that one variable causes—is responsible for—an effect.
Correlation
In statistics, the measure of relatedness of two or more variables.
Data (also called observations)
In research, information systematically collected for analysis and interpretation.
Deductive reasoning
A form of reasoning in which a given premise determines the interpretation of specific observations (e.g., All birds have feathers; since a duck is a bird, it has feathers).
Distribution
In statistics, the relative frequency that a particular value occurs for each possible value of a given variable.
Empirical
Concerned with observation and/or the ability to verify a claim.
Fact
Objective information about the world.
Falsify
In science, the ability of a claim to be tested and—possibly—refuted; a defining feature of science.
Generalize
In research, the degree to which one can extend conclusions drawn from the findings of a study to other groups or situations not included in the study.
Hypothesis
A tentative explanation that is subject to testing.
Induction
To draw general conclusions from specific observations.
Inductive reasoning
A form of reasoning in which a general conclusion is inferred from a set of observations (e. g., noting that “the driver in that car was texting; he just cut me off then ran a red light!” (a specific observation), which leads to the general conclusion that texting while driving is dangerous).
Levels of analysis
In science, the idea that a single phenomenon may be understood and explained at different, complementary levels simultaneously.
Null-hypothesis significance testing (NHST)
In statistics, a test used to determine the probability of obtaining a result at least as extreme as the one observed if the null hypothesis were actually true.
Objective
Being free of personal bias.
Population
In research, all the people belonging to a particular group (e.g., the population of left handed people).
Probability
A measure of the degree of certainty of the occurrence of an event.
Probability values
In statistics, the established threshold for determining whether a given value occurs by chance.
Pseudoscience
Beliefs or practices that are presented as being scientific, or which are mistaken for being scientific, but which are not scientific (e.g., astrology, which uses the positions of celestial bodies to make predictions about human behaviors and presents itself as founded in astronomy, the actual scientific study of celestial objects; astrology is a pseudoscience because its claims cannot be falsified, whereas astronomy is a legitimate scientific discipline).
Representative
In research, the degree to which a sample is a typical example of the population from which it is drawn.
Sample
In research, a number of people selected from a population to serve as an example of that population.
Scientific theory
An explanation for observed phenomena that is empirically well-supported, consistent, and fruitful (predictive).
Type I error
In statistics, the error of rejecting the null hypothesis when it is true.
Type II error
In statistics, the error of failing to reject the null hypothesis when it is false.
Value
Belief about the way things should be.
References
- Kuhn, T. S. (2012). The structure of scientific revolutions: 50th anniversary edition. Chicago, IL: University of Chicago Press.
- Kuhn, T. S. (2011). Objectivity, value judgment, and theory choice. In T. S. Kuhn (Ed.), The essential tension: Selected studies in scientific tradition and change (pp. 320–339). Chicago, IL: University of Chicago Press. Retrieved from http://ebookcentral.proquest.com
How to cite this Chapter using APA Style:
Smith, E. I. (2019). Thinking like a psychological scientist. Adapted for use by Queen's University. Original chapter in R. Biswas-Diener & E. Diener (Eds), Noba textbook series: Psychology. Champaign, IL: DEF publishers. Retrieved from http://noba.to/ymcbwrx4
Copyright and Acknowledgment:
This material is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit: http://creativecommons.org/licenses/by-nc-sa/4.0/deed.en_US.
This material is attributed to the Diener Education Fund (copyright © 2018) and can be accessed via this link: http://noba.to/ymcbwrx4
Additional information about the Diener Education Fund (DEF) can be accessed here.
Original chapter by Aneeq Ahmad adapted by the Queen’s University Psychology Department
This Open Access chapter was originally written for the NOBA project. Information on the NOBA project can be found below.
The mammalian nervous system is a complex biological organ, which enables many animals including humans to function in a coordinated fashion. The original design of this system is preserved across many animals through evolution; thus, adaptive physiological and behavioral functions are similar across many animal species. Comparative study of physiological functioning in the nervous systems of different animals lends insights into their behavior and their mental processing and makes it easier for us to understand the human brain and behavior. In addition, studying the development of the nervous system in a growing human provides a wealth of information about the changes in its form and the behaviors that result from those changes. The nervous system is divided into central and peripheral nervous systems, and the two heavily interact with one another. The peripheral nervous system controls volitional (somatic nervous system) and nonvolitional (autonomic nervous system) behaviors using cranial and spinal nerves. The central nervous system is divided into forebrain, midbrain, and hindbrain, and each division performs a variety of tasks; for example, the cerebral cortex in the forebrain houses sensory, motor, and associative areas that gather sensory information, process information for perception and memory, and produce responses based on incoming and inherent information. To study the nervous system, a number of methods have evolved over time; these methods include examining brain lesions, microscopy, electrophysiology, electroencephalography, and many scanning technologies.
Learning Objectives
- Describe and understand the development of the nervous system.
- Learn and understand the two important parts of the nervous system.
- Explain the two divisions of the peripheral nervous system and describe the different regions and areas of the central nervous system.
- Learn and describe different techniques of studying the nervous system. Understand which of these techniques are important for cognitive neuroscientists.
- Describe the reasons for studying different nervous systems in animals other than human beings. Explain what lessons we learn from the evolutionary history of this organ.
Evolution of the Nervous System
Many scientists and thinkers (Cajal, 1937; Crick & Koch, 1990; Edelman, 2004) believe that the human nervous system is the most complex machine known to man. Its complexity points to one undeniable fact—that it has evolved slowly over time from simpler forms. The evolution of the nervous system is intriguing not simply because we can marvel at this complicated biological structure, but because it inherits the lineage of a long history of less complex nervous systems (Figure 1) and documents a record of adaptive behaviors observed in life forms other than humans. Thus, evolutionary study of the nervous system is important, and it is the first step in understanding its design, its workings, and its functional interface with the environment.
The brains of some animals, like apes, monkeys, and rodents, are structurally similar to those of humans (Figure 1), while others are not (e.g., invertebrates, single-celled organisms). Does anatomical similarity of these brains suggest that behaviors that emerge in these species are also similar? Indeed, many animals display behaviors that are similar to those of humans; e.g., apes use nonverbal communication signals with their hands and arms that resemble nonverbal forms of communication in humans (Gardner & Gardner, 1969; Goodall, 1986; Knapp & Hall, 2009). If we study very simple behaviors, like physiological responses made by individual neurons, then brain-based behaviors of invertebrates (Kandel & Schwartz, 1982) look very similar to those of humans, suggesting that from time immemorial such basic behaviors have been conserved in the brains of many simple animal forms and in fact are the foundation of more complex behaviors in animals that evolved later (Bullock, 1984).
Even at the micro-anatomical level, we note that individual neurons differ in complexity across animal species. Human neurons exhibit more intricate complexity than those of other animals; for example, neuronal processes (dendrites) in humans have many more branch points, branches, and spines.
Complexity in the structure of the nervous system, both at the macro- and micro-levels, gives rise to complex behaviors. We can observe similar movements of the limbs, as in nonverbal communication, in apes and humans, but the variety and intricacy of nonverbal behaviors using hands in humans surpasses that of apes. Deaf individuals who use American Sign Language (ASL) express themselves nonverbally; they use this language with such fine gradation that many accents of ASL exist (Walker, 1987). Complexity of behavior with increasing complexity of the nervous system, especially the cerebral cortex, can be observed in the genus Homo (Figure 2). If we compare the sophistication of material culture in Homo habilis (2 million years ago; brain volume ~650 cm3) and Homo sapiens (300,000 years ago to now; brain volume ~1400 cm3), the evidence shows that Homo habilis used crude stone tools, whereas Homo sapiens has used modern tools to erect cities, develop written languages, embark on space travel, and study itself. All of this is due to the increasing complexity of the nervous system.
What has led to the complexity of the brain and nervous system through evolution, to its behavioral and cognitive refinement? Darwin (1859, 1871) proposed the two forces of natural and sexual selection as the engines behind this change. He prophesied that “psychology will be based on a new foundation, that of the necessary acquirement of each mental power and capacity by gradation”; that is, psychology will be based on evolution (Rosenzweig, Breedlove, & Leiman, 2002).
Development of the Nervous System
While the study of change in the nervous system over eons is immensely captivating, studying the change in a single brain during individual development is no less engaging. In many ways the ontogeny (development) of the nervous system in an individual mimics the evolutionary advancement of this structure observed across many animal species. During development, the nervous tissue emerges from the ectoderm (one of the three layers of the mammalian embryo) through the process of neural induction. This process causes the formation of the neural tube, which extends in a rostrocaudal (head-to-tail) plane. The tube, which is hollow, seams itself in the rostrocaudal direction. In some disease conditions, the neural tube does not close caudally and results in an abnormality called spina bifida. In this pathological condition, the lumbar and sacral segments of the spinal cord are disrupted.
As gestation progresses, the neural tube balloons up (cephalization) at the rostral end, and forebrain, midbrain, hindbrain, and the spinal cord can be visually delineated (day 40). About 50 days into gestation, six cephalic areas can be anatomically discerned (also see below for a more detailed description of these areas).
The progenitor cells (neuroblasts) that form the lining (neuroepithelium) of the neural tube generate all the neurons and glial cells of the central nervous system. During early stages of this development, neuroblasts rapidly divide and specialize into many varieties of neurons and glial cells, but this proliferation of cells is not uniform along the neural tube—that is why we see the forebrain and hindbrain expand into larger cephalic tissues than the midbrain. The neuroepithelium also generates a group of specialized cells that migrate outside the neural tube to form the neural crest. This structure gives rise to sensory and autonomic neurons in the peripheral nervous system.
The Structure of the Nervous System
The mammalian nervous system is divided into central and peripheral nervous systems.
The Peripheral Nervous System
The peripheral nervous system is divided into somatic and autonomic nervous systems (Figure 3). Whereas the somatic nervous system, consisting of cranial nerves (12 pairs) and spinal nerves (31 pairs), is under the volitional control of the individual in maneuvering bodily muscles, the autonomic nervous system, which also runs through these nerves, controls muscles and glands over which the individual has little volitional control. The main divisions of the autonomic nervous system that control visceral structures are the sympathetic and parasympathetic nervous systems.
At an appropriate cue (say a fear-inducing object like a snake), the sympathetic division generally energizes many muscles (e.g., heart) and glands (e.g., adrenals), causing activity and release of hormones that lead the individual to negotiate the fear-causing snake with fight-or-flight responses. Whether the individual decides to fight the snake or run away from it, either action requires energy; in short, the sympathetic nervous system says “go, go, go.” The parasympathetic nervous system, on the other hand, curtails undue energy mobilization into muscles and glands and modulates the response by saying “stop, stop, stop.” This push–pull tandem system regulates fight-or-flight responses in all of us.
The Central Nervous System
The central nervous system is divided into a number of important parts (see Figure 4), including the spinal cord, each specialized to perform a set of specific functions. The telencephalon, or cerebrum, is a newer development in the evolution of the mammalian nervous system. In humans, the cerebral cortex is about the size of a large napkin; crumpled into the skull, it forms furrows called sulci (singular form, sulcus). The bulges between sulci are called gyri (singular form, gyrus). The cortex is divided into two hemispheres, and each hemisphere is further divided into four lobes (Figure 5a), which have specific functions. The division of these lobes is based on two delineating sulci: the central sulcus divides the hemisphere into frontal and parietal-occipital lobes, and the lateral sulcus marks the temporal lobe, which lies below.
Just in front of the central sulcus lies an area called the primary motor cortex (precentral gyrus), which connects to the muscles of the body, and on volitional command moves them. From mastication to movements in the genitalia, the body map is represented on this strip (Figure 5b).
Some body parts, like fingers, thumbs, and lips, occupy a greater representation on the strip than, say, the trunk. This disproportionate representation of the body on the primary motor cortex is called the magnification factor (Rolls & Cowey, 1970) and is seen in other motor and sensory areas. At the lower end of the central sulcus, close to the lateral sulcus, lies Broca’s area (Figure 6b) in the left frontal lobe, which is involved with language production. Damage to this part of the brain led Pierre Paul Broca, a French neuroscientist, in 1861 to document many different forms of aphasias, in which his patients would lose the ability to speak or would retain partial speech impoverished in syntax and grammar (AAAS, 1880). It is no wonder that others have found subvocal rehearsal and central executive processes of working memory in this frontal lobe (Smith & Jonides, 1997, 1999).
Just behind the central sulcus, in the parietal lobe, lies the primary somatosensory cortex (Figure 6a) on the postcentral gyrus, which represents the whole body, receiving inputs from the skin and muscles. The primary somatosensory cortex parallels, abuts, and connects heavily to the primary motor cortex and resembles it in terms of areas devoted to bodily representation. All spinal and some cranial nerves (e.g., the facial nerve) send sensory signals from skin (e.g., touch) and muscles to the primary somatosensory cortex. Close to the lower (ventral) end of this strip, curved inside the parietal lobe, is the taste area (secondary somatosensory cortex), which is involved with taste experiences that originate from the tongue, pharynx, epiglottis, and so forth.
Just below the parietal lobe, and under the caudal end of the lateral fissure, in the temporal lobe, lies Wernicke’s area (Demonet et al., 1992). This area is involved with language comprehension and is connected to Broca’s area through the arcuate fasciculus, nerve fibers that connect these two regions. Damage to Wernicke’s area (Figure 6b) results in many kinds of agnosias; agnosia is defined as an inability to know or understand language and speech-related behaviors. So an individual may show word deafness, which is an inability to recognize spoken language, or word blindness, which is an inability to recognize written or printed language. Close in proximity to Wernicke’s area is the primary auditory cortex, which is involved with audition, and finally the brain region devoted to smell (olfaction) is tucked away inside the primary olfactory cortex (prepyriform cortex).
At the very back of the cerebral cortex lies the occipital lobe, housing the primary visual cortex. Optic nerves travel all the way to the thalamus (lateral geniculate nucleus, LGN) and then to the visual cortex, where the images that are received on the retina are projected (Hubel, 1995). In the past 50 to 60 years, the visual sense and visual pathways have been studied extensively, and our understanding of them has increased manifold. We now understand that all objects that form images on the retina are transformed (transduction) into neural language and handed down to the visual cortex for further processing. In the visual cortex, all attributes (features) of the image, such as color, texture, and orientation, are decomposed and processed by different visual cortical modules (Van Essen, Anderson & Felleman, 1992) and then recombined to give rise to a singular perception of the image in question.

If we cut the cerebral hemispheres in the middle, a new set of structures comes into view. Many of these perform functions vital to our being. For example, the limbic system contains a number of nuclei that process memory (hippocampus and fornix) and attention and emotions (cingulate gyrus); the globus pallidus is involved with motor movements and their coordination; the hypothalamus and thalamus are involved with drives, motivations, and trafficking of sensory and motor throughputs. The hypothalamus plays a key role in regulating endocrine hormones in conjunction with the pituitary gland, which extends from the hypothalamus through a stalk (infundibulum).
As we descend below the thalamus, the midbrain comes into view, with the superior and inferior colliculi, which process visual and auditory information; the substantia nigra, which is involved in the notorious Parkinson’s disease; and the reticular formation, which regulates arousal, sleep, and temperature. A little lower, in the hindbrain, the pons processes sensory and motor information employing the cranial nerves and works as a bridge that connects the cerebral cortex with the medulla, reciprocally transferring information back and forth between the brain and the spinal cord. The medulla oblongata processes breathing, digestion, heart and blood vessel function, swallowing, and sneezing. The cerebellum controls motor movement coordination, balance, equilibrium, and muscle tone.
The midbrain and the hindbrain, which make up the brain stem, culminate in the spinal cord. Whereas in the cerebral cortex the gray matter (neuronal cell bodies) lies outside and the white matter (myelinated axons) inside, in the spinal cord this arrangement reverses: the gray matter resides inside and the white matter outside. Paired nerves exit the spinal cord, some directed toward the back (dorsal) and others toward the front (ventral). The dorsal (afferent) nerves receive sensory information from skin and muscles, and the ventral (efferent) nerves send signals to muscles and organs to respond.
Studying the Nervous System
The study of the nervous system involves anatomical and physiological techniques that have improved over the years in efficiency and caliber. Clearly, gross morphology of the nervous system requires an eye-level view of the brain and the spinal cord. However, to resolve minute components, optical and electron microscopic techniques are needed.
Light microscopes and, later, electron microscopes have changed our understanding of the intricate connections that exist among nerve cells. For example, modern staining procedures (immunocytochemistry) make it possible to see selected neurons that are of one type or another or are affected by growth. With better resolution of the electron microscopes, fine structures like the synaptic cleft between the pre- and post-synaptic neurons can be studied in detail.
Along with the neuroanatomical techniques, a number of other methodologies aid neuroscientists in studying the function and physiology of the nervous system. Early on, lesion studies in animals (and the study of neurological damage in humans) provided information about the function of the nervous system, by ablating (removing) parts of the nervous system or using neurotoxins to destroy them and documenting the effects on behavior or mental processes. Later, more sophisticated microelectrode techniques were introduced, which made it possible to record from single neurons in animal brains and investigate their physiological functions. Such studies led to theories about how sensory and motor information are processed in the brain. To study many neurons (millions of them at a time), electroencephalographic (EEG) techniques were introduced. These methods are used to study how large ensembles of neurons, representing different parts of the nervous system, function together, with stimulation (event-related potentials) or without.

In addition, many scanning techniques that visualize the brain are used in conjunction with the methods mentioned above to understand the details of the structure and function of the brain. These include computerized axial tomography (CAT), which uses X-rays to capture many pictures of the brain and sandwiches them into 3-D models. The resolution of this method is inferior to that of magnetic resonance imaging (MRI), yet another way to capture brain images, which uses large magnets that bobble (precession) hydrogen nuclei in the brain. Although the resolution of MRI scans is much better than that of CAT scans, they do not provide any functional information about the brain. Positron emission tomography (PET) involves the acquisition of physiologic (functional) images of the brain based on the detection of positrons. Radio-labeled isotopes of certain chemicals, such as an analog of glucose (fluorodeoxyglucose), enter active nerve cells and emit positrons, which are captured and mapped into scans. Such scans show how the brain and its many modules become active (or not) when energized with the entering glucose analog. Disadvantages of PET scans include their invasiveness and poor spatial resolution; the latter is why modern PET machines are coupled with CAT scanners to gain better resolution of the functioning brain. Finally, to avoid the invasiveness of PET, functional MRI (fMRI) techniques were developed. Brain images based on the fMRI technique visualize brain function through changes in the flow of fluids (blood) in brain areas over time. These scans provide a wealth of functional information about the brain while the individual engages in a task, which is why the last two methods of brain scanning are very popular among cognitive neuroscientists.
Understanding the nervous system has been a long journey of inquiry, spanning several hundred years of meticulous studies carried out by some of the most creative and versatile investigators in the fields of philosophy, evolution, biology, physiology, anatomy, neurology, neuroscience, cognitive science, and psychology. Despite our profound understanding of this organ, its mysteries continue to surprise us, and its intricacies make us marvel at a complex structure unmatched in the universe.
A Video Exploration of the Limbic System
To help you visualize the limbic system, we recommend this video:
Check Your Knowledge
To help you with your studying, we’ve included some practice questions for this module. These questions do not necessarily address all content in this module. They are intended as practice, and you are responsible for all of the content in this module even if there is no associated practice question. To promote deeper engagement with the material, we encourage you to create some questions of your own for your practice. You can then also return to these self-generated questions later in the course to test yourself.
Vocabulary
- Afferent nerves
- Nerves that carry messages to the brain or spinal cord.
- Agnosias
- An inability to recognize objects, words, or faces, resulting from damage to Wernicke’s area.
- Aphasia
- An inability to produce or understand words, resulting from damage to Broca’s area.
- Arcuate fasciculus
- A fiber tract that connects Wernicke’s and Broca’s speech areas.
- Autonomic nervous system
- A part of the peripheral nervous system that connects to glands and smooth muscles. Consists of sympathetic and parasympathetic divisions.
- Broca’s area
- An area in the frontal lobe of the left hemisphere. Implicated in language production.
- Central sulcus
- The major fissure that divides the frontal and the parietal lobes.
- Cerebellum
- A nervous system structure behind and below the cerebrum. Controls the coordination of movement as well as balance, equilibrium, and muscle tone.
- Cerebrum
- Consists of left and right hemispheres, which sit at the top of the nervous system and engage in a variety of higher-order functions.
- Cingulate gyrus
- A medial cortical portion of the nervous tissue that is a part of the limbic system.
- Computerized axial tomography
- A noninvasive brain-scanning procedure that uses X-ray absorption around the head.
- Ectoderm
- The outermost layer of a developing fetus.
- Efferent nerves
- Nerves that carry messages from the brain to glands and organs in the periphery.
- Electroencephalography
- A technique that is used to measure gross electrical activity of the brain by placing electrodes on the scalp.
- Event-related potential
- A physiological measure of the large electrical changes in the brain produced by sensory stimulation or motor responses.
- Forebrain
- A part of the nervous system that contains the cerebral hemispheres, thalamus, and hypothalamus.
- Fornix
- (plural form, fornices) A nerve fiber tract that connects the hippocampus to mammillary bodies.
- Frontal lobe
- The most forward region (close to forehead) of the cerebral hemispheres.
- Functional magnetic resonance imaging
- (or fMRI) A noninvasive brain-imaging technique that registers changes in blood flow in the brain during a given task (also see magnetic resonance imaging).
- Globus pallidus
- A nucleus of the basal ganglia.
- Gray matter
- Composes the bark or the cortex of the cerebrum and consists of the cell bodies of the neurons (see also white matter).
- Gyrus
- (plural form, gyri) A bulge that is raised between or among fissures of the convoluted brain.
- Hippocampus
- (plural form, hippocampi) A nucleus inside (medial) the temporal lobe implicated in learning and memory.
- Homo habilis
- A human ancestor, the “handy man,” that lived two million years ago.
- Homo sapiens
- Modern man, the only surviving form of the genus Homo.
- Hypothalamus
- Part of the diencephalon. Regulates biological drives in conjunction with the pituitary gland.
- Immunocytochemistry
- A method of staining tissue, including brain tissue, using antibodies.
- Lateral geniculate nucleus
- (or LGN) A nucleus in the thalamus that is innervated by the optic nerves and sends signals to the visual cortex in the occipital lobe.
- Lateral sulcus
- The major fissure that delineates the temporal lobe below the frontal and the parietal lobes.
- Lesion studies
- A surgical method in which a part of an animal’s brain is removed to study the effects of the removal on behavior or function.
- Limbic system
- A loosely defined network of nuclei in the brain involved with learning and emotion.
- Magnetic resonance imaging
- (or MRI) A noninvasive brain-imaging technique that uses magnetic energy to generate brain images (also see fMRI).
- Magnification factor
- Cortical space projected by an area of sensory input (e.g., mm of cortex per degree of visual field).
- Medulla oblongata
- An area just above the spinal cord that processes breathing, digestion, heart and blood vessel function, swallowing, and sneezing.
- Neural crest
- A set of primordial neurons that migrate outside the neural tube and give rise to sensory and autonomic neurons in the peripheral nervous system.
- Neural induction
- A process that causes the formation of the neural tube.
- Neuroblasts
- Brain progenitor cells that asymmetrically divide into other neuroblasts or nerve cells.
- Neuroepithelium
- The lining of the neural tube.
- Occipital lobe
- The back part of the cerebrum, which houses the visual areas.
- Parasympathetic nervous system
- A division of the autonomic nervous system that is slower than its counterpart—that is, the sympathetic nervous system—and works in opposition to it. Generally engaged in “rest and digest” functions.
- Parietal lobe
- An area of the cerebrum just behind the central sulcus that is engaged with somatosensory and gustatory sensation.
- Pons
- A bridge that connects the cerebral cortex with the medulla, and reciprocally transfers information back and forth between the brain and the spinal cord.
- Positron Emission Tomography
- (or PET) An invasive procedure that captures brain images with positron emissions from the brain after the individual has been injected with radio-labeled isotopes.
- Primary Motor Cortex
- A strip of cortex just in front of the central sulcus that is involved with motor control.
- Primary Somatosensory Cortex
- A strip of cerebral tissue just behind the central sulcus engaged in sensory reception of bodily sensations.
- Rostrocaudal
- A front-back plane used to identify anatomical structures in the body and the brain.
- Somatic nervous system
- A part of the peripheral nervous system that uses cranial and spinal nerves in volitional actions.
- Spina bifida
- A developmental defect of the spinal cord, in which the neural tube fails to close caudally.
- Sulcus
- (plural form, sulci) The crevices or fissures formed by convolutions in the brain.
- Sympathetic nervous system
- A division of the autonomic nervous system that is faster than its counterpart, the parasympathetic nervous system, and works in opposition to it. Generally engaged in “fight or flight” functions.
- Temporal lobe
- An area of the cerebrum that lies below the lateral sulcus; it contains auditory and olfactory (smell) projection regions.
- Thalamus
- A part of the diencephalon that works as a gateway for incoming and outgoing information.
- Transduction
- A process in which physical energy is converted into neural energy.
- Wernicke’s area
- A language area in the temporal lobe where linguistic information is comprehended (Also see Broca’s area).
- White matter
- Regions of the nervous system that represent the axons of the nerve cells; whitish in color because of myelination of the nerve cells.
- Working memory
- Short-term, transitory memory processed in the hippocampus.
How to cite this Chapter using APA Style:
Ahmad, A. (2019). The nervous system. Adapted for use by Queen's University. Original chapter in R. Biswas-Diener & E. Diener (Eds.), Noba textbook series: Psychology. Champaign, IL: DEF publishers. Retrieved from http://noba.to/wnf72q34
Copyright and Acknowledgment:
This material is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit: http://creativecommons.org/licenses/by-nc-sa/4.0/deed.en_US.
This material is attributed to the Diener Education Fund (copyright © 2018) and can be accessed via this link: http://noba.to/wnf72q34.
Additional information about the Diener Education Fund (DEF) can be accessed here.