5 Research Designs
Anne Baird
We have just been looking at models of the research process and goals of research. The following is a comparison of research methods or techniques used to describe, explain, or evaluate. Each of these designs has strengths and weaknesses and is sometimes used in combination with other designs within a single study.
Observational studies involve watching and recording the actions of participants. This may take place in a natural setting, such as observing children at play in a park, or from behind a one-way mirror while children play in a laboratory playroom. The researcher may follow a checklist and record the frequency and duration of events (perhaps how many conflicts occur among 2-year-olds), or may observe and record as much as possible about an event as a participant (such as attending an Alcoholics Anonymous meeting and recording the slogans on the walls, the structure of the meeting, the expressions commonly used, etc.). The researcher may be a participant or a non-participant. What would be the strengths of being a participant? What would be the weaknesses? Consider the strengths and weaknesses of not participating. In general, observational studies have the strength of allowing the researcher to see how people actually behave rather than relying on self-report; what people do and what they say they do are often very different. A major weakness of observational studies is that they do not allow the researcher to explain causal relationships. Still, observational studies are useful and widely used when studying children, who tend to change their behavior when they know they are being watched (known as the Hawthorne effect) and may not respond well to surveys.
Experiments are designed to test hypotheses (specific statements about the relationship between variables) in a controlled setting in an effort to explain how certain factors or events produce outcomes. A variable is anything that changes in value. Concepts are operationalized, or transformed into variables, in research, which means that the researcher must specify exactly what is going to be measured in the study. For example, if we are interested in studying marital satisfaction, we have to specify what marital satisfaction really means or what we are going to use as an indicator of marital satisfaction. What is something measurable that would indicate some level of marital satisfaction? Would it be the amount of time couples spend together each day? Eye contact during a discussion about money? Or maybe a subject’s score on a marital satisfaction scale? Each of these is measurable, but they may not be equally valid or accurate indicators of marital satisfaction. What do you think? These are the kinds of considerations researchers must make when working through the design.
Three conditions must be met in order to establish cause and effect. Experimental designs are useful in meeting these conditions.
- The independent and dependent variables must be related. In other words, when one is altered, the other changes in response. (The independent variable is something altered or introduced by the researcher. The dependent variable is the outcome or the factor affected by the introduction of the independent variable. For example, if we are looking at the impact of exercise on stress levels, the independent variable would be exercise; the dependent variable would be stress.)
- The cause must come before the effect. Experiments involve measuring subjects on the dependent variable before exposing them to the independent variable (establishing a baseline). So we would measure the subjects’ level of stress before introducing exercise and then again after the exercise to see if there has been a change in stress levels. (Observational and survey research does not always allow us to look at the timing of these events which makes understanding causality problematic with these designs.)
- The cause must be isolated. The researcher must ensure that no outside, perhaps unknown variables are actually causing the effect we see. The experimental design helps make this possible. In an experiment, we would make sure that our subjects’ diets were held constant throughout the exercise program. Otherwise, diet might really be creating the change in stress level rather than exercise.
A basic experimental design involves beginning with a sample (or subset of a population) and randomly assigning subjects to one of two groups: the experimental group or the control group. The experimental group is the group that is going to be exposed to an independent variable or condition the researcher is introducing as a potential cause of an event. The control group is going to be used for comparison and is going to have the same experience as the experimental group but will not be exposed to the independent variable. After exposing the experimental group to the independent variable, the two groups are measured again to see if a change has occurred. If so, we are in a better position to suggest that the independent variable caused the change in the dependent variable. The basic experimental model looks like this:
| Sample is randomly assigned to one of the groups below | Measure DV | Introduce IV | Measure DV |
| --- | --- | --- | --- |
| Experimental Group | X | X | X |
| Control Group | X | – | X |
The major advantage of the experimental design is that of helping to establish cause and effect relationships. A disadvantage of this design is the difficulty of translating much of what concerns us about human behavior into a laboratory setting. I hope this brief description of experimental design helps you appreciate both the difficulty and the rigor of conducting an experiment.
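The logic of the design above can be sketched as a small simulation. This is a minimal illustration, not a real study: the sample size, stress scores, and the assumed effect of exercise are all invented numbers.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical sample of 20 subjects, each with a baseline stress score (0-100).
sample = [{"id": i, "stress_before": random.uniform(40, 80)} for i in range(20)]

# Random assignment to the experimental group or the control group.
random.shuffle(sample)
experimental = sample[:10]
control = sample[10:]

# Introduce the IV (exercise) to the experimental group only. For illustration
# we simply pretend exercise lowers stress by about 15 points, plus noise.
for subject in experimental:
    subject["stress_after"] = subject["stress_before"] - 15 + random.uniform(-5, 5)
for subject in control:  # same experience, no IV: only noise
    subject["stress_after"] = subject["stress_before"] + random.uniform(-5, 5)

def mean_change(group):
    """Average change on the DV (stress) from baseline to post-measurement."""
    return sum(s["stress_after"] - s["stress_before"] for s in group) / len(group)

print(f"Experimental group mean change: {mean_change(experimental):+.1f}")
print(f"Control group mean change:      {mean_change(control):+.1f}")
```

Because both groups were measured before and after, and only random noise differs between them apart from the IV, a clearly larger drop in the experimental group supports a causal interpretation.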
Case studies involve exploring a single case or situation in great detail. Information may be gathered with the use of observation, interviews, testing, or other methods to uncover as much as possible about a person or situation. Case studies are helpful when investigating unusual situations such as brain trauma or children reared in isolation. They are often used by clinicians, who conduct case studies as part of their normal practice when gathering information about a client or patient coming in for treatment. Case studies can be used to explore areas about which little is known and can provide rich detail about situations or conditions. However, the findings from case studies cannot be generalized or applied to larger populations; this is because cases are not randomly selected and no control group is used for comparison. (Read “The Man Who Mistook His Wife for a Hat” by Dr. Oliver Sacks as a good example of the case study approach.)
Correlational studies look at relationships or associations between two or more variables. Correlational studies do not allow us to draw conclusions about cause and effect, but they often help us form hunches or hypotheses about causal relationships.
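A correlation can be computed with a few lines of arithmetic. The paired observations below are invented for illustration (weekly exercise hours and a stress score); a real study would collect them from participants.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical data: more exercise tends to go with lower stress scores.
exercise_hours = [0, 1, 2, 3, 4, 5, 6, 7]
stress_scores = [80, 75, 72, 65, 60, 58, 50, 45]

r = pearson_r(exercise_hours, stress_scores)
print(f"r = {r:.2f}")  # a strong negative association
```

Even a correlation this strong does not establish cause and effect: perhaps less-stressed people simply find more time to exercise, or a third variable (such as health) drives both.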
Surveys are familiar to most people because they are so widely used. Surveys enhance accessibility to subjects because they can be conducted in person, over the phone, through the mail, or online. A survey involves asking a standard set of questions to a group of subjects. In a highly structured survey, subjects are forced to choose from a response set such as “strongly disagree, disagree, undecided, agree, strongly agree”; or “0, 1-5, 6-10, etc.” Surveys are commonly used by sociologists, marketing researchers, political scientists, therapists, and others to gather information on many independent and dependent variables in a relatively short period of time. Surveys typically yield surface information on a wide variety of factors, but may not allow for in-depth understanding of human behavior. Of course, surveys can be designed in a number of ways. They may include forced choice questions and open-ended questions in which the researcher allows the respondent to describe or give details about certain events. One of the most difficult aspects of designing a good survey is wording questions in an unbiased way and asking the right questions so that respondents can give a clear response rather than choosing “undecided” each time. Knowing that 30% of respondents are undecided is of little use! So a lot of time and effort should go into the construction of survey items. One of the benefits of having forced choice items is that each response is coded so that the results can be quickly entered and analyzed using statistical software. Analysis takes much longer when respondents give lengthy responses that must be analyzed in a different way. Surveys are useful in examining stated values, attitudes, opinions, and reporting on practices. However, they are based on self-report, or what people say they do rather than what they are observed doing, and this can limit accuracy.
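Coding forced-choice items is straightforward to sketch. The response labels, numeric codes, and answers below are all hypothetical; they simply show how a Likert-type item can be turned into numbers for analysis.

```python
# Assumed numeric codes for a five-point forced-choice response set.
codes = {
    "strongly disagree": 1,
    "disagree": 2,
    "undecided": 3,
    "agree": 4,
    "strongly agree": 5,
}

# Ten invented responses to a single survey item.
responses = ["agree", "undecided", "strongly agree", "agree", "disagree",
             "undecided", "agree", "undecided", "undecided", "strongly disagree"]

coded = [codes[r] for r in responses]  # ready for statistical software
mean_score = sum(coded) / len(coded)
undecided_share = responses.count("undecided") / len(responses)

print(f"Mean item score: {mean_score:.1f}")
# A large "undecided" share is a warning sign of a poorly worded item.
print(f"Undecided: {undecided_share:.0%}")
```

This is also why item wording matters: in this invented example, 40% of respondents chose “undecided,” which tells the researcher very little.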
Secondary analysis (archival data) involves analyzing information that has already been collected or examining documents or media to uncover attitudes, practices or preferences. There are a number of data sets available to those who wish to conduct this type of research. For example, Canadian Census Data is available through Statistics Canada and is widely used to look at trends and changes taking place within the country. You can go to https://www12.statcan.gc.ca/census-recensement/index-eng.cfm to check it out. You can find similar data for the U.S. at http://www.census.gov. The researcher conducting secondary analysis does not have to recruit subjects but does need to know the quality of the information collected in the original study.
Content analysis involves looking at media such as old texts, pictures, commercials, lyrics, or other materials to explore patterns or themes in culture. An example of content analysis is the classic history of childhood by Ariès (1962), “Centuries of Childhood,” or the analysis of television commercials for sexual or violent content. Passages of text or broadcast programs can also be randomly sampled for analysis. Again, one advantage of analyzing work such as this is that the researcher does not have to go through the time and expense of finding respondents, but the researcher cannot know how accurately the media reflect the actions and sentiments of the population.
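The quantitative side of content analysis, counting how often coded themes appear, can be sketched briefly. The transcripts and theme keywords below are invented for illustration.

```python
import re
from collections import Counter

# Hypothetical advertisement transcripts to be coded for themes.
transcripts = [
    "Feel strong. Be strong. Strength you can trust.",
    "Trust the power of family. Family comes first.",
    "Power and strength for the whole family.",
]

# An assumed coding scheme: each theme maps to a set of keyword indicators.
themes = {
    "strength": {"strong", "strength"},
    "family": {"family"},
    "power": {"power"},
}

# Tokenize all transcripts into lowercase words and tally them.
words = [w for t in transcripts for w in re.findall(r"[a-z]+", t.lower())]
counts = Counter(words)

for theme, keywords in themes.items():
    total = sum(counts[k] for k in keywords)
    print(f"{theme}: {total}")
```

A real content analysis would also need a defensible sampling plan for the materials and checks that different coders apply the scheme consistently.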
Glossary
observational study: research in which the experimenter passively observes the behavior of the participants without any attempt at intervention or manipulation of the behaviors being observed. Such studies typically involve observation of cases under naturalistic conditions rather than the random assignment of cases to experimental conditions. Specially trained individuals record activities, events, or processes as precisely and completely as possible without personal interpretation.
Hawthorne effect: the effect on the behavior of individuals of knowing that they are being observed or are taking part in research. The Hawthorne effect is typically positive and is named after the Western Electric Company’s Hawthorne Works plant in Cicero, Illinois, where the phenomenon was first observed during a series of studies on worker productivity conducted from 1924 to 1932. These Hawthorne Studies began as an investigation of the effects of illumination conditions, monetary incentives, and rest breaks on productivity, but evolved into a much wider consideration of the role of worker attitudes, supervisory style, and group dynamics. The human relations theory of management is usually considered to have developed from these studies.
experiment: a series of observations conducted under controlled conditions to study a relationship with the purpose of drawing causal inferences about that relationship. An experiment involves the manipulation of an independent variable, the measurement of a dependent variable, and the exposure of various participants to one or more of the conditions being studied. Random selection of participants and their random assignment to conditions also are necessary in experiments.
hypothesis: an empirically testable proposition about some fact, behavior, relationship, or the like, usually based on theory, that states an expected outcome resulting from specific conditions or assumptions.
variable: a condition in an experiment or a characteristic of an entity, person, or object that can take on different categories, levels, or values and that can be quantified (measured). For example, test scores and ratings assigned by judges are variables. Numerous types of variables exist, including categorical variables, dependent variables, independent variables, mediators, moderators, and random variables.
operational definition: a description of something in terms of the operations (procedures, actions, or processes) by which it could be observed and measured. For example, the operational definition of anxiety could be in terms of a test score, withdrawal from a situation, or activation of the sympathetic nervous system. The process of creating an operational definition is known as operationalization.
experimental group: a group of participants in a research study who are exposed to a particular manipulation of the independent variable (i.e., a particular treatment or treatment level). The responses of the experimental group are compared to the responses of a control group, other experimental groups, or both.
control group: a comparison group in a study whose members receive either no intervention at all or some established intervention. The responses of those in the control group are compared with the responses of participants in one or more experimental groups that are given the new treatment being evaluated.
independent variable: the variable in an experiment that is specifically manipulated or is observed to occur before the dependent, or outcome, variable, in order to assess its effect or influence.
dependent variable: the outcome that is observed to occur or change after the occurrence or variation of the independent variable in an experiment, or the effect that one wants to predict or explain in correlational research.
case study: an in-depth investigation of a single individual, family, event, or other entity. Multiple types of data (psychological, physiological, biographical, environmental) are assembled, for example, to understand an individual’s background, relationships, and behavior. Although case studies allow for intensive analysis of an issue, they are limited in the extent to which their findings may be generalized.
generalizability: the extent to which results or findings obtained from a sample are applicable to a broader population. For example, a theoretical model of change would be said to have high generalizability if it applied to numerous behaviors (e.g., smoking, diet, substance use, exercise) and varying populations (e.g., young children, teenagers, middle-age and older adults). A finding that has greater generalizability also is said to have greater external validity, in that conclusions pertain to situations beyond the original study.
correlational study: a type of study in which relationships between variables are simply observed without any control over the setting in which those relationships occur or any manipulation by the researcher. Field research often takes this form. For example, consider a researcher assessing teaching style. They could use a correlational approach by attending classes on a college campus that are each taught in a different way (e.g., lecture, interactive, computer aided) and noting any differences in student learning that arise.
survey: a study in which a group of participants is selected from a population and data about or opinions from those participants are collected, measured, and analyzed. Information typically is gathered by interview or self-report questionnaire, and the results thus obtained may then be extrapolated to the whole population.
fixed-alternative question: a test or survey item in which several possible responses are given and participants are asked to pick the correct response or the one that best matches their preference. An example of a fixed-alternative question is “Which of the following most closely corresponds to your age: 12 or younger, 13 to 19, 20 to 39, 40 to 59, 60 to 79, or 80 or older?” A fixed-alternative question is sometimes referred to as a closed question, although this can also refer to any inquiry requesting a short definite answer (e.g., “How old are you?”). Also called fixed-choice question; forced-choice question; multiple-choice question.
open-ended question: a question that cannot be answered with a simple “yes” or “no” or other fixed response; instead, respondents are asked to answer in their own words.
self-report: a statement or series of answers to questions that an individual provides about their state, feelings, thoughts, beliefs, past behaviors, and so forth. Self-report methods rely on the honesty and self-awareness of the participant (see self-report bias) and are used especially to measure behaviors or traits that cannot easily be directly observed by others.
secondary analysis: re-analysis of data already collected in a previous study, by a different researcher normally wishing to address a new research question.
archival research: the use of books, journals, historical documents, and other existing records or data available in storage in scientific research. Archival research allows for unobtrusive observation of human activity in natural settings and permits the study of phenomena that otherwise cannot easily be investigated. A persistent drawback, however, is that causal inferences are always more tentative than those provided by laboratory experiments.
content analysis: 1. a systematic, quantitative procedure for coding the themes in qualitative material, such as projective-test responses, propaganda, or fiction.
2. a systematic, quantitative study of verbally communicated material (e.g., articles, speeches, films) by determining the frequency of specific ideas, concepts, or terms. Also called quantitative semantics.