2.3 Analyzing Findings and Experimental Design
Learning Objectives
By the end of this section, you will be able to:
- Explain what a correlation coefficient tells us about the relationship between variables
- Recognize that correlation does not indicate a cause-and-effect relationship between variables
- Discuss our tendency to look for relationships between variables that do not really exist
- Explain random sampling and assignment of participants into experimental and control groups
- Discuss how experimenter or participant bias could affect the results of an experiment
- Identify independent and dependent variables
Did you know that as ice cream sales increase, so does the overall rate of crime? Is it possible that indulging in your favorite flavor of ice cream could send you on a crime spree? Or, after committing a crime, do you think you might decide to treat yourself to a cone? There is no question that a relationship exists between ice cream and crime (e.g., Harper, 2013), but it would be pretty foolish to decide that one thing actually caused the other to occur. It is much more likely that both ice cream sales and crime rates are related to the temperature outside. When the temperature is warm, there are lots of people out of their houses, interacting with each other, getting annoyed with one another, and sometimes committing crimes. Also, when it is warm outside, we are more likely to seek a cool treat like ice cream.
How do we determine if there is indeed a relationship between two things? And when there is a relationship, how can we discern whether it is a coincidence, the result of a third variable (like temperature), or true cause-and-effect?
CORRELATIONAL RESEARCH
Correlation means that there is a relationship between two or more variables (such as ice cream consumption and crime), but this relationship does not necessarily imply cause and effect. When two variables are correlated, it simply means that as one variable changes, so does the other. We can measure correlation by calculating a statistic known as a correlation coefficient. A correlation coefficient is a number from -1 to +1 that indicates the strength and direction of the relationship between variables. The correlation coefficient is usually represented by the letter r.
The number portion of the correlation coefficient indicates the strength of the relationship. The closer the number is to 1 (regardless of sign), the more strongly related the variables are, and the more predictable changes in one variable will be as the other variable changes. The closer the number is to zero, the weaker the relationship, and the less predictable the relationship between the variables becomes. For instance, a correlation coefficient of 0.9 indicates a far stronger relationship than a correlation coefficient of 0.3. Furthermore, a correlation of -0.8 is stronger than a correlation of 0.4: even though it is negative, -0.8 is farther from zero than 0.4, so it describes the more predictable relationship. If the variables are not related to one another at all, the correlation coefficient is 0.
The sign of the correlation coefficient indicates the direction of the relationship (figure below). A positive correlation means that the variables move in the same direction: as one variable increases, so does the other, and as one variable decreases, so does the other. Ice cream sales and crime rates are positively correlated in that days with high ice cream sales also tend to have high crime rates. Ice cream sales are also positively correlated with temperature, because hotter days mean more ice cream sold.
A negative correlation means that the variables move in opposite directions. If two variables are negatively correlated, a decrease in one variable is associated with an increase in the other and vice versa. In a real-world example, student researchers at the University of Minnesota found a weak negative correlation (r = -0.29) between the average number of days per week that students got fewer than 5 hours of sleep and their GPA (Lowry, Dean, & Manders, 2010). Keep in mind that a negative correlation is not the same as no correlation. For example, we would probably find no correlation between hours of sleep and shoe size.
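The text itself contains no formulas, but the correlation coefficient described above is straightforward to compute. The following Python sketch, using made-up daily numbers for the ice cream and sleep examples, shows how r captures both strength and direction:

```python
import math

def pearson_r(xs, ys):
    """Compute the Pearson correlation coefficient r for two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # r = covariance of x and y, divided by the product of their spreads
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    spread_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    spread_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (spread_x * spread_y)

# Hypothetical data: hotter days tend to have higher ice cream sales
temps = [60, 65, 70, 75, 80, 85, 90]
sales = [20, 26, 30, 40, 38, 52, 60]
print(round(pearson_r(temps, sales), 2))   # close to +1: strong positive correlation

# Hypothetical data: more short-sleep nights per week, lower GPA
short_sleep_days = [0, 1, 2, 3, 4, 5]
gpa = [3.9, 3.7, 3.5, 3.6, 3.1, 3.0]
print(round(pearson_r(short_sleep_days, gpa), 2))  # negative correlation
```

The numbers here are invented for illustration; real data, like the r = -0.29 sleep study above, are far noisier, which is why weak correlations are so common in behavioral research.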
As mentioned earlier, correlations have predictive value. Imagine that you are on the admissions committee of a major university. You are faced with a huge number of applications, but you are able to accommodate only a small percentage of the applicant pool. How might you decide who should be admitted? You might try to correlate your current students’ college GPA with their scores on standardized tests like the SAT or ACT. By observing which correlations were strongest for your current students, you could use this information to predict relative success of those students who have applied for admission into the university.
Manipulate this interactive scatterplot to practice your understanding of positive and negative correlation.
Correlation Does Not Indicate Causation
Correlational research is useful because it allows us to discover the strength and direction of relationships that exist between two variables. However, correlation is limited because establishing the existence of a relationship tells us little about cause and effect. While variables are sometimes correlated because one does cause the other, it could also be that some other factor, a confounding variable, is actually causing the systematic movement in our variables of interest. In the ice cream/crime rate example mentioned earlier, temperature is a confounding variable that could account for the relationship between the two variables.
Even when we cannot point to clear confounding variables, we should not assume that a correlation between two variables implies that one variable causes changes in another. This can be frustrating when a cause-and-effect relationship seems clear and intuitive. Think back to our discussion of the research done by the American Cancer Society and how their research projects were some of the first demonstrations of the link between smoking and cancer. It seems reasonable to assume that smoking causes cancer, but if we were limited to correlational research, we would be overstepping our bounds by making this assumption.
Unfortunately, people mistakenly make claims of causation after conducting correlations all the time. Such claims are especially common in advertisements and news stories. For example, recent research found that people who eat cereal on a regular basis achieve healthier weights than those who rarely eat cereal (Frantzen, Treviño, Echon, Garcia-Dominic, & DiMarco, 2013; Barton et al., 2005). Guess how the cereal companies report this finding. Does eating cereal really cause an individual to maintain a healthy weight, or are there other possible explanations? For example, someone at a healthy weight may be more likely to eat a healthy breakfast regularly than someone who is obese or someone who skips meals in an attempt to diet. While correlational research is invaluable in identifying relationships among variables, a major limitation is the inability to establish causality. Psychologists want to make statements about cause and effect, but the only way to do that is to conduct an experiment. The next section describes how scientific experiments incorporate methods that eliminate, or control for, alternative explanations, allowing researchers to explore how changes in one variable cause changes in another.
Does eating cereal really cause someone to be a healthy weight? (credit: Time Skillern)
Illusory Correlations
The temptation to make cause-and-effect statements based on correlational research is not the only way we tend to misinterpret data. We also tend to make the mistake of illusory correlations. Illusory correlations, or false correlations, occur when people believe that relationships exist between two things when no such relationship exists. One well-known illusory correlation is the supposed effect that the moon’s phases have on human behavior. Many people passionately assert that human behavior is affected by the phase of the moon, and specifically, that people act strangely when the moon is full (see figure).
Many people believe that a full moon makes people behave oddly. (credit: Cory Zanker)
There is no denying that the moon exerts a powerful influence on our planet. The ebb and flow of the ocean’s tides are tightly tied to the gravitational forces of the moon. Many people believe, therefore, that it is logical that we are affected by the moon as well. After all, our bodies are largely made up of water. A meta-analysis of nearly 40 studies consistently demonstrated, however, that the relationship between the moon and our behavior does not exist (Rotton & Kelly, 1985). While we may pay more attention to odd behavior during the full phase of the moon, the rates of odd behavior remain constant throughout the lunar cycle.
Why are we so eager to believe in illusory correlations like this? Often we read or hear about them and simply accept the information as valid. Or, we have a hunch about how something works and then look for evidence to support that hunch, ignoring evidence that would tell us our hunch is false; this is known as confirmation bias. Other times, we find illusory correlations based on the information that comes most easily to mind, even if that information is severely limited. And while we may feel confident that we can use these relationships to better understand and predict the world around us, illusory correlations can have significant drawbacks. For example, research suggests that illusory correlations—in which certain behaviors are inaccurately attributed to certain groups—are involved in the formation of prejudicial attitudes that can ultimately lead to discriminatory behavior (Fiedler, 2004).
CAUSALITY: CONDUCTING EXPERIMENTS AND USING THE DATA
As you’ve learned, the only way to establish that there is a cause-and-effect relationship between two variables is to conduct a scientific experiment. Experiments allow us to meet the three basic requirements for making claims about causal relationships: (1) temporal ordering, (2) ruling out third variables, and (3) covariance.
Temporal ordering means that the cause must always come before the effect. If the effect can happen before the cause, then the cause isn’t responsible for the outcome. For example, if there was crime in the city before ice cream was sold there, then ice cream can’t be the cause of crime. Experiments allow researchers to manipulate when the proposed cause occurs to see if the effect always comes after it.
The third variable problem can also be called the confounding variable problem. As discussed before, a third, hidden factor sometimes influences both the proposed cause and the proposed effect. For example, temperature increases both ice cream sales and crime rates. Experiments allow researchers to control many relevant variables in a variety of ways to reduce the influence of third variables.
Covariance simply means that if a cause leads to an effect, then the two should be correlated. If there were no relationship between ice cream sales and crime, then there would be no reason to assume that ice cream affected the crime rate at all. Experiments allow researchers to see if causes and effects always occur together.
Only well-designed experiments allow researchers to make claims about causation. Remember, fancy statistics are only as good as the data collected from the research design, so don’t let impressive-sounding statistics make claims that aren’t supported by the data that the researchers actually have!
The Experimental Hypothesis
In order to conduct an experiment, a researcher must have a specific hypothesis to be tested. As you’ve learned, hypotheses can be formulated either through direct observation of the real world or after careful review of previous research. For example, if you think that children should not be allowed to watch violent programming on television because doing so would cause them to behave more violently, then you have basically formulated a hypothesis—namely, that watching violent television programs causes children to behave more violently. How might you have arrived at this particular hypothesis? You may have observed young children mimicking the behaviors that they watch on television. Or, you may have read an article about the association between hours watching television and aggressive behavior. The correlational design of the study didn’t allow the original researchers to make causal inferences, and you want to see if there really is a causal relationship or if there is something else that might be at play. For example, aggressive kids might choose to watch more violent TV than less aggressive children. Or perhaps kids who watch more TV have parents who don’t pay as much attention to them, so the children act out to get attention.
How Might You Test Your Hypothesis?
Designing an Experiment
The most basic experimental design involves two groups: the experimental group and the control group. The two groups are designed to be the same except for one difference—the experimental manipulation. The experimental group gets the experimental manipulation—that is, the treatment or variable being tested (in this case, violent TV images)—and the control group does not. Since the experimental manipulation is the only difference between the experimental and control groups, we can be sure that any differences between the two are due to the experimental manipulation rather than chance.
In our example of how violent television programming might affect violent behavior in children, we have the experimental group view violent television programming for a specified time and then measure their violent behavior. It is important for the control group to be treated similarly to the experimental group, with the exception that the control group does not receive the experimental manipulation. Therefore, we have the control group watch nonviolent television programming for the same amount of time as the experimental group, and then we measure their violent behavior in the same way.
We also need to precisely define, or operationalize, what is considered violent and nonviolent. An operational definition is a description of how we will measure our variables, and it is important in allowing others to understand exactly how and what a researcher measures in a particular experiment. In operationalizing violent behavior, we might choose to count only physical acts like kicking or punching as instances of this behavior, or we may also choose to include angry verbal exchanges. Whatever we determine, it is important that we operationalize violent behavior in such a way that anyone who hears about our study for the first time knows exactly what we mean by violence. This aids people’s ability to interpret our data as well as their capacity to repeat our experiment should they choose to do so.
Once we have operationalized what is considered violent television programming and what is considered violent behavior from our experiment participants, we need to establish how we will run our experiment. In this case, we might have participants watch a 30-minute television program (either violent or nonviolent, depending on their group membership) before sending them out to a playground for an hour where their behavior is observed and the number and type of violent acts is recorded.
Ideally, the people who observe and record the children’s behavior are unaware of who was assigned to the experimental or control group, in order to control for experimenter bias. Experimenter bias refers to the possibility that a researcher’s expectations might skew the results of the study. Remember, conducting an experiment requires a lot of planning, and the people involved in the research project have a vested interest in supporting their hypotheses. If the observers knew which child was in which group, it might influence how much attention they paid to each child’s behavior as well as how they interpreted that behavior. By being blind to which child is in which group, we protect against those biases. This situation is a single-blind study, meaning that one group (in this case, the participants) is unaware of which group they are in (experimental or control) while the researcher who developed the experiment knows which participants are in each group.
In a double-blind study, both the researchers and the participants are blind to group assignments. Why would a researcher want to run a study where no one knows who is in which group? Because by doing so, we can control for both experimenter and participant expectations. The placebo effect occurs when people’s expectations or beliefs influence their experience in a given situation. In other words, simply expecting something to happen can actually make it happen.
The placebo effect is commonly described in terms of testing the effectiveness of a new medication. Imagine that you work in a pharmaceutical company, and you think you have a new drug that is effective in treating depression. To demonstrate that your medication is effective, you run an experiment with two groups: The experimental group receives the medication, and the control group does not. But you don’t want participants to know whether they received the drug or not.
Why is that? Now, imagine that you are a participant in this study. You have just taken a pill that you think will improve your mood. Because you expect the pill to have an effect, you might feel better simply because you took the pill and not because of any drug actually contained in the pill—this is the placebo effect.
To make sure that any effects on mood are due to the drug and not due to expectations, the control group receives a placebo (in this case a sugar pill). Now everyone gets a pill, and once again neither the researcher nor the experimental participants know who got the drug and who got the sugar pill. Any differences in mood between the experimental and control groups can now be attributed to the drug itself rather than to experimenter bias or participant expectations (see figure).
Providing the control group with a placebo treatment protects against bias caused by expectancy. (credit: Elaine and Arthur Shapiro)
Independent and Dependent Variables
In a research experiment, we strive to study whether changes in one thing cause changes in another. To achieve this, we must pay attention to two important variables, or things that can be changed, in any experimental study: the independent variable and the dependent variable. An independent variable is manipulated or controlled by the experimenter. In a well-designed experimental study, the independent variable is the only important difference between the experimental and control groups. In our example of how violent television programs affect children’s display of violent behavior, the independent variable is the type of program—violent or nonviolent—viewed by participants in the study (see figure). A dependent variable is what the researcher measures to see how much effect the independent variable had. In our example, the dependent variable is the number of violent acts displayed by the experimental participants.
In an experiment, manipulation of the independent variable is expected to result in changes in the dependent variable. (credit “automatic weapon”: modification of work by Daniel Oines; credit “toy gun”: modification of work by Emran Kassim)
We expect that the dependent variable will change as a function of the independent variable. In other words, the dependent variable depends on the independent variable. Returning to our example, what effect, if any, does watching a half hour of violent television programming or nonviolent television programming (independent variable) have on the number of incidents of physical aggression displayed on the playground (dependent variable)?
Selecting and Assigning Experimental Participants
Now that our study is designed, we need to obtain a sample of individuals to include in our experiment. Our study involves human participants so we need to determine who to include. Participants are the subjects of psychological research, and as the name implies, individuals who are involved in psychological research actively participate in the process. Often, psychological research projects rely on college students to serve as participants. In fact, the vast majority of research in psychology subfields has historically involved students as research participants (Sears, 1986; Arnett, 2008). But are college students truly representative of the general population? College students tend to be younger, more educated, more liberal, and less diverse than the general population. Although using students as test subjects is an accepted practice, relying on such a limited pool of research participants can be problematic because it is difficult to generalize findings to the larger population.
Our hypothetical experiment involves children, and we must first generate a sample of child participants. Samples are used because populations are usually too large to reasonably involve every member in our particular experiment (figure). If possible, we should use a random sample (there are other types of samples, but for the purposes of this chapter, we will focus on random samples). A random sample is a subset of a larger population in which every member of the population has an equal chance of being selected. Random samples are preferred because if the sample is large enough we can be reasonably sure that the participating individuals are representative of the larger population. This means that the percentages of characteristics in the sample—sex, ethnicity, socioeconomic level, and any other characteristics that might affect the results—are close to those percentages in the larger population.
In our example, let’s say we decide our population of interest is fourth graders. But all fourth graders is a very large population, so we need to be more specific; instead we might say our population of interest is all fourth graders in a particular city. We should include students from various income brackets, family situations, races, ethnicities, religions, and geographic areas of town. With this more manageable population, we can work with the local schools in selecting a random sample of around 200 fourth graders who we want to participate in our experiment.
In summary, because we cannot test all of the fourth graders in a city, we want to find a group of about 200 that reflects the composition of that city. With a representative group, we can generalize our findings to the larger population without fear of our sample being biased in some way.
Researchers may work with (a) a large population or (b) a sample group that is a subset of a large population. (credit “crowd”: modification of work by James Cridland; credit “students”: modification of work by Laurie Sullivan)
Now that we have a sample, the next step of the experimental process is to split the participants into experimental and control groups through random assignment. With random assignment, all participants have an equal chance of being assigned to either group. Random assignment is critical for sound experimental design. With sufficiently large samples, random assignment makes it unlikely that there are systematic differences between the groups. So, for instance, it would be very unlikely that we would get one group composed entirely of males, a given ethnic identity, or a given religious ideology. This is important because if the groups were systematically different before the experiment began, we would not know the origin of any differences we find between the groups: Were the differences preexisting, or were they caused by manipulation of the independent variable? Random assignment allows us to assume that any differences observed between experimental and control groups result from the manipulation of the independent variable.
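The two procedures described above, drawing a random sample and then randomly assigning it to groups, can be made concrete in a few lines of code. This Python sketch uses a hypothetical roster of student IDs and the sample size of about 200 fourth graders from our example:

```python
import random

random.seed(42)  # fixed seed so this illustration is reproducible

# Hypothetical population: every fourth grader in the city, identified by ID
population = [f"student_{i}" for i in range(2000)]

# Random sampling: every member of the population has an equal
# chance of being selected into the sample
sample = random.sample(population, 200)

# Random assignment: shuffle the sample, then split it in half so every
# participant has an equal chance of landing in either group
random.shuffle(sample)
experimental_group = sample[:100]   # will watch the violent program
control_group = sample[100:]        # will watch the nonviolent program

assert len(experimental_group) == len(control_group) == 100
assert set(experimental_group).isdisjoint(control_group)
```

The sampling step makes the sample representative of the population; the assignment step makes the two groups comparable to each other. Both rely on chance, but they solve different problems.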
Use this online tool to instantly generate randomized numbers and to learn more about random sampling and assignments.
Issues to Consider
While experiments allow scientists to make cause-and-effect claims, they are not without problems. True experiments require the experimenter to manipulate an independent variable, and that can complicate many questions that psychologists might want to address. For instance, imagine that you want to know what effect sex (the independent variable) has on spatial memory (the dependent variable). Although you can certainly look for differences between males and females on a task that taps into spatial memory, you cannot directly control a person’s sex. We categorize this type of research approach as quasi-experimental and recognize that we cannot make cause-and-effect claims in these circumstances.
Many experiments in psychology are quasi-experimental because factors of interest cannot be realistically or ethically assigned. For instance, you would not be able to conduct an experiment designed to determine if experiencing abuse as a child leads to lower levels of self-esteem among adults. To conduct such an experiment, you would need to randomly assign some experimental participants to a group that receives abuse, and that experiment would be unethical.
Interpreting Experimental Findings
Once data is collected from both the experimental and the control groups, a statistical analysis is conducted to find out if there are meaningful differences between the two groups. A statistical analysis determines how likely it is that any difference found is due to chance. In most areas of psychology, group differences are considered meaningful, or significant, if the odds that these differences occurred by chance alone are 5 percent or less. In other words, if there were really no effect and we repeated the experiment many times, a difference this large would turn up by chance in no more than about 5 of every 100 repetitions.
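One intuitive way to see what "occurred by chance alone" means is a permutation test: repeatedly shuffle the group labels and check how often a difference as large as the observed one appears. This Python sketch, using invented counts of aggressive acts from our TV example (the source reports no actual data), illustrates the idea:

```python
import random

def permutation_p_value(group_a, group_b, n_permutations=10_000, seed=0):
    """Estimate how often a group difference at least as large as the
    observed one arises when group labels are shuffled at random."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = group_a + group_b
    n_a = len(group_a)
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # reassign scores to groups purely by chance
        a, b = pooled[:n_a], pooled[n_a:]
        diff = abs(sum(a) / n_a - sum(b) / len(b))
        if diff >= observed:
            count += 1
    return count / n_permutations

# Hypothetical counts of aggressive acts observed on the playground
violent_tv = [5, 7, 6, 8, 9, 6, 7, 8]
nonviolent_tv = [3, 4, 2, 5, 4, 3, 5, 4]
p = permutation_p_value(violent_tv, nonviolent_tv)
print(p < 0.05)  # True: a gap this large almost never occurs by chance alone
```

Researchers more often use standard tests such as the t-test, but the logic is the same: significance means the observed difference would be rare if chance alone were at work.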
The greatest strength of experiments is the ability to assert that any significant differences in the findings are caused by the independent variable. This occurs because random selection, random assignment, and a design that limits the effects of both experimenter bias and participant expectancy should create groups that are similar in composition and treatment. Therefore, any difference between the groups is attributable to the independent variable, and now we can finally make a causal statement.
Reporting Research
When psychologists complete a research project, they generally want to share their findings with other scientists. The American Psychological Association (APA) publishes a manual detailing how to write a paper for submission to scientific journals. Unlike an article that might be published in a magazine like Psychology Today, which targets a general audience with an interest in psychology, scientific journals generally publish peer-reviewed journal articles aimed at an audience of professionals and scholars who are actively involved in research themselves.
The Online Writing Lab (OWL) at Purdue University can walk you through the APA writing guidelines.
A peer-reviewed journal article is read by several other scientists (generally anonymously) with expertise in the subject matter. These peer reviewers provide feedback—to both the author and the journal editor—regarding the quality of the draft. Peer reviewers look for a strong rationale for the research being described, a clear description of how the research was conducted, and evidence that the research was conducted in an ethical manner. They also look for flaws in the study’s design, methods, and statistical analyses. They check that the conclusions drawn by the authors seem reasonable given the observations made during the research. Peer reviewers also comment on how valuable the research is in advancing the discipline’s knowledge. This helps prevent unnecessary duplication of research findings in the scientific literature and, to some extent, ensures that each research article provides new information. Ultimately, the journal editor will compile all of the peer reviewer feedback and determine whether the article will be published in its current state (a rare occurrence), published with revisions, or not accepted for publication.
Peer review provides some degree of quality control for psychological research. Poorly conceived or executed studies can be weeded out, and even well-designed research can be improved by the revisions suggested. Peer review also ensures that the research is described clearly enough to allow other scientists to replicate it, meaning they can repeat the experiment using different samples to determine reliability. Sometimes replications involve additional measures that expand on the original finding. In any case, each replication serves to provide more evidence to support the original research findings. Successful replications of published research make scientists more apt to adopt those findings, while repeated failures tend to cast doubt on the legitimacy of the original article and lead scientists to look elsewhere. For example, it would be a major advancement in the medical field if a published study indicated that taking a new drug helped individuals achieve a healthy weight without changing their diet. But if other scientists could not replicate the results, the original study’s claims would be questioned.
Some scientists have claimed that routine childhood vaccines cause some children to develop autism, and, in fact, research making these claims appeared in several peer-reviewed publications. Since the initial reports, large-scale epidemiological research has suggested that vaccinations are not responsible for causing autism and that it is much safer to have your child vaccinated than not. Furthermore, several of the original studies making this claim have since been retracted.
A published piece of work can be retracted when its data are called into question because of falsification, fabrication, or serious research design problems. A retraction informs the scientific community that there are serious problems with the original publication. Retractions can be initiated by the researcher who led the study, by research collaborators, by the institution that employed the researcher, or by the editorial board of the journal in which the article was originally published. In the vaccine-autism case, the retraction was made because of a significant conflict of interest in which the leading researcher had a financial interest in establishing a link between childhood vaccines and autism (Offit, 2008). Unfortunately, the initial studies received so much media attention that many parents around the world became hesitant to have their children vaccinated (figure below). For more information about how the vaccine/autism story unfolded, as well as the repercussions of this story, take a look at Paul Offit’s book, Autism’s False Prophets: Bad Science, Risky Medicine, and the Search for a Cure.
Vaccines are essential in preventing a wide range of dangerous diseases and have no empirical link to developmental disorders such as autism. (credit: modification of work by UNICEF Sverige)
RELIABILITY AND VALIDITY
Reliability and validity are two important considerations that must be made with any type of data collection. Reliability refers to the ability to consistently produce a given result. In the context of psychological research, this would mean that any instruments or tools used to collect data do so in consistent, reproducible ways.
Unfortunately, being consistent in measurement does not necessarily mean that you have measured something correctly. To illustrate this concept, consider a kitchen scale that would be used to measure the weight of cereal that you eat in the morning. If the scale is not properly calibrated, it may consistently under- or overestimate the amount of cereal that’s being measured. While the scale is highly reliable in producing consistent results (e.g., the same amount of cereal poured onto the scale produces the same reading each time), those results are not valid. Validity refers to the extent to which a given instrument or tool accurately measures what it’s supposed to measure. Any valid measure is by necessity reliable (you can’t measure something accurately if you’re not getting consistent results), but the reverse is not necessarily true. Researchers strive to use instruments that are both highly reliable and valid.
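The kitchen-scale example can be simulated directly. In this Python sketch (the weights, bias, and noise levels are invented for illustration), one scale is miscalibrated but consistent, while another is accurate on average but noisy:

```python
import random
import statistics

random.seed(1)

TRUE_WEIGHT = 50.0  # grams of cereal actually on the scale

def read_scale(bias, noise_sd):
    """Simulate one reading from a scale with a calibration bias
    and some random measurement noise."""
    return TRUE_WEIGHT + bias + random.gauss(0, noise_sd)

# A miscalibrated but consistent scale: reliable, not valid
biased_readings = [read_scale(bias=10.0, noise_sd=0.1) for _ in range(100)]

# A well-calibrated but noisy scale: valid on average, less reliable
noisy_readings = [read_scale(bias=0.0, noise_sd=5.0) for _ in range(100)]

print(round(statistics.mean(biased_readings), 1))   # ~60: consistently 10 g too high
print(round(statistics.stdev(biased_readings), 2))  # tiny spread: very consistent
print(round(statistics.mean(noisy_readings), 1))    # near 50: accurate on average
print(round(statistics.stdev(noisy_readings), 1))   # large spread: inconsistent
```

The first scale's small spread is its reliability; its 10-gram offset is its lack of validity. A good instrument needs both a small spread and a mean near the true value.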
SUMMARY
A correlation is described with a correlation coefficient, r, which ranges from -1 to 1. The correlation coefficient tells us about the nature (positive or negative) and the strength of the relationship between two or more variables. Correlations do not tell us anything about causation—regardless of how strong the relationship is between variables. In fact, the only way to demonstrate causation is by conducting an experiment. People often make the mistake of claiming that correlations exist when they really do not.
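The arithmetic behind r can be sketched in a few lines of Python. This is an illustrative aside, not part of the original text, and the temperature and ice cream sales figures below are hypothetical numbers chosen only to show a strong positive correlation:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient: covariance of the two variables
    divided by the product of their standard deviations."""
    mean_x, mean_y = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical data: as temperature rises, so do ice cream sales.
temps = [60, 65, 70, 75, 80, 85]
sales = [20, 26, 30, 38, 41, 47]
print(pearson_r(temps, sales))  # a value close to +1: strong positive correlation
```

Note that even a coefficient near +1 here would say nothing about causation; as the ice cream example shows, a third variable (temperature) can drive both.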
Researchers can test cause-and-effect hypotheses by conducting experiments. Ideally, experimental participants are randomly selected from the population of interest. Then, the participants are randomly assigned to their respective groups. Sometimes, the researcher and the participants are blind to group membership to prevent their expectations from influencing the results.
In ideal experimental design, the only difference between the experimental and control groups is whether participants are exposed to the experimental manipulation. Each group goes through all phases of the experiment, but each group will experience a different level of the independent variable: the experimental group is exposed to the experimental manipulation, and the control group is not exposed to the experimental manipulation. The researcher then measures the changes that are produced in the dependent variable in each group. Once data is collected from both groups, it is analyzed statistically to determine if there are meaningful differences between the groups.
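One common way to ask whether a group difference is meaningful is a t-test on the dependent variable. The Python sketch below is a minimal illustration of that idea (Welch's t statistic), using made-up scores rather than anything from the text:

```python
import statistics

def welch_t(group_a, group_b):
    """Welch's t statistic: the difference between group means scaled by
    the standard error of that difference. Larger |t| means the observed
    difference is less likely to be due to chance alone."""
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    standard_error = (var_a / len(group_a) + var_b / len(group_b)) ** 0.5
    return (mean_a - mean_b) / standard_error

# Hypothetical scores on the dependent variable for each group.
experimental = [78, 82, 85, 80, 88, 84]
control = [70, 74, 69, 75, 72, 71]
print(f"t = {welch_t(experimental, control):.2f}")
```

In practice researchers would convert t into a p-value (given the degrees of freedom) to decide whether to reject the hypothesis that the manipulation had no effect.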
Psychologists report their research findings in peer-reviewed journal articles. Research published in this format is checked by several other psychologists who serve as a filter separating ideas that are supported by evidence from ideas that are not. Replication has an important role in ensuring the legitimacy of published research. In the long run, only those findings that are capable of being replicated consistently will achieve consensus in the scientific community.
References:
OpenStax Psychology text by Kathryn Dumper, William Jenkins, Arlene Lacombe, Marilyn Lovett and Marion Perlmutter licensed under CC BY v4.0. https://openstax.org/details/books/psychology
Exercises
Review Questions:
1. Height and weight are positively correlated. This means that:
a. There is no relationship between height and weight.
b. Usually, the taller someone is, the thinner they are.
c. Usually, the shorter someone is, the heavier they are.
d. As height increases, typically weight increases.
2. Which of the following correlation coefficients indicates the strongest relationship between two variables?
a. -.90
b. -.50
c. +.80
d. +.25
3. Which statement best illustrates a negative correlation between the number of hours spent watching TV the week before an exam and the grade on that exam?
a. Watching too much television leads to poor exam performance.
b. Smart students watch less television.
c. Viewing television interferes with a student’s ability to prepare for the upcoming exam.
d. Students who watch more television perform more poorly on their exams.
4. The correlation coefficient indicates the weakest relationship when ________.
a. it is closest to 0
b. it is closest to -1
c. it is positive
d. it is negative
5. ________ means that everyone in the population has the same likelihood of being asked to participate in the study.
a. operationalizing
b. placebo effect
c. random assignment
d. random sampling
6. The ________ is controlled by the experimenter, while the ________ represents the information collected and statistically analyzed by the experimenter.
a. dependent variable; independent variable
b. independent variable; dependent variable
c. placebo effect; experimenter bias
d. experiment bias; placebo effect
7. Researchers must ________ important concepts in their studies so that others have a clear understanding of exactly how those concepts were defined.
a. randomly assign
b. randomly select
c. operationalize
d. generalize
8. Sometimes, researchers will administer a(n) ________ to participants in the control group to control for the effects that participant expectation might have on the experiment.
a. dependent variable
b. independent variable
c. statistical analysis
d. placebo
Critical Thinking Questions:
1. Earlier in this section, we read about research suggesting that there is a correlation between eating cereal and weight. Cereal companies that present this information in their advertisements could lead someone to believe that eating more cereal causes healthy weight. Why would they make such a claim and what arguments could you make to counter this cause-and-effect claim?
2. Recently a study was published in the journal Nutrition and Cancer, which established a negative correlation between coffee consumption and breast cancer. Specifically, it found that women consuming more than 5 cups of coffee a day were less likely to develop breast cancer than women who never consumed coffee (Lowcock, Cotterchio, Anderson, Boucher, & El-Sohemy, 2013). Imagine you see a newspaper story about this research that says, “Coffee Protects Against Cancer.” Why is this headline misleading and why would a more accurate headline draw less interest?
3. Sometimes, true random sampling can be very difficult to obtain. Many researchers make use of convenience samples as an alternative. For example, one popular convenience sample would involve students enrolled in Introduction to Psychology courses. What are the implications of using this sampling technique?
4. Peer review is an important part of publishing research findings in many scientific disciplines. This process is normally conducted anonymously; in other words, the author of the article being reviewed does not know who is reviewing the article, and the reviewers are unaware of the author’s identity. Why would this be an important part of this process?
Personal Application Questions:
1. We all have a tendency to make illusory correlations from time to time. Try to think of an illusory correlation that is held by you, a family member, or a close friend. How do you think this illusory correlation came about, and what can be done in the future to combat it?
2. Are there any questions about human or animal behavior that you would really like to answer? Generate a hypothesis and briefly describe how you would conduct an experiment to answer your question.
Glossary:
cause-and-effect relationship
confirmation bias
confounding variable
control group
correlation
correlation coefficient
dependent variable
double-blind study
experimental group
experimenter bias
illusory correlation
independent variable
negative correlation
operational definition
participants
peer-reviewed journal article
placebo effect
positive correlation
random assignment
random sample
reliability
replicate
single-blind study
statistical analysis
validity
Answers to Exercises
Review Questions:
1. D
2. A
3. D
4. A
5. D
6. B
7. C
8. D
Critical Thinking Questions:
1. The cereal companies are trying to make a profit, so framing the research findings in this way would improve their bottom line. However, it could be that people who forgo more fatty options for breakfast are health conscious and engage in a variety of other behaviors that help them maintain a healthy weight.
2. Using the word protects seems to suggest causation as a function of correlation. If the headline were more accurate, it would be less interesting because indicating that two things are associated is less powerful than indicating that doing one thing causes a change in the other.
3. If research is limited to students enrolled in Introduction to Psychology courses, then our ability to generalize to the larger population would be dramatically reduced. One could also argue that students enrolled in Introduction to Psychology courses may not be representative of the larger population of college students at their school, much less the larger general population.
4. Anonymity protects against personal biases interfering with the reviewer’s opinion of the research. Allowing the reviewer to remain anonymous would mean that they can be honest in their appraisal of the manuscript without fear of reprisal.
Glossary:
cause-and-effect relationship: changes in one variable cause the changes in the other variable; can be determined only through an experimental research design
confirmation bias: tendency to ignore evidence that disproves ideas or beliefs
confounding variable: unanticipated outside factor that affects both variables of interest, often giving the false impression that changes in one variable cause changes in the other variable when, in actuality, the outside factor causes changes in both variables
control group: serves as a basis for comparison and controls for chance factors that might influence the results of the study—by holding such factors constant across groups so that the experimental manipulation is the only difference between groups
correlation: relationship between two or more variables; when two variables are correlated, one variable changes as the other does
correlation coefficient: number from -1 to +1, indicating the strength and direction of the relationship between variables, and usually represented by r
dependent variable: variable that the researcher measures to see how much effect the independent variable had
double-blind study: experiment in which both the researchers and the participants are blind to group assignments
experimental group: group designed to answer the research question; experimental manipulation is the only difference between the experimental and control groups, so any differences between the two are due to experimental manipulation rather than chance
experimenter bias: researcher expectations skew the results of the study
illusory correlation: seeing relationships between two things when in reality no such relationship exists
independent variable: variable that is influenced or controlled by the experimenter; in a sound experimental study, the independent variable is the only important difference between the experimental and control group
negative correlation: two variables change in different directions, with one becoming larger as the other becomes smaller; a negative correlation is not the same thing as no correlation
operational definition: description of what actions and operations will be used to measure the dependent variables and manipulate the independent variables
participants: subjects of psychological research
peer-reviewed journal article: article read by several other scientists (usually anonymously) with expertise in the subject matter, who provide feedback regarding the quality of the manuscript before it is accepted for publication
placebo effect: people’s expectations or beliefs influencing or determining their experience in a given situation
positive correlation: two variables change in the same direction, both becoming either larger or smaller
random assignment: method of experimental group assignment in which all participants have an equal chance of being assigned to either group
random sample: subset of a larger population in which every member of the population has an equal chance of being selected
reliability: consistency and reproducibility of a given result
replicate: repeating an experiment using different samples to determine the research’s reliability
single-blind study: experiment in which the researcher knows which participants are in the experimental group and which are in the control group
statistical analysis: determines how likely any difference between experimental groups is due to chance
validity: accuracy of a given result in measuring what it is designed to measure