On the (non)persuasive power of a brain image
The persuasive power of brain images has captivated scholars in many disciplines. Like others, we too were intrigued by the finding that a brain image makes accompanying information more credible (McCabe & Castel in Cognition 107:343-352, 2008). But when our attempts to build on this effect failed, we instead ran a series of systematic replications of the original study—comprising 10 experiments and nearly 2,000 subjects. When we combined the original data with ours in a meta-analysis, we arrived at a more precise estimate of the effect, determining that a brain image exerted little to no influence. The persistent meme of the influential brain image should be viewed with a critical eye.
Keywords: Judgment and decision making; Neuroimaging; Statistics
A number of psychological research findings capture our attention. Take the finding that people agree more with the conclusions in a news article when it features an image of the brain, even though that image is nonprobative—providing no information about the accuracy of the conclusions already in the text of the article (McCabe & Castel, 2008). In a time of explosive growth in the field of brain research and the encroaching inevitability of neuroscientific evidence in courtrooms, the persuasive influence of a brain image is both intriguing and worrying.
Perhaps because of its implications, this research has received much attention in both the popular and scholarly press (nearly 40 citations per year, according to Google Scholar, as of November 30, 2012). Although McCabe and Castel (2008) did not overstate their findings, many others have. Sometimes, these overstatements were linguistic exaggerations. One author of a paper in a medical journal reported that “brain images . . . can be extremely misleading” (Smith, 2010). Other authors of a paper in a social issues journal concluded, “clearly people are too easily convinced” (Kang, Inzlicht, & Derks, 2010). Other overstatements made claims beyond what McCabe and Castel themselves reported: In an education journal, authors wrote that “brain images make both educators and scientists more likely to believe the statements” (Hinton & Fischer, 2008), while in a forensic psychiatric journal, others worried about “the potential of neuroscientific data to hold significant prejudicial, and at times, dubious probative, value for addressing questions relevant to criminal responsibility and sentencing mitigation” (Treadway & Buckholtz, 2011).
These and other misrepresentations show that the persuasive power of brain images captivates scholars in many disciplines. We too were captivated by this finding and attempted to build on it—but were surprised when we had difficulty obtaining McCabe and Castel’s (2008) basic finding. Moreover, in searching the published literature, we were likewise surprised to discover that the effect had not been replicated. In one paper, some brain images influenced subjects’ evaluations of an article’s credibility more than others did, but because there was no condition in which subjects evaluated the article without a brain image, we cannot draw conclusions about the power of brain images per se (Keehner, Mayberry, & Fischer, 2011). Other papers show that written neuroscience information makes “bad” explanations of psychological phenomena seem more satisfying (Weisberg, Keil, Goodstein, Rawson, & Gray, 2008) and that written fMRI evidence can even lead to more guilty verdicts in a mock juror trial (McCabe, Castel, & Rhodes, 2011). We even found work in another domain showing that meaningless mathematics boosts the judged quality of research abstracts (Eriksson, 2012). But we did not find any other evidence that brain images themselves wield power.
Given the current discussion in psychological science regarding the importance of replication (see the November 2012 Perspectives on Psychological Science, the February 2012 Observer, and www.psychfiledrawer.org), we turned our attention to a concentrated attempt to estimate more precisely how much more people will agree with an article’s conclusions when it is accompanied by a brain image. Here, we report a meta-analysis including McCabe and Castel’s (2008) original data and 10 of our own experiments that use their materials.1 We arrive at a more precise estimate of the size of the effect, concluding that a brain image exerts little to no influence on the extent to which people agree with the conclusions of a news article.
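The logic of such a meta-analysis can be illustrated with a minimal sketch of inverse-variance (fixed-effect) pooling, in which each experiment's effect is weighted by the inverse of its variance. The function and the numbers below are hypothetical illustrations only, not our actual data or analysis code:

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance (fixed-effect) pooling of per-experiment effects.

    effects:   per-experiment raw mean differences (e.g., brain image
               minus no image on a 1-4 agreement scale)
    variances: squared standard errors of those differences
    Returns the pooled estimate and its 95% confidence interval.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # SE of the pooled estimate
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical raw mean differences and variances for illustration.
effects = [0.10, -0.05, 0.08, 0.02]
variances = [0.010, 0.020, 0.015, 0.012]
estimate, ci = fixed_effect_meta(effects, variances)
```

More precise experiments (smaller variances) receive more weight, which is why pooling many experiments yields a narrower interval than any single study provides.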
Table 1. Characteristics of our 10 experiments included in the meta-analysis. Subject sources included the Victoria undergraduate subject pool, Wellington high school students, and the Victoria Intro Psyc subject pool.
In each experiment, we manipulated, between subjects, the presence or absence of a brain image.
We told subjects that they were taking part in a study examining visual and verbal learning styles. Subjects then read a brief news article, “Brain Scans Can Detect Criminals,” from McCabe and Castel’s (2008) third experiment. The article was from the BBC News Web site and summarized a study discussed in Nature (BBC News, 2005; Wild, 2005). All of our attempts to replicate focused on this experiment, because it produced McCabe and Castel’s largest effect (d = 0.40). Although there were two other experiments in their paper, the first used different materials and a different dependent measure and was a within-subjects design. Their second experiment, also a within-subjects design, examined the effects of different types of brain images but did not have a baseline condition with no brain image and, therefore, did not permit brain versus no-brain comparisons.
In their third experiment, McCabe and Castel (2008) used a 2 × 2 between-subjects design, manipulating (1) the presence or absence of a brain image depicting activity in the frontal lobes and (2) whether the article featured experts critiquing the article’s claims. Although they did not explain the rationale for the critique manipulation, it stands to reason that criticism would counteract the persuasive influence of a brain image. Indeed, they adopted that reasoning in a later paper showing that the persuasive influence of written neuroscientific evidence on juror verdicts decreases when the validity of that evidence is questioned (McCabe et al., 2011). McCabe and Castel found that the critique manipulation did not influence people’s ratings of the article’s conclusions, nor did it interact with the presence of a brain image, so in Experiments 1–5 we used the article without the critique.
But when we took a closer look at McCabe and Castel’s (2008) raw data, we found that the influence of the brain image was larger when the article’s claims were critiqued than when they were not: with the critique, t(52) = 2.07, p = .04, d = 0.56; without it, t(52) = 1.13, p = .27, d = 0.31. Note that this surprising result runs counter to the explanation that evidence is less influential when its validity is called into question (McCabe et al., 2011). With these findings in mind, in Experiments 6–10, we used the version of the article in which experts criticized its claims in an extra 100 words.
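The reported effect sizes follow from the t statistics. As a sketch of the conversion for an independent-samples design (the equal group sizes of n = 27 per condition are our inference from df = 52, not stated in the text):

```python
import math

def cohens_d_from_t(t, n1, n2):
    """Cohen's d from an independent-samples t statistic:
    d = t * sqrt(1/n1 + 1/n2)."""
    return t * math.sqrt(1.0 / n1 + 1.0 / n2)

# df = n1 + n2 - 2 = 52 with equal groups implies n1 = n2 = 27
print(round(cohens_d_from_t(2.07, 27, 27), 2))  # critique: 0.56
print(round(cohens_d_from_t(1.13, 27, 27), 2))  # no critique: 0.31
```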
Subjects were randomly assigned to a condition in which the article appeared alone or a condition in which an image of the brain appeared alongside the article. After reading the article (critiqued or not), subjects responded to the statement “Do you agree or disagree with the conclusion that brain imaging can be used as a lie detector?” on a scale from 1 (strongly disagree) to 4 (strongly agree). In all online experiments, subjects then encountered an attention check, which they had to pass to stay in the data set (Oppenheimer, Meyvis, & Davidenko, 2009).2
Results and discussion
Table 2. Summary of results of our 10 experiments included in the meta-analysis, with 95% CIs. Samples: (1) Mechanical Turk; (2) Victoria undergraduate subject pool; (3) Wellington high school students; (4) Mechanical Turk; (5) Victoria Intro Psyc subject pool; (6) Mechanical Turk [critique]; (7) general public [critique]; (8) Mechanical Turk [critique]; (9) Mechanical Turk [critique]; (10) Mechanical Turk [critique].
It is surprising, however, that an image of the brain exerted little to no influence on people’s judgments. We know that images can exert powerful effects on cognition—in part, because they facilitate connections to prior knowledge. For instance, when pictures clarify complex ideas (such as the workings of a bicycle pump) and bridge the gap between what nonexperts know and do not know, people comprehend and remember that material better (Mayer & Gallini, 1990; see Carney & Levin, 2002, for a review).
Manipulations like these that boost comprehension can also make other concepts related to the material feel more easily available in memory, and we know that people interpret this feeling of ease as diagnostic of familiarity and truth (Newman, Garry, Bernstein, Kantner, & Lindsay, 2012; Tversky & Kahneman, 1973; Whittlesea, 1993; see Alter & Oppenheimer, 2009, for a review). But a brain image depicting activity in the frontal lobes is different. To people who may not understand how fMRI works, or even where the frontal lobes are, seeing an image of the brain may not be any more helpful than seeing an ink blot. It seems reasonable to speculate, therefore, that images of the brain are like other technical images: To people who cannot connect them to prior knowledge, there is no boost of comprehension, nor a feeling of increased cognitive availability. This speculation leads directly to an interesting question: To what extent is the influence of a brain image moderated by prior knowledge?
Another explanation for the trivial effect of brain images is that people have become more skeptical about neuroscience information since McCabe and Castel’s (2008) study. Indeed, the media itself has begun engaging in critical self-reflection. For instance, a recent article in the New York Times railed against the “cultural tendency, in which neuroscientific explanations eclipse historical, political, economic, literary and journalistic interpretations of experience” and “phenomena like neuro law, which, in part, uses the evidence of damaged brains as the basis for legal defense of people accused of heinous crimes” (Quart, 2012). If people have indeed grown skeptical, we might then expect them to also be protected against the influence of other forms of neuroscience information. To test this hypothesis, we ran a series of replications of another well-known 2008 study showing that people rated bad explanations of scientific phenomena as more satisfying when those explanations featured neuroscience language (Weisberg et al., 2008).
Table 3. Summary of results of experiments replicating Weisberg, Keil, Goodstein, Rawson, and Gray (2008), with 95% CIs. Samples: five Mechanical Turk experiments.
Why the disparity, then, between the trivial effects of a brain image and the more marked effects of neuroscience language? A closer inspection reveals that the Weisberg et al. (2008) study is not simply the language analog of the McCabe and Castel (2008) study. For instance, Weisberg et al. found that neuroscience language makes bad explanations seem better but has less (or no) effect on good explanations. By contrast, McCabe and Castel did not vary the quality of their explanations. In addition, Weisberg et al. compared written information that did or did not feature neuroscience language, but McCabe and Castel added a brain image to an article that already featured some neuroscience language. Perhaps, then, the persuasive influence of the brain image is small when people have already been swayed by the neuroscience language in the article. Although such a possibility is outside the scope of this article, it is an important question for future research.
How are we to understand our results, given that other research shows that some brain images are more influential than others (Keehner et al., 2011)? One possibility is that Keehner and colleagues’ within-subjects design—in which subjects considered a series of five different brain images—encouraged people to rely on relative ease of processing when making judgments across the different images (Alter & Oppenheimer, 2008). By contrast, McCabe and Castel’s (2008) between-subjects design does not allow people to adopt this strategy. And recall, of course, that because Keehner and colleagues did not compare the influence of any brain image with that of no brain image, their work still does not address that basic question.
Although our findings do not support popular descriptions of the persuasiveness of brain images, they do fit with very recent research and discussions questioning their allure (Farah & Hook, 2013; Gruber & Dickerson, 2012). Importantly, our estimation approach avoids the dichotomous thinking that dominates media discourse of popular psychological effects and, instead, emphasizes—in accord with APA standards—interpretation of results based on point and interval estimates. Furthermore, research in the domain of jury decision making suggests that brain images have little or no independent influence on juror verdicts—a context in which the persuasive influence of a brain image would have serious consequences (Greene & Cahill, 2012; Schweitzer & Saks, 2011; Schweitzer et al., 2011). Taken together, these findings and ours present compelling evidence that when it comes to brains, the “amazingly persistent meme of the overly influential image”3 has been wildly overstated.
We thank Alan Castel for sharing his materials with us and for his many helpful discussions.
2. When we included subjects who failed the attention check in the meta-analysis, the estimated raw effect size was an even smaller 0.04, 95% CI [0.00, 0.11]. Across studies, exclusion rates varied from 24% to 31% of subjects. These rates are lower than those found by Oppenheimer et al. (2009).
3. We thank Martha Farah (personal communication, June 20, 2012) for coining this delightful term.
We are grateful for the support of the New Zealand Government through the Marsden Fund, administered by the Royal Society of New Zealand on behalf of the Marsden Fund Council. Robert B. Michael gratefully acknowledges support from Victoria University of Wellington.