How much influence does an image of the brain have on people’s agreement with the conclusions of a news article? To answer this question, we first calculated the raw effect size for each experiment: the difference between the mean agreement ratings of people in the brain and no-brain conditions. We report these findings in Table 2.
Table 2 Summary of results of our 10 experiments included in the meta-analysis
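As a concrete illustration of that calculation, here is a minimal sketch in Python. The function name and the rating vectors are ours, and the data are hypothetical; the point is only that the raw effect size is a simple difference of condition means.

```python
import numpy as np

def raw_effect_size(brain_ratings, no_brain_ratings):
    """Raw effect size: the difference between mean agreement ratings
    in the brain and no-brain conditions (hypothetical data)."""
    return np.mean(brain_ratings) - np.mean(no_brain_ratings)

# Hypothetical agreement ratings on a 4-point scale
brain = np.array([3, 2, 3, 4, 2, 3])
no_brain = np.array([3, 2, 2, 3, 3, 2])
print(raw_effect_size(brain, no_brain))  # positive values favor the brain condition
```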
To find a more precise estimate of the size of the effect, we used ESCI software (Cumming, 2012) to run a random effects model meta-analysis of our 10 experiments and two findings from McCabe and Castel (2008). We display the results in Fig. 1. The meta-analysis yielded an estimated raw effect size of 0.07, 95 % CI [0.00, 0.14], z = 1.84, p = .07. On a 4-point scale, this estimate represents movement up the scale by 0.07 points, or 2.4 % (cf. McCabe and Castel’s [2008] original raw effect size of 0.26, or 8.7 %).

We also found no evidence of heterogeneity across the experiments. Tau, the estimated standard deviation of the true effect sizes across experiments, was small (0.07), and the CI included zero as a plausible value (95 % CI [0, 0.13]; note, of course, that tau cannot be less than 0), suggesting that the observed variation across experiments could plausibly be attributed to sampling variability. This finding is important because, at first glance, it appears as though the brain image might be more persuasive on paper than online, but the statistics do not support this idea.

We also examined the impact of other potentially important moderators. We ran an analysis of covariance on the dependent measure, with age and education as covariates and condition (brain, no brain) as the independent variable. Neither covariate interacted with condition, suggesting that the persuasive influence of a brain image is not moderated by age or education.
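We do not reproduce ESCI’s internal routines here, but the logic of the random effects model above can be sketched with a standard DerSimonian–Laird estimator. This is a generic illustration, not ESCI’s implementation; the function name and inputs (per-study raw mean differences and their squared standard errors) are our own.

```python
import numpy as np
from scipy import stats

def dersimonian_laird(effects, variances):
    """Random-effects meta-analysis with the DerSimonian-Laird tau^2
    estimator. `effects` are per-study raw mean differences;
    `variances` are their squared standard errors."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    k = len(effects)

    w = 1.0 / variances                          # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)      # fixed-effect pooled estimate
    q = np.sum(w * (effects - fixed) ** 2)       # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)           # between-study variance (floored at 0)

    w_star = 1.0 / (variances + tau2)            # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    z = pooled / se
    p = 2 * stats.norm.sf(abs(z))                # two-tailed p value
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    return pooled, ci, z, p, np.sqrt(tau2)       # tau reported as a standard deviation
```

Note that tau is returned as a standard deviation, matching the way we report it above; because tau² is floored at zero, tau can never be negative, which is why its CI is bounded below by 0.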
How are we to understand the size of the brain image effect in context? Consider subjects’ hypothetical responses, as shown in Fig. 2. The line marked “No Brain” represents the weighted mean agreement of subjects who read the article without a brain image. The line marked “Brain” represents how far subjects’ agreement would shift, on average, if they had read the article with a brain image. This figure, coupled with the meta-analysis, makes it strikingly clear that the image of the brain exerted little to no influence. The exaggerations of McCabe and Castel’s (2008) work by other researchers seem even more worrisome in light of this more precise estimate of the effect size.
It is surprising, however, that an image of the brain exerted so little influence on people’s judgments. We know that images can exert powerful effects on cognition, in part because they facilitate connections to prior knowledge. For instance, when pictures clarify complex ideas (such as the workings of a bicycle pump) and bridge the gap between what nonexperts know and do not know, people comprehend and remember that material better (Mayer & Gallini, 1990; see Carney & Levin, 2002, for a review).
Such comprehension-boosting manipulations can also make concepts related to the material feel more easily available in memory, and we know that people interpret this feeling of ease as diagnostic of familiarity and truth (Newman, Garry, Bernstein, Kantner, & Lindsay, 2012; Tversky & Kahneman, 1973; Whittlesea, 1993; see Alter & Oppenheimer, 2009, for a review). But a brain image depicting activity in the frontal lobes is different. To people who do not understand how fMRI works, or even where the frontal lobes are, seeing an image of the brain may be no more helpful than seeing an ink blot. It seems reasonable to speculate, therefore, that images of the brain are like other technical images: For people who cannot connect them to prior knowledge, they confer neither a boost in comprehension nor a feeling of increased cognitive availability. This speculation leads directly to an interesting question: To what extent is the influence of a brain image moderated by prior knowledge?
Another explanation for the trivial effect of brain images is that people have become more skeptical of neuroscience information since McCabe and Castel’s (2008) study. Indeed, the media themselves have begun engaging in critical self-reflection. For instance, a recent article in the New York Times railed against the “cultural tendency, in which neuroscientific explanations eclipse historical, political, economic, literary and journalistic interpretations of experience” and “phenomena like neuro law, which, in part, uses the evidence of damaged brains as the basis for legal defense of people accused of heinous crimes” (Quart, 2012). If people have indeed grown skeptical, we might expect them also to be protected against the influence of other forms of neuroscience information. To test this hypothesis, we ran a series of replications of another well-known study showing that people rated bad explanations of scientific phenomena as more satisfying when those explanations featured neuroscience language (Weisberg, Keil, Goodstein, Rawson, & Gray, 2008).
Our five replications, which appear in Table 3, produced similar patterns of results. To estimate the size of the effect more precisely, we followed the same approach as before, running a random effects model meta-analysis of our five experiments and the original finding from Weisberg et al. (2008). The result was an estimated raw effect size of 0.40, 95 % CI [0.23, 0.57], z = 4.71, p < .01. On a 7-point scale, this estimate represents movement up the scale by 0.40 points, or 6.67 %. The CI does not include zero as a plausible value, providing evidence against the idea that people have become savvy enough about neuroscience to be protected against its influence more generally.
Table 3 Summary of results of experiments replicating Weisberg, Keil, Goodstein, Rawson, and Gray (2008)
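The percent-of-scale figures here and above appear to follow from dividing the raw shift by the scale’s range (the number of scale points minus one); assuming that conversion, the two reported percentages work out as

$$
\frac{0.40}{7-1} \times 100\,\% \approx 6.67\,\%, \qquad \frac{0.26}{4-1} \times 100\,\% \approx 8.7\,\%.
$$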
Why the disparity, then, between the trivial effects of a brain image and the more marked effects of neuroscience language? A closer inspection reveals that the Weisberg et al. (2008) study is not simply the language analog of the McCabe and Castel (2008) study. For instance, Weisberg et al. found that neuroscience language makes bad explanations seem better but has less (or no) effect on good explanations. By contrast, McCabe and Castel did not vary the quality of their explanations. In addition, Weisberg et al. compared written information that did or did not feature neuroscience language, but McCabe and Castel added a brain image to an article that already featured some neuroscience language. Perhaps, then, the persuasive influence of the brain image is small when people have already been swayed by the neuroscience language in the article. Although such a possibility is outside the scope of this article, it is an important question for future research.
How are we to understand our results, given that other research shows that some brain images are more influential than others (Keehner et al., 2011)? One possibility is that Keehner and colleagues’ within-subjects design, in which subjects considered a series of five different brain images, encouraged people to rely on relative ease of processing when making judgments across the different images (Alter & Oppenheimer, 2008). By contrast, McCabe and Castel’s (2008) between-subjects design did not allow people to adopt this strategy. Recall, too, that because Keehner and colleagues never compared the influence of a brain image with that of no brain image at all, their work does not address that basic question.
Although our findings do not support popular descriptions of the persuasiveness of brain images, they do fit with very recent research and discussion questioning their allure (Farah & Hook, 2013; Gruber & Dickerson, 2012). Importantly, our estimation approach avoids the dichotomous thinking that dominates media discourse about popular psychological effects and instead emphasizes, in accord with APA standards, interpretation of results based on point and interval estimates. Furthermore, research on jury decision making suggests that brain images have little or no independent influence on juror verdicts, a context in which the persuasive influence of a brain image would have serious consequences (Greene & Cahill, 2012; Schweitzer & Saks, 2011; Schweitzer et al., 2011). Taken together, these findings and ours present compelling evidence that when it comes to brains, the “amazingly persistent meme of the overly influential image” has been wildly overstated.