
Psychonomic Bulletin & Review, Volume 20, Issue 6, pp 1357–1363

Neuroscientific information bias in metacomprehension: The effect of brain images on metacomprehension judgment of neuroscience research

  • Kenji Ikeda
  • Shinji Kitagami
  • Tomoyo Takahashi
  • Yosuke Hattori
  • Yuichi Ito
Brief Report

Abstract

In the present study, we investigated how brain images affect metacomprehension judgments of neuroscience research. Participants made a prereading judgment of their comprehension of the text topic and then read a text about neuroimaging findings. In Experiment 1, participants read either text alone or text accompanied by brain images. In Experiment 2, participants read text accompanied by bar graphs or text accompanied by brain images. Participants were then asked to rate their comprehension of the text and finally completed comprehension tests. Experiment 1 showed that text accompanied by brain images elicited higher metacomprehension judgments than did text alone, whereas performance on the comprehension test did not differ between conditions. Experiment 2 showed that text accompanied by brain images was associated not only with higher credibility ratings, but also with higher metacomprehension judgments, than was text accompanied by bar graphs, whereas performance on the comprehension test again did not differ between conditions. These findings suggest that readers’ subjective judgments diverge from their actual comprehension.

Keywords

Metacomprehension · Brain image · Credibility · Neuroscientific information

Recently, the number of papers about neuroscience has increased (e.g., Illes, Kirschen, & Gabrieli, 2003; Morein-Zamir & Sahakian, 2010), and neuroscientific information, such as that portrayed by brain images, seems to capture people’s interest. However, interpreting the information conveyed by brain images may be difficult for laypeople, because the images are detailed and complex representations that convey a large amount of visual information. Therefore, brain images may not enhance the lay public’s understanding of neuroscientific information. On the other hand, it has been suggested that brain images affect judgments of scientific credibility (Keehner, Mayberry, & Fischer, 2011; McCabe & Castel, 2008).

McCabe and Castel (2008) examined the notion that brain images affect readers’ perception of the credibility of scientific reasoning. In their first experiment, after reading text only (text-only condition), text with illustrative bar graphs (bar graph condition), and text illustrated with brain images (brain image condition), the participants judged the credibility of the scientific reasoning in the materials. The credibility rating was higher under the brain image condition than under the text-only and bar graph conditions, suggesting that people tend to perceive brain images as plausible depictions of scientific information. This plausibility effect may result from the realism and complexity of brain images (Keehner et al., 2011).

Although people are apt to perceive brain images as plausible, do people believe that they understand the information presented by brain images? Do brain images affect people’s subjective judgments of their comprehension (i.e., metacomprehension judgments)? An examination of this issue is important because the monitoring of subjective states plays an important role in self-regulated learning (e.g., Anderson & Thiede, 2008; Thiede, Anderson, & Therriault, 2003). According to the region of proximal learning framework, people stop trying to learn when they believe that they will not obtain a learning effect even if they were to spend more time trying to do so (Metcalfe, 2011). When metacomprehension judgments reflect actual comprehension, readers learn effectively (Dunlosky & Rawson, 2012). Therefore, if the presentation of brain images were to affect metacomprehension judgments rather than actual comprehension, people might be unable to learn neuroscientific material appropriately. In the present study, we focused on how brain images affect metacomprehension judgments and actual comprehension.

Previous studies on metacomprehension have suggested that the beliefs about a text format that readers hold before they begin reading affect their metacomprehension judgment. In particular, belief about multimedia presentations—that is, the belief that illustrated text is understood better than nonillustrated text—affects metacomprehension judgment. Serra and Dunlosky (2010) demonstrated that readers who read text accompanied by conceptual diagrams or photographs judged their level of comprehension as higher than did those who read text only, reflecting a multimedia belief among the participants. However, performance on the comprehension test was higher for text accompanied by conceptual diagrams than for text only and text accompanied by photographs. Therefore, this effect on metacomprehension judgment occurred regardless of whether the diagrams actually facilitated understanding of the text. Thus, readers may judge their comprehension on the basis of belief about multimedia presentations. Previous studies have suggested that people believe that realistic images improve their performance on tasks such as detecting changes (e.g., Levin, Momen, Drivdahl, & Simons, 2000; Smallman & St. John, 2005). If so, the format in which neuroscientific information is presented may affect not only the credibility of neuroscientific information, but also metacomprehension judgment, because brain images are realistic.

In contrast, the effect of brain images on actual comprehension may be different from this expectation. Specifically, it is possible that brain images do not facilitate the understanding of text. Many previous studies concerning the effects of multimedia presentation have suggested that when diagrams are related to text content, the diagrams help readers to understand the text (e.g., Harp & Mayer, 1997, 1998; Iwaki, 2000; Serra, 2010). However, complex diagrams may not facilitate understanding of the text even when they are related to textual content. Readers may not process all of the elements of complex diagrams but may, instead, select relevant pictorial information and disregard irrelevant information (Mayer, 2009). Therefore, more complex diagrams require the reader to allocate more resources for processing their content, and as a result, comprehension of the text may not be facilitated.

Diagrams with many elements were found to greatly overload working memory despite their direct relationship to text content (Carlson, Chandler, & Sweller, 2003). Butcher (2006) investigated how simple and complex diagrams affected comprehension by using text describing the function of the heart. Participants read text only, text with a simple diagram (i.e., a line drawing of the heart), or text with a complex diagram (i.e., a realistic diagram of the heart). Performance on a comprehension test was higher among those reading text with a simple diagram than among those reading text only, and comprehension among those reading text with a complex diagram was equal to that among those reading text only. These results suggest that even when illustrations are related to text content, complex diagrams do not facilitate the comprehension of text.

Given these multimedia effects on comprehension, the bias in favor of the value of brain images in metacomprehension judgment may conflict with the effect of brain images on actual comprehension. In the present study, we investigated how brain images affect metacomprehension judgment and actual comprehension.

Experiment 1

In Experiment 1, we investigated the effect of brain images on metacomprehension judgment by comparing reading text only with reading text accompanied by brain images. We expected that metacomprehension judgments for text accompanied by brain images would be higher than those for text only. Additionally, if a belief about brain images were to affect metacomprehension judgment, readers might perceive text with brain images as more comprehensible before reading, and this judgment might not change after reading. In contrast, performance on a comprehension test might not differ between conditions if complex diagrams did not actually help readers to understand the text.

Method

Participants and design

Sixty undergraduate students participated in the experiment. All participants were native Japanese speakers. A 2 (format: text only, brain image) × 2 (judgment type: prereading, postreading) mixed design was used. Participants were randomly assigned to the text-only or brain image condition, with 30 participants assigned to each condition.

Materials

The learning material consisted of a booklet containing practice and critical texts, diagrams, comprehension ratings, and comprehension tests. We used one practice text (brain activity when telling a lie) and one critical expository text (brain activity in patients with depression). The practice text consisted of one paragraph, and the critical text consisted of six paragraphs. Each paragraph was approximately 300 characters in length. Under the text-only condition, the practice text and each of the six paragraphs of the critical text were printed one paragraph per page. Under the brain image condition, the practice text and each of the six paragraphs of the critical text were printed on a single page together with the corresponding brain image. The brain images depicted the brain activity discussed in the text and presented no information that was not mentioned in the text (Fig. 1). High brain activation was indicated in red, and low activation in blue. The locations of the brain areas shown across the six pages were not necessarily anatomically accurate, because all of the brain images were presented in a unified side-view format. Comprehension was tested using one detail and one inference multiple-choice question on the practice trial and two detail and two inference multiple-choice questions for each paragraph (a total of 24 questions) on the critical trial. Answers to the detail questions required memory of the textual content, whereas the inference questions could not be answered on the basis of specific information in the text but required inference.
Fig. 1

Examples of diagrams (the captions were translated from Japanese into English). a Example of a brain image. b Example of a bar graph

Procedure

Participants were tested in one group. First, participants completed the practice trial. Although the practice and critical trials used the same condition, participants were not informed about the text format. Before reading the text, participants were asked, “How well do you think you understand brain activity when someone is telling a lie?” Answers were given on a scale ranging from 0 (very poorly) to 100 (very well). Participants had 5 min to read the practice text only or the practice text with the brain images, after which they were asked, “How well do you think you understood the text?” Answers were given on a scale ranging from 0 (very poorly) to 100 (very well). Then the participants completed the comprehension test for the practice text.

Next, participants completed the critical trial. Although participants were again not informed about the text format, they were familiar with it from the practice trial. Participants were asked, “How well do you think you understand brain activity in patients with depression?” After the prereading judgment, participants read the critical text, either alone or accompanied by the brain images, according to their condition. The text was presented paragraph by paragraph, and the time allowed for reading each paragraph was 5 min. After reading all six paragraphs, participants rated their comprehension of the critical text and then completed the comprehension test for the text.

Results and discussion

We initially conducted t-tests to evaluate differences between the two conditions in practice trial comprehension test performance and metacomprehension judgments (Table 1). One participant did not respond to the question eliciting a postreading judgment and was excluded from the analysis. There were no differences between the two conditions in comprehension test performance, prereading judgments, or postreading judgments for the practice trial, ts < 1.43, ps > .15.
Table 1

Results of the practice and critical trials in Experiment 1

Condition     Practice Trial                                                   Critical Trial
              Comprehension Test   Prereading     Postreading                  Detail        Inference
              Performance          Judgment       Judgment                     Questions     Questions
Text only     1.40 (0.10)          49.43 (2.64)   62.03 (4.24)                 .60 (0.04)    .46 (0.03)
Brain image   1.23 (0.10)          54.33 (2.22)   68.77 (2.87)                 .65 (0.04)    .55 (0.03)
Note. Values are presented as means (with standard errors of the means in parentheses). Comprehension test performance on the practice trial (the combined score on the detail and inference questions) was used as a measure of comprehension, with values ranging from 0 to 2. Performance on the detail and inference questions on the critical trial is the proportion of questions answered correctly.

Comprehension test performance

We conducted a 2 (format: text only, brain image) × 2 (test type: detail, inference) mixed ANOVA to evaluate differences in comprehension test performance (Table 1). The main effect of format and the interaction between format and test type were not significant, F(1, 58) = 3.17, p = .08, ηp² = .005; F(1, 58) = 0.47, p = .50, ηp² = .008, whereas the main effect of test type was significant, F(1, 58) = 27.87, p < .001, ηp² = .33, with higher scores on the detail questions than on the inference questions. Thus, performance on the comprehension test did not differ significantly between the brain image and text-only conditions. This result is consistent with the findings reported by Butcher (2006): the brain images, which were complex diagrams, did not help readers to understand the text.
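For readers who wish to see how such an analysis is typically set up, the sketch below reproduces the structure of this 2 × 2 mixed ANOVA (format between subjects, test type within subjects) in Python. It is a minimal illustration, not the authors' analysis code; the use of the pingouin library, the long-format layout, and all column names and values are assumptions.

```python
# Minimal sketch of a 2 (format: between) x 2 (test type: within) mixed ANOVA,
# assuming a long-format table with one row per participant x test type.
# Column names and data are hypothetical; this is not the authors' code.
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8],
    "format": ["text_only"] * 8 + ["brain_image"] * 8,
    "test_type": ["detail", "inference"] * 8,
    "score": [.60, .45, .55, .50, .70, .40, .65, .55,
              .65, .55, .60, .50, .75, .60, .70, .45],
})

# mixed_anova reports F, p, and partial eta squared (np2) for the
# between factor, the within factor, and their interaction.
aov = pg.mixed_anova(data=df, dv="score", within="test_type",
                     subject="participant", between="format")
print(aov[["Source", "DF1", "DF2", "F", "p-unc", "np2"]])
```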

Metacomprehension judgment

Using a 2 (format: text only, brain image) × 2 (judgment type: prereading, postreading) mixed ANOVA, we evaluated differences in metacomprehension judgment between the two conditions (Fig. 2). The main effect of judgment type and the interaction between format and judgment type were not significant, F(1, 58) = 1.84, p = .18, ηp² = .03; F(1, 58) = 0.06, p = .81, ηp² = .001, whereas the main effect of format was significant, F(1, 58) = 6.22, p < .05, ηp² = .08. This result indicates that metacomprehension judgments under the brain image condition were higher than those under the text-only condition; that is, readers perceived that they understood the text better when it was presented with brain images than when it was presented as text only. Given that the practice and critical trials used the same conditions and that prereading judgments during the practice trial did not differ between the two conditions, we can conclude that this effect resulted from the presentation of brain images. Because no difference was found in actual comprehension between the brain image and text-only conditions, metacomprehension judgments diverged from actual comprehension. Furthermore, given that no difference was observed between prereading and postreading judgments, readers appear to have believed, even before reading, that brain images would enhance their comprehension, and this belief was maintained during reading even though they did not actually understand the material better than readers under the text-only condition.
Fig. 2

Metacomprehension judgments under each condition in Experiment 1 (error bars represent the standard errors)

Absolute metacomprehension accuracy

We examined absolute metacomprehension accuracy, which is the discrepancy between metacomprehension judgment and actual comprehension. Accuracy was calculated as the square root of the squared difference between the postreading judgment and total comprehension test performance (Mengelkamp & Bannert, 2010). Thus, the maximum value is 1, and 0 indicates perfectly accurate metacomprehension. On the basis of a t-test, accuracy under the text-only condition (M = .19, SE = 0.03) was equal to that under the brain image condition (M = .17, SE = 0.03), t(58) = 0.61, p = .55, d = 0.16, suggesting that the presentation of brain images did not improve absolute metacomprehension accuracy.
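Because squaring a difference and then taking the square root simply yields its magnitude, the measure reduces to an absolute deviation. Written out (a minimal formalization; the rescaling of the judgment J from the 0–100 scale to a proportion, to match test performance P on a 0–1 scale, is our reading of the reported 0–1 range):

$$\text{accuracy} = \sqrt{\left(\frac{J}{100} - P\right)^{2}} = \left|\frac{J}{100} - P\right|$$

For example, a participant who rated their comprehension at 70 but answered 50 % of the questions correctly would receive an accuracy score of |.70 − .50| = .20.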

Experiment 2

In Experiment 1, metacomprehension judgments under the brain image condition were higher than those under the text-only condition. However, given the belief that a multimedia presentation of material is more comprehensible than a text-only presentation (Serra & Dunlosky, 2010), it is possible that the multimedia format alone could explain the higher metacomprehension judgments under the brain image condition. Therefore, in Experiment 2, we investigated whether the format of the diagram (i.e., brain image vs. bar graph) affects metacomprehension judgment. Additionally, we investigated the effect of brain images on the credibility of the text. We hypothesized that readers of the text with brain images would rate the text as more credible and would express more confidence in their comprehension than would readers of the text with bar graphs.

Method

Participants and design

Forty-six undergraduate students participated in the experiment. All participants were native Japanese speakers. Three participants were eliminated because they did not complete the critical reading task. A 2 (format: bar graph, brain image) × 2 (judgment type: prereading, postreading) mixed-factors design was used. Participants were randomly assigned to the bar graph (20 participants) or brain image (23 participants) condition.

Materials

The learning materials consisted of a booklet containing practice and critical texts, diagrams, comprehension ratings, and comprehension tests. The texts were the same as those in Experiment 1. Under the bar graph condition, the practice text and one corresponding bar graph were printed on one page, and six critical paragraphs and six bar graphs corresponding to these paragraphs were printed on six separate pages. The material under the brain image condition was the same as in Experiment 1. The bar graphs and brain images presented the same information about brain activity, with high activation shown in red and low activation shown in blue (Fig. 1). The comprehension tests were the same as those used in Experiment 1.

Procedure

The procedure was the same as in Experiment 1, except in the following respects. Participants under the bar graph condition read the text accompanied by bar graphs, and those under the brain image condition read the same text accompanied by brain images. After reading the six paragraphs of the critical text, participants were asked to judge their comprehension, as in Experiment 1, as well as the credibility of the text, using a scale ranging from 0 (very poor) to 100 (excellent).

Results and discussion

On the basis of t-tests that evaluated differences in comprehension test performance and metacomprehension judgments between the two conditions in the practice trial (Table 2), comprehension test performance and prereading judgment did not differ between the bar graph and brain image conditions, ts < 1.47, ps > .14. In contrast, postreading judgment was higher in the brain image condition than in the bar graph condition, t(41) = 2.78, p < .01, d = 0.87.
Table 2

Results of practice and critical trials in Experiment 2

Condition     Practice Trial                                                   Critical Trial
              Comprehension Test   Prereading     Postreading                  Detail       Inference     Credibility
              Performance          Judgment       Judgment                     Questions    Questions     of Text
Bar graph     1.20 (0.09)          58.80 (3.33)   64.15 (5.25)                 .82 (0.03)   .65 (0.03)    55.50 (4.58)
Brain image   1.26 (0.09)          62.65 (2.31)   79.70 (2.53)                 .80 (0.05)   .71 (0.03)    68.22 (3.36)
Note. Values are presented as means (with standard errors of the means). The value ranges were the same as in Experiment 1 (see Table 1).

Comprehension test performance

Using a 2 (format: bar graph, brain image) × 2 (test type: detail, inference) mixed ANOVA, we evaluated differences in comprehension test performance (Table 2). The main effect of format and the interaction between format and test type were not significant, F(1, 41) = 0.29, p = .59, ηp² = .007; F(1, 41) = 2.33, p = .14, ηp² = .05, whereas the main effect of test type was significant, F(1, 41) = 24.70, p < .001, ηp² = .38. As in Experiment 1, performance on the detail questions was higher than that on the inference questions. Additionally, comprehension test scores were equal under the bar graph and brain image conditions. Thus, the brain images were not more helpful than the bar graphs for readers’ understanding of the text.

Credibility of the text

With regard to the credibility of the text, we conducted a t-test to evaluate differences in the responses between the two conditions (Table 2). The format had a significant effect on the credibility rating, t(41) = 2.20, p < .05, d = 0.69. Specifically, the credibility of the text was rated higher when accompanied by brain images than when it was presented with bar graphs. This finding is consistent with results reported by McCabe and Castel (2008) and indicates that brain images tend to be perceived as more credible than bar graphs.

Metacomprehension judgment

Differences in comprehension ratings between the conditions were evaluated using a 2 (format: bar graph, brain image) × 2 (judgment type: prereading, postreading) mixed ANOVA (Fig. 3). The main effect of judgment type and the interaction between format and judgment type were not significant, F(1, 41) = 0.09, p = .76, ηp² = .002; F(1, 41) = 2.57, p = .12, ηp² = .06, whereas the main effect of format was significant, F(1, 41) = 4.82, p < .05, ηp² = .11. Thus, in both the prereading and postreading estimations, readers exposed to the brain images judged their comprehension as better than did those who saw the bar graphs, as in Experiment 1. Because no difference in comprehension test performance was found between the brain image and bar graph conditions, metacomprehension judgments clearly conflicted with actual comprehension.
Fig. 3

Metacomprehension judgments under each condition in Experiment 2 (error bars represent the standard errors)

Absolute metacomprehension accuracy

Absolute metacomprehension accuracy was calculated in the same way as in Experiment 1, and differences in accuracy between the two conditions were evaluated by t-test. Accuracy under the bar graph condition (M = .22, SE = .03) was equal to that under the brain image condition (M = .16, SE = .02), t(41) = 1.31, p = .20, d = 0.41. Thus, the bias in metacomprehension judgment occurred regardless of the diagram format, and the difference in format did not affect absolute metacomprehension accuracy. As in Experiment 1, the presentation of brain images did not improve accuracy.
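For concreteness, the sketch below shows how the accuracy scores compared here could be computed and tested. It is a hypothetical reconstruction, not the authors' analysis code: the function name, the 0–100 judgment scale, the proportion-correct performance scale, and all values are assumptions.

```python
# Minimal sketch: absolute metacomprehension accuracy and a between-groups t-test.
# Judgments are assumed to be on a 0-100 scale and test performance a proportion
# correct (0-1); all values here are made up for illustration.
import numpy as np
from scipy import stats

def absolute_accuracy(judgments, performance):
    """sqrt((J/100 - P)^2), i.e. |J/100 - P|; 0 means perfectly calibrated."""
    j = np.asarray(judgments, dtype=float) / 100.0
    p = np.asarray(performance, dtype=float)
    return np.sqrt((j - p) ** 2)

bar_graph = absolute_accuracy([65, 80, 55, 70], [.75, .50, .70, .60])
brain_image = absolute_accuracy([80, 85, 75, 90], [.70, .65, .80, .75])

# Independent-samples t-test between the two format conditions.
t_stat, p_val = stats.ttest_ind(bar_graph, brain_image)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")
```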

General discussion

In the present study, we investigated how brain images affect metacomprehension judgment and actual comprehension. In Experiment 1, metacomprehension judgments by participants who were exposed to brain images were higher than those by participants who saw text only. In Experiment 2, metacomprehension judgments by participants who saw brain images were higher than those by participants who viewed bar graphs. These effects were evident in prereading ratings and were not modified by actual reading. Additionally, text that was accompanied by brain images was perceived as being more credible than text that was accompanied by bar graphs. Thus, the format of the neuroscientific information affected not only the metacomprehension judgment about the text, but also the credibility of the information.

On the other hand, no differences in comprehension test performance were observed between the conditions in Experiments 1 and 2, indicating that the brain images did not help readers to understand the text. Furthermore, metacomprehension judgment differed from actual comprehension, indicating that participants were not accurately gauging their actual comprehension when they offered judgments of their comprehension. Although brain images include information that can assist in the comprehension of neuroscientific data (e.g., location and extent of activation), the interpretation of this information requires some expertise in neuroscience. Because our participants were not experts in neuroscience, the brain images did not facilitate their comprehension.

Metacomprehension judgments and perceived credibility may influence each other, such that beliefs about credibility may lead to higher ratings of comprehension and high ratings of comprehension may increase the credibility of neuroscientific information. Indeed, metacomprehension judgments at prereading and postreading were associated with ratings of credibility (prereading, r = .34, p < .05; postreading, r = .55, p < .001). However, the present study did not examine this relationship, which is a topic for future research.

Additionally, absolute metacomprehension accuracy under the brain image condition did not differ from that under the text-only or bar graph conditions. This result suggests that presenting brain images, which are believed to improve comprehension, did not improve absolute metacomprehension accuracy. The bias in metacomprehension judgments occurred regardless of the text format, and participants did not judge their comprehension on the basis of their actual comprehension. Thus, readers might not have been able to use learning strategies effectively for neuroscientific information despite the presence of brain images.

In conclusion, the present study suggests that reading text presented along with brain images leads to higher metacomprehension judgments, as compared with reading text only or text presented with bar graphs. These data mirror those on the credibility of neuroscientific information. However, metacomprehension judgments did not reflect actual comprehension, and we found no difference in this regard according to format. In addition, absolute metacomprehension accuracy did not improve when brain images were presented. Although brain images tend to be preferred over other formats for the presentation of neuroscientific information, they may not be as useful as laypeople believe for this type of learning.

References

  1. Anderson, M. C. M., & Thiede, K. W. (2008). Why do delayed summaries improve metacomprehension accuracy? Acta Psychologica, 128, 110–118.
  2. Butcher, K. R. (2006). Learning from text with diagrams: Promoting mental model development and inference generation. Journal of Educational Psychology, 98, 182–197.
  3. Carlson, R., Chandler, P., & Sweller, J. (2003). Learning and understanding science instructional material. Journal of Educational Psychology, 95, 629–640.
  4. Dunlosky, J., & Rawson, K. A. (2012). Overconfidence produces underachievement: Inaccurate self evaluations undermine students’ learning and retention. Learning and Instruction, 22, 271–280.
  5. Harp, S. F., & Mayer, R. E. (1997). The role of interest in learning from scientific text and illustration: On the distinction between emotional interest and cognitive interest. Journal of Educational Psychology, 89, 92–102.
  6. Harp, S. F., & Mayer, R. E. (1998). How seductive details do their damage: A theory of cognitive interest in science learning. Journal of Educational Psychology, 90, 414–434.
  7. Illes, J., Kirschen, M. P., & Gabrieli, J. D. E. (2003). From neuroimaging to neuroethics. Nature Neuroscience, 6, 205.
  8. Iwaki, K. (2000). Comprehension of expository text: A line graph helps readers to build a situation model. The Japanese Journal of Educational Psychology, 48, 333–342.
  9. Keehner, M., Mayberry, L., & Fischer, M. H. (2011). Different clues from different views: The role of image format in public perceptions of neuroimaging results. Psychonomic Bulletin & Review, 18, 422–428.
  10. Levin, D. T., Momen, N., Drivdahl, S. B., & Simons, D. J. (2000). Change blindness blindness: The metacognitive error of overestimating change-detection ability. Visual Cognition, 7, 397–412.
  11. Mayer, R. E. (2009). Multimedia learning (2nd ed.). New York: Cambridge University Press.
  12. McCabe, D. P., & Castel, A. D. (2008). Seeing is believing: The effect of brain images on judgments of scientific reasoning. Cognition, 107, 343–352.
  13. Mengelkamp, C., & Bannert, M. (2010). Accuracy of confidence judgments: Stability and generality in the learning process and predictive validity for learning outcome. Memory & Cognition, 38, 441–451.
  14. Metcalfe, J. (2011). Desirable difficulties and studying in the region of proximal learning. In A. S. Benjamin (Ed.), Successful remembering and successful forgetting: A Festschrift in honor of Robert A. Bjork (pp. 259–276). New York: Psychology Press.
  15. Morein-Zamir, S., & Sahakian, B. J. (2010). Neuroethics and public engagement training needed for neuroscientists. Trends in Cognitive Sciences, 14, 49–51.
  16. Serra, M. J. (2010). Diagrams increase the recall of nondepicted text when understanding is also increased. Psychonomic Bulletin & Review, 17, 112–116.
  17. Serra, M. J., & Dunlosky, J. (2010). Metacomprehension judgments reflect the belief that diagrams improve learning from text. Memory, 18, 698–711.
  18. Smallman, H. S., & St. John, M. (2005). Naive realism: Misplaced faith in realistic displays. Ergonomics in Design, 13, 6–13.
  19. Thiede, K. W., Anderson, M. C. M., & Therriault, D. (2003). Accuracy of metacognitive monitoring affects learning of texts. Journal of Educational Psychology, 95, 66–73.

Copyright information

© Psychonomic Society, Inc. 2013

Authors and Affiliations

  • Kenji Ikeda (1, 3)
  • Shinji Kitagami (1)
  • Tomoyo Takahashi (1, 3)
  • Yosuke Hattori (2, 3)
  • Yuichi Ito (1, 3)

  1. Graduate School of Environmental Studies, Nagoya University, Nagoya, Japan
  2. The University of Tokyo, Tokyo, Japan
  3. Japan Society for the Promotion of Science, Tokyo, Japan
