
Metacognition and Learning, Volume 13, Issue 3, pp 265–285

Confidence in performance judgment accuracy: the unskilled and unaware effect revisited

  • Marion Händel
  • Markus Dresel
Article

Abstract

Since its introduction in the late 1990s, the unskilled and unaware effect has motivated numerous follow-up studies. Low-performing students are assumed to provide inaccurate and overconfident performance judgments. However, research with second-order judgments (SOJs) indicates that they apparently possess some metacognitive awareness of this inaccuracy. The current study with 266 undergraduate students aimed to provide in-depth insights into both the reasons for (in)accurate performance judgments and the appropriateness of SOJs. We implemented a generalized linear mixed model (GLMM) approach to analyze item-specific performance judgments in the domain of mathematics at the person and item level. The analyses replicated the well-known effects. However, the GLMM analyses revealed that low-performing students’ lower confidence apparently did not reflect subjective awareness, given that these students made inappropriate SOJs (lower confidence in accurate than in inaccurate judgments). In addition, students’ self-generated explanations for their judgments indicated that low-performing students have difficulty recognizing that they possess the topic knowledge needed to solve an item, whereas high-performing students struggle to admit that they do not know the answer to a question. In sum, our results indicate that students at all performance levels have metacognitive weaknesses, which, however, differ depending on judgment accuracy.
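
To make the modeling approach concrete, the following sketch illustrates how an item-level accuracy analysis of this kind could be set up in R with the lme4 package cited in the reference list (Bates et al., 2015). It is a minimal illustration under stated assumptions, not the authors’ actual analysis script: the data frame and variable names (judgments, accurate, perf_level, person, item) are hypothetical, and the fixed-effects structure of the study’s models is not reported on this page.

    # Minimal sketch, not the authors' analysis code. Assumes long-format data
    # with one row per person-item combination and a binary indicator of whether
    # the performance judgment for that item was accurate. All variable and
    # data-frame names are hypothetical.
    library(lme4)  # Bates, Maechler, Bolker, & Walker (2015)

    # Logistic GLMM with crossed random intercepts for persons and items and
    # person-level performance as a fixed effect.
    fit <- glmer(
      accurate ~ perf_level + (1 | person) + (1 | item),
      data   = judgments,
      family = binomial
    )

    summary(fit)  # fixed effects on the logit scale plus person and item variance components

Crossed random effects of this kind allow judgment accuracy to be modeled at the level of single items while accounting for the fact that each response is simultaneously nested within a student and within an item.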

Keywords

Metacognitive judgments · Performance level · Accuracy · Item-specific judgments · Second-order judgments

Notes

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.

References

  1. Al-Harthy, I. S., Was, C. A., & Hassan, A. S. (2015). Poor performers are poor predictors of performance and they know it: can they improve their prediction accuracy? Journal of Global Research in Education and Social Science, 4, 93–100.
  2. Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48. https://doi.org/10.18637/jss.v067.i01
  3. Bol, L., & Hacker, D. J. (2001). A comparison of the effects of practice tests and traditional review on performance and calibration. The Journal of Experimental Education, 69, 133–151. https://doi.org/10.1080/00220970109600653
  4. Bol, L., & Hacker, D. J. (2012). Calibration research: where do we go from here? Frontiers in Psychology, 3. https://doi.org/10.3389/fpsyg.2012.00229
  5. Bol, L., Hacker, D. J., O’Shea, P., & Allen, D. (2005). The influence of overt practice, achievement level, and explanatory style on calibration accuracy and performance. The Journal of Experimental Education, 73, 269–290.
  6. Bol, L., Riggs, R., Hacker, D. J., Dickerson, D., & Nunnery, J. (2010). The calibration accuracy of middle school students in math classes. Journal of Research in Education, 21, 81–96.
  7. Buratti, S., Allwood, C. M., & Kleitman, S. (2013). First- and second-order metacognitive judgments of semantic memory reports: the influence of personality traits and cognitive styles. Metacognition and Learning, 8, 79–102. https://doi.org/10.1007/s11409-013-9096-5
  8. Burson, K. A., Larrick, R. P., & Klayman, J. (2006). Skilled or unskilled, but still unaware of it: how perceptions of difficulty drive miscalibration in relative comparisons. Journal of Personality and Social Psychology, 90, 60–77. https://doi.org/10.1037/0022-3514.90.1.60
  9. de Bruin, A. B. H., Kok, E. M., Lobbestael, J., & de Grip, A. (2017). The impact of an online tool for monitoring and regulating learning at university: overconfidence, learning strategy, and personality. Metacognition and Learning, 12, 21–43. https://doi.org/10.1007/s11409-016-9159-5
  10. Dickhäuser, O., & Reinhard, M.-A. (2006). Daumenregel oder Kopfzerbrechen? [Rule of thumb or racking one’s brains?]. Zeitschrift für Entwicklungspsychologie und Pädagogische Psychologie, 38, 62–68. https://doi.org/10.1026/0049-8637.38.2.62
  11. Dinsmore, D. L., & Parkinson, M. M. (2013). What are confidence judgments made of? Students’ explanations for their confidence ratings and what that means for calibration. Learning and Instruction, 24, 4–14. https://doi.org/10.1016/j.learninstruc.2012.06.001
  12. Dunlosky, J., & Lipko, A. R. (2007). Metacomprehension. Current Directions in Psychological Science, 16, 228–232. https://doi.org/10.1111/j.1467-8721.2007.00509.x
  13. Dunlosky, J., Serra, M. J., Matvey, G., & Rawson, K. A. (2005). Second-order judgments about judgments of learning. The Journal of General Psychology, 132, 335–346.
  14. Dutke, S., Barenberg, J., & Leopold, C. (2010). Learning from text: knowing the test format enhanced metacognitive monitoring. Metacognition and Learning, 5, 195–206. https://doi.org/10.1007/s11409-010-9057-1
  15. Efklides, A. (2011). Interactions of metacognition with motivation and affect in self-regulated learning: the MASRL model. Educational Psychologist, 46, 6–25. https://doi.org/10.1080/00461520.2011.538645
  16. Egan, J. P. (1975). Signal detection theory and ROC analysis. New York: Academic Press.
  17. Ehrlinger, J., Johnson, K., Banner, M., Dunning, D., & Kruger, J. (2008). Why the unskilled are unaware: further explorations of (absent) self-insight among the incompetent. Organizational Behavior and Human Decision Processes, 105, 98–121. https://doi.org/10.1016/j.obhdp.2007.05.002
  18. Erickson, S., & Heit, E. (2015). Metacognition and confidence: comparing math to other academic subjects. Frontiers in Psychology, 6, 1–10. https://doi.org/10.3389/fpsyg.2015.00742
  19. Fiske, S. T., & Taylor, S. E. (2013). Social cognition. Los Angeles: Sage.
  20. Foster, N. L., Was, C. A., Dunlosky, J., & Isaacson, R. M. (2016). Even after thirteen class exams, students are still overconfident: the role of memory for past exam performance in student predictions. Metacognition and Learning, 12, 1–19. https://doi.org/10.1007/s11409-016-9158-6
  21. Fritzsche, E. S., Händel, M., & Kröner, S. (2018). What do second-order judgments tell us about low-performing students’ metacognitive awareness? Metacognition and Learning, 13, 159–177. https://doi.org/10.1007/s11409-018-9182-9
  22. Gigerenzer, G., Hoffrage, U., & Kleinbölting, H. (1991). Probabilistic mental models: a Brunswikian theory of confidence. Psychological Review, 98, 506–528.
  23. Green, D. M., & Swets, J. A. (1966). Signal detection theory and psychophysics. New York: Wiley.
  24. Hacker, D. J., Bol, L., & Bahbahani, K. (2008). Explaining calibration accuracy in classroom contexts: the effects of incentives, reflection, and explanatory style. Metacognition and Learning, 3, 101–121. https://doi.org/10.1007/s11409-008-9021-5
  25. Händel, M., & Fritzsche, E. S. (2016). Unskilled but subjectively aware: metacognitive monitoring ability and respective awareness in low-performing students. Memory & Cognition, 44, 229–241. https://doi.org/10.3758/s13421-015-0552-0
  26. Johnson, A., Smyers, J., & Purvis, R. (2012). Improving exam performance by metacognitive strategies. Psychology Learning and Teaching, 11, 180–185. https://doi.org/10.2304/plat.2012.11.2.180
  27. Koriat, A. (1997). Monitoring one’s own knowledge during study: a cue-utilization approach to judgments of learning. Journal of Experimental Psychology: General, 126, 349–370.
  28. Koriat, A., Nussinson, R., Bless, H., & Shaked, N. (2008). Information-based and experience-based metacognitive judgments: evidence from subjective confidence. In J. Dunlosky & R. A. Bjork (Eds.), Handbook of memory and metamemory (pp. 117–135). New York: Psychology Press.
  29. Kröner, S., & Biermann, A. (2007). The relationship between confidence and self-concept – towards a model of response confidence. Intelligence, 35, 580–590. https://doi.org/10.1016/j.intell.2006.09.009
  30. Krueger, J., & Mueller, R. A. (2002). Unskilled, unaware, or both? The better-than-average heuristic and statistical regression predict errors in estimates of own performance. Journal of Personality and Social Psychology, 82, 180–188. https://doi.org/10.1037//0022-3514.82.2.180
  31. Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77, 1121–1134.
  32. Lienert, G. A., & Hofer, M. (1972). MTAS. Mathematiktest für Abiturienten und Studienanfänger [Mathematics test for high-school graduates and freshmen]. Göttingen: Hogrefe.
  33. Marsh, H. W. (1986). Self-serving effect (bias?) in academic attributions: its relation to academic achievement and self-concept. Journal of Educational Psychology, 78, 190–200.
  34. Marsh, H. W., & Craven, R. G. (2006). Reciprocal effects of self-concept and performance from a multidimensional perspective: beyond seductive pleasure and unidimensional perspectives. Perspectives on Psychological Science, 1, 133–163. https://doi.org/10.1111/j.1745-6916.2006.00010.x
  35. Merkle, E. C. (2009). The disutility of the hard-easy effect in choice confidence. Psychonomic Bulletin & Review, 16, 204–213. https://doi.org/10.3758/PBR.16.1.204
  36. Miller, T. M., & Geraci, L. (2011). Unskilled but aware: reinterpreting overconfidence in low-performing students. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37, 502–506. https://doi.org/10.1037/a0021802
  37. Narciss, S., Koerndle, H., & Dresel, M. (2011). Self-evaluation accuracy and satisfaction with performance: are there affective costs or benefits of positive self-evaluation bias? International Journal of Educational Research, 50, 230–240. https://doi.org/10.1016/j.ijer.2011.08.004
  38. Nelson, T. O., & Narens, L. (1990). Metamemory: a theoretical framework and new findings. The Psychology of Learning and Motivation, 26, 125–141.
  39. Nietfeld, J. L., Cao, L., & Osborne, J. W. (2005). Metacognitive monitoring accuracy and student performance in the postsecondary classroom. The Journal of Experimental Education, 74, 7–28.
  40. R Development Core Team (2012). R: a language and environment for statistical computing. Vienna, Austria. Retrieved from http://www.R-project.org/
  41. Roelle, J., Schmidt, E. M., Buchau, A., & Berthold, K. (2017). Effects of informing learners about the dangers of making overconfident judgments of learning. Journal of Educational Psychology, 109, 99–117. https://doi.org/10.1037/edu0000132
  42. Saenz, G. D., Geraci, L., Miller, T. M., & Tirso, R. (2017). Metacognition in the classroom: the association between students’ exam predictions and their desired grades. Consciousness and Cognition, 51, 125–139. https://doi.org/10.1016/j.concog.2017.03.002
  43. Schaefer, P. S., Williams, C. C., Goodie, A. S., & Campbell, W. K. (2004). Overconfidence and the big five. Journal of Research in Personality, 38, 473–480. https://doi.org/10.1016/j.jrp.2003.09.010
  44. Schraw, G. (2009). Measuring metacognitive judgments. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Handbook of metacognition in education (pp. 415–429). New York: Routledge.
  45. Schraw, G., & Nietfeld, J. L. (1998). A further test of the general monitoring skill hypothesis. Journal of Educational Psychology, 90, 236–248.
  46. Schraw, G., Dunkle, M. E., Bendixen, L. D., & DeBacker Roedel, T. (1995). Does a general monitoring skill exist? Journal of Educational Psychology, 87, 433–444.
  47. Schraw, G., Kuch, F., & Gutierrez, A. P. (2013). Measure for measure: calibrating ten commonly used calibration scores. Learning and Instruction, 24, 48–57. https://doi.org/10.1016/j.learninstruc.2012.08.007
  48. Serra, M. J., & DeMarree, K. G. (2016). Unskilled and unaware in the classroom: college students’ desired grades predict their biased grade predictions. Memory & Cognition, 44, 1127–1137. https://doi.org/10.3758/s13421-016-0624-9
  49. Shake, M. C., & Shulley, L. J. (2014). Differences between functional and subjective overconfidence in postdiction judgments of test performance. Electronic Journal of Research in Educational Psychology, 12, 263–282. https://doi.org/10.14204/ejrep.33.14005
  50. Skaalvik, E. M. (1994). Attribution of perceived achievement in school in general and in maths and verbal areas: relations with academic self-concept and self-esteem. British Journal of Educational Psychology, 64, 133–143.
  51. Thiede, K. W., Griffin, T. D., Wiley, J., & Anderson, M. C. M. (2010). Poor metacomprehension accuracy as a result of inappropriate cue use. Discourse Processes, 47, 331–362. https://doi.org/10.1080/01638530902959927

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Psychology, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, Germany
  2. Department of Psychology, University of Augsburg, Augsburg, Germany
