Confidence in performance judgment accuracy: the unskilled and unaware effect revisited
Since its introduction in the late 1990s, the unskilled and unaware effect has motivated numerous follow-up studies. According to this effect, low-performing students are assumed to provide inaccurate and overconfident performance judgments. However, as research with second-order judgments (SOJs) indicates, they apparently have some metacognitive awareness of this inaccuracy. The current study with 266 undergraduate students aimed to provide in-depth insights into both the reasons for (in)accurate performance judgments and the appropriateness of SOJs. We applied a generalized linear mixed model (GLMM) approach to study item-specific performance judgments in the domain of mathematics at the person and item level. The analyses replicated the well-known effects. However, the GLMM analyses revealed that low-performing students’ lower confidence apparently did not reflect subjective awareness, given that these students made inappropriate SOJs (lower confidence in accurate than in inaccurate judgments). In addition, students’ self-generated explanations for their judgments indicated that low-performing students have difficulty recognizing that they possess the topic knowledge to solve an item, whereas high-performing students struggle to admit that they do not know the answer to a question. In sum, our results indicate that students at all performance levels have some metacognitive weaknesses, which, however, manifest differently depending on judgment accuracy.
Keywords: Metacognitive judgments · Performance level · Accuracy · Item-specific judgments · Second-order judgments
Compliance with ethical standards
Conflict of interest
The authors declare that they have no conflict of interest.