What do second-order judgments tell us about low-performing students’ metacognitive awareness?
According to the unskilled-and-unaware effect (Kruger and Dunning 1999), low-performing students tend to overestimate their performance. When metacognitive judgments are differentiated into performance judgments (PJs) and second-order judgments (SOJs), low-performing students' PJs tend to be inflated, while their SOJs are usually lower than those of high-performing students (Händel and Fritzsche 2016; Miller and Geraci 2011), which suggests some level of awareness. The present study investigated whether low performers' lower SOJs actually indicate metacognitive awareness. We examined SOJs following adequate and inadequate PJs and asked whether low performers give lower SOJs by default or whether their SOJs vary with PJ adequacy to a similar degree as those of high performers (which would indicate metacognitive awareness). We addressed this issue by disentangling student and item effects via generalized linear mixed models. Reanalyzing the data of Händel and Fritzsche (2016) from N = 116 students, we found that SOJs depended both on the student who provided the SOJ and on the item for which it was provided. Overall, SOJs depended on the PJs and on the interaction of performance and PJs, but not on performance itself. Separate analyses by performance level revealed that low-performing students showed less awareness, as indicated by a non-significant interaction of performance and PJs. Thus, it takes mixed models to tell the whole story of low-performing students' lower SOJs.
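The crossed structure described above can be illustrated with a minimal simulation sketch: each binary second-order judgment depends on a fixed effect of PJ adequacy plus a random deviation for the judging student and one for the judged item. All parameter values, the item count, and the variable names here are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# 116 students as in the reanalyzed data set; the item count is purely illustrative
n_students, n_items = 116, 20

# Crossed random effects: every SOJ depends on who judges and on which item
u_student = rng.normal(0.0, 1.0, n_students)  # student-level deviations
w_item = rng.normal(0.0, 0.5, n_items)        # item-level deviations

# Illustrative fixed effects: intercept and an effect of PJ adequacy
beta0, beta_pj = -0.2, 1.0
pj_adequate = rng.integers(0, 2, (n_students, n_items))  # 1 = PJ matched actual performance

# Logistic link: probability of a high ("confident") SOJ per student-item cell
logit = beta0 + beta_pj * pj_adequate + u_student[:, None] + w_item[None, :]
p_high_soj = 1.0 / (1.0 + np.exp(-logit))
soj = rng.random((n_students, n_items)) < p_high_soj  # simulated binary SOJs
```

In practice such a model could be estimated with crossed random intercepts for students and items, e.g. via `lme4::glmer` in R or `statsmodels`' Bayesian mixed GLM classes in Python; the snippet above only shows the data-generating logic the abstract appeals to.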
Keywords: Metacognitive judgments · Second-order judgments · Generalized linear mixed models · Unskilled and unaware
This research was supported by a grant from the "Sonderfonds für wissenschaftliches Arbeiten an der Universität Erlangen-Nürnberg".
Compliance with ethical standards
Conflict of interest
The authors declare that they have no conflict of interest.