
Knowledge about others’ knowledge: how accurately do teachers estimate their students’ test scores?

  • Published in: Metacognition and Learning

This article has been updated

Abstract

Besides learners’ awareness of their own knowledge, a growing number of studies also emphasise the importance of teachers’ awareness of how well their students perform, so that teachers can adjust their teaching strategies accordingly. After first proposing a multi-layered metacognitive regulatory model of teaching, we investigated whether estimation type, item difficulty, and class performance affect teachers’ judgment accuracies ([JAs], i.e., score estimations). Teachers (N = 38) of 86 classes made item-by-item and overall estimations of their classes’ test scores (N = 2608 native Turkish-speaking sixth-graders) on a PISA-equivalent mathematics test developed in the earliest phase of the current long-term research project. The results showed that teachers’ item-by-item estimations fell below their classes’ actual performance, unlike their overall estimations. Teachers of low-performance classes were less accurate than those of high-performance classes. These teachers also showed the clearest underestimation for the easy questions, whereas teachers of high-performance classes overestimated their classes’ scores for the difficult questions. This dissociation implied that the teachers must primarily have used their perceptions of their classes (e.g., the classes’ existing performance) as a mnemonic judgment cue, rather than item difficulty as an external cue, when making their score estimations. The implications of the results are discussed in light of the existing literature, and suggestions for future research are given.
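The JA measure described above compares a teacher's estimated scores against the class's actual scores. One common way to summarise under- or overestimation is a signed bias (estimate minus actual); the following is a minimal illustrative sketch, not the authors' actual analysis, and the function name and numbers are hypothetical:

```python
def estimation_bias(estimated, actual):
    """Mean signed difference between a teacher's score estimates and
    the class's actual scores; negative values indicate underestimation."""
    assert len(estimated) == len(actual), "one estimate per item required"
    diffs = [e - a for e, a in zip(estimated, actual)]
    return sum(diffs) / len(diffs)

# Hypothetical item-by-item estimates vs. actual proportions correct:
item_bias = estimation_bias([0.50, 0.40, 0.70], [0.65, 0.55, 0.75])
print(round(item_bias, 2))  # negative -> the items were underestimated
```

Averaging signed differences (rather than absolute ones) preserves the direction of the miscalibration, which is what distinguishes the under- and overestimation patterns reported in the abstract.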


Figure 1
Figure 2


Change history

  • 26 January 2023

    In the abstract, all apostrophes appeared to have been replaced with question marks; this has been corrected.

Notes

  1. The model can also involve, at least at a theoretical level, a fourth possible regulation: the learners’ metacognitive regulation of their teachers, which is likewise not mentioned in Thiede and his colleagues’ model (2019). However, we did not explicitly include this theoretically possible regulation loop, which would run from the learner’s meta-level to the teacher’s object-level, mainly because students, unlike teachers, do not have a clearly defined supervisory role in educational settings. Nevertheless, learners may influence teachers’ meta-levels (i.e., their metacognitive goals regarding their teaching objectives) through, for instance, course evaluations and/or various forms of feedback; therefore, we show this as a bidirectional influence in the model rather than as a fourth regulatory loop (see the dashed arrows between the learner’s and teacher’s meta-levels in Fig. 1).

  2. The collaboration of school administrators and parents was so high that almost no parents withheld consent.

  3. As part of this long-term research project, a separate study on these students’ metacognitive monitoring performance, measured with type-2 signal detection theory’s calculation method along with their pre-test and immediate post-test score estimations, has been reported elsewhere (see Basokcu & Guzel, 2022). Since the present paper focuses primarily on teachers’ JAs, we did not repeat any of the findings regarding the students’ monitoring and score estimation performance here. We therefore refer readers curious about how sixth-grade mathematics learners behave metacognitively before, during, and immediately after the test to that earlier report.

  4. The students made algebraic calculations in the test booklet (e.g., volume calculations) to reach the correct answers to the short-answer questions. Hence, only the final answers given to the short-answer questions were scored for accuracy. Also, as indicated in the test booklet, all three sub-items of the true-false question had to be answered correctly for that question to count as accurately responded.
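The all-or-none rule for the three-part true-false item described above can be sketched as follows (an illustrative sketch, not the authors' scoring code; the function name and example responses are hypothetical):

```python
def score_true_false_item(responses, key):
    """All-or-none scoring: the true-false item counts as correct (1)
    only if every sub-item response matches the answer key."""
    if len(responses) != len(key):
        return 0
    return int(all(r == k for r, k in zip(responses, key)))

# Hypothetical answer key and student responses for the three sub-items:
key = [True, False, True]
print(score_true_false_item([True, False, True], key))  # 1: all sub-items correct
print(score_true_false_item([True, True, True], key))   # 0: one sub-item wrong
```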

  5. The teachers in the present study confirmed that neither they nor their students had taken part in the earlier test-development phase of the research project, whether as invigilators or as participants.

  6. Absenteeism on the test-taking day was almost nil, and the number of students who did not wish to complete the test was negligible.

  7. Note that the test was not prepared by the teachers themselves, and neither the students nor the teachers in the present study were told that the questions varied in difficulty.


Funding

This work was financed by The Scientific and Technological Research Council of Türkiye (TÜBİTAK), Grant No. 115K531.

Author information


Corresponding author

Correspondence to Mehmet Akif Güzel.

Ethics declarations

Conflict of interest

We have no known conflict of interest to disclose.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Güzel, M.A., Başokçu, T.O. Knowledge about others’ knowledge: how accurately do teachers estimate their students’ test scores? Metacognition Learning 18, 295–312 (2023). https://doi.org/10.1007/s11409-023-09333-2

