
Uninformative anchoring effect in judgments of learning

Published in Metacognition and Learning

Abstract

This experimental study examined whether the uninformative anchoring effect on judgments of learning (JOLs), produced by anchor information that should be ignored, is eliminated through the learning experience. Before learning, participants were asked to predict whether their performance on an upcoming test would be higher or lower than an anchor value (80% in the high anchor condition or 20% in the low anchor condition). Experiments 1a and 1b obtained consistent results regardless of item difficulty: both pre- and post-study JOLs were higher in the high anchor condition than in the low anchor condition. Furthermore, participants in the high (vs. low) anchor condition made higher item-by-item JOLs during learning, and this anchoring effect was maintained throughout the learning process. In contrast, there was no significant difference in recall performance between the two conditions. Experiment 3 demonstrated that the uninformative anchoring effect was not eliminated by test experience obtained through a practice task completed before the anchoring information was presented. These findings suggest that uninformative anchoring biases JOLs and that its effects are not eliminated by the learning experience.


Figures 1–5 appear in the full article.


Data availability

All data associated with this study are publicly available on OSF and can be accessed at https://osf.io/6zrpg/.

Notes

  1. Exact means, standard deviations, and 95% CIs of the variables represented in the figures are reported in the Appendix.

  2. Unlike η2 and η2p, η2G can be compared across studies regardless of between- and within-subject designs, and is thus a more useful effect-size measure (for details, see Bakeman, 2005; Olejnik & Algina, 2003).

  3. Previous research has shown that analyses of aggregated data, such as item-by-item JOLs, can mask variability attributable to individuals and underestimate the true effect (Rouder & Lu, 2005). Therefore, this study conducted an additional analysis of item-by-item JOLs using a linear mixed-effects model that included differences between participants and word pairs as random effects (for details, see Supplementary material), which confirmed the anchoring effect on JOLs.

  4. An additional analysis of item-by-item JOLs using a linear mixed-effects model that included differences between participants and word pairs as random effects also showed a linear trend for the serial position effect (for details, see Supplementary material).

  5. Although the results for the global JOLs (i.e., pre- and post-study JOLs) and the item-by-item JOLs were inconsistent, a disconnect between global post-study JOLs and item-by-item JOLs is commonly observed (e.g., Hertzog et al., 2009).

  6. In this additional analysis, the anchor condition was coded as low anchor = −0.5 and high anchor = 0.5, and recall performance in the practice task was centered. The effects of the anchor condition and of performance in the practice task were both significant, β = 10.33, 95% CI [2.08, 18.57], p = .02, and β = 0.46, 95% CI [0.32, 0.59], p < .001, respectively.
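As an illustration of the analysis described in Notes 3 and 6, the sketch below simulates item-by-item JOL data and fits a linear mixed-effects model with crossed random effects for participants and word pairs, with the anchor condition effect-coded as −0.5/0.5 as in Note 6. This is a hypothetical minimal sketch using Python's statsmodels, not the authors' analysis code (their scripts are in the supplementary material); all variable names, sample sizes, and effect magnitudes here are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_item = 30, 12  # illustrative sample sizes, not the study's

# Anchor condition effect-coded as in Note 6: low = -0.5, high = +0.5
subj_anchor = np.where(np.arange(n_subj) % 2 == 0, -0.5, 0.5)
subj_re = rng.normal(0, 8, n_subj)   # random intercepts for participants
item_re = rng.normal(0, 5, n_item)   # random intercepts for word pairs

# Simulate item-by-item JOLs with a (hypothetical) anchor effect of 10 points
rows = []
for s in range(n_subj):
    for i in range(n_item):
        jol = 50 + 10 * subj_anchor[s] + subj_re[s] + item_re[i] + rng.normal(0, 10)
        rows.append({"subject": s, "item": i, "anchor": subj_anchor[s], "jol": jol})
df = pd.DataFrame(rows)

# Crossed random effects in statsmodels: place all rows in a single group and
# model participants and word pairs as variance components
df["one"] = 1
model = smf.mixedlm(
    "jol ~ anchor", df,
    groups=df["one"],
    re_formula="0",
    vc_formula={"subject": "0 + C(subject)", "item": "0 + C(item)"},
)
fit = model.fit()
print(fit.params["anchor"])  # fixed-effect estimate of the anchoring effect
```

In R, the equivalent specification would be `lmer(jol ~ anchor + (1 | subject) + (1 | item))`; the single-group variance-component formulation above is the standard statsmodels workaround for crossed (rather than nested) random effects.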


Acknowledgements

This work was supported by JSPS KAKENHI, Grant Number 22K03089 (to Kenji Ikeda).

Author information

Correspondence to Kenji Ikeda.

Ethics declarations

Ethics approval

This study was conducted in accordance with APA ethical standards and was approved by the Tokai Gakuin University ethics committee.

Consent to participate

Before participation, all participants provided informed consent to take part in the experiment.

Conflicts of interest

The author has no conflicts of interest.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary file 1 (PDF 330 kb)

Appendix

Table 3 Means, standard deviations, and 95% confidence intervals of pre- and post-study JOLs and calibrations in each experiment

Table 4 Means, standard deviations, and 95% confidence intervals of item-by-item JOLs and calibrations in each experiment

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Cite this article

Ikeda, K. Uninformative anchoring effect in judgments of learning. Metacognition Learning 18, 527–548 (2023). https://doi.org/10.1007/s11409-023-09339-w
