Abstract
People can update their misconceptions or false beliefs by learning from corrective sources. However, research has shown that people vary drastically in the extent to which they learn from feedback and update their false beliefs accordingly. This past work drew attention to cognitive and motivational factors, such as cognitive rigidity and closed-mindedness, as inhibitors of belief updating. Here we examined a novel epistemic structure, misplaced certainty—a subjective sense of certainty while recognizing uncertainty in oneself or most people (e.g., “I feel certain although I recognize that X is technically uncertain, or that it is technically uncertain according to most people”)—as a unique predictor of lower belief updating. In a preregistered study, we hypothesized that those with high chronic misplaced certainty would be less likely to learn from feedback and revise their misconceptions in a feedback-learning task. In our analyses, we controlled for well-placed certainty—certainty while recognizing no doubt in oneself or most others. We also controlled for variables associated with closed-minded cognition. Consistent with our predictions, those with high misplaced certainty were less likely to revise their false beliefs in response to corrective feedback. In contrast, those with high well-placed certainty were more likely to learn from corrective feedback and revise their false beliefs. By shedding light on the nuances of different forms of subjective certainty, the present work aims to pave the way for further research on epistemic factors in the perseverance and correction of false beliefs.
Updating misconceptions or false beliefs is critical for staying well informed in the face of misinformation. Misconceptions prevail in many life domains, including health, politics, history, business, and the environment. However, individuals vary in how much they use corrective information to revise their false beliefs (Sinclair et al., 2020). Understanding the factors underlying such variance in openness to correction is critical for fostering learning from corrective feedback and thus addressing the personal and social costs of staying misinformed (Lewandowsky et al., 2012).
Several cognitive and motivational factors underlie resistance to updating false beliefs. Cognitively, integrating corrective (vs. affirming) feedback into memory requires attention and working memory capacity, as initially embraced beliefs typically prevail in memory regardless of accuracy (e.g., Winkielman et al., 2012). Those with limited working memory capacity are less likely to update false beliefs (Brydges et al., 2018). Other research has highlighted cognitive rigidity and motivational factors, showing that firmly held beliefs (e.g., partisanship; Ecker & Ang, 2018) or closed-minded ideological tendencies such as right-wing authoritarianism (Sinclair et al., 2020) predict resistance to changing false beliefs.
Although this work identified cognitive and motivational predictors, little research has examined the epistemic predictors of poor updating. Here we examine a particularly underinvestigated epistemic structure, misplaced certainty, as a unique contributor to the perseverance of false beliefs.
Misplaced certainty
Mitzen and Schweller (2009) defined misplaced certainty as holding an unwarranted certainty despite disconfirming evidence. Accordingly, with misplaced certainty, one aims to take control of a deeply uncertain state. Misplaced certainty is a motivated bias with cognitive and affective roots, serving the basic need to reduce uncertainty. Mitzen and Schweller (2011) theorized that misplaced certainty is common in international politics, as leaders must make decisions in the face of perpetual uncertainty. They suggested that leaders who embrace misplaced certainty preserve their confidence in the presence of new information and thus decide and act in potentially unfortunate ways. By fostering such decisive action, misplaced certainty may operate as a driver of conflict and war.
Psychological research found that people sometimes take an epistemic shortcut and embrace certainty even for things that are uncertain or unknowable (e.g., “I know that the pandemic will be over soon”). Notably, such a misplaced sense of certainty differs from the sense of certainty perceived to be confirmed by outside information or most people (i.e., well-placed certainty). Specifically, certainty is misplaced when it tackles a topic that the person acknowledges they or most other people cannot be certain about (Gollwitzer et al., 2022; Gollwitzer & Oettingen, 2019; Oettingen et al., 2022). For instance, a person may be certain that their presidential candidate will win the election while recognizing that this is technically unknowable as they or most other people cannot know the future. In short, misplaced certainty refers to a subjective sense of certainty while recognizing uncertainty in oneself or most others (Gollwitzer et al., 2022).
Paradoxical knowing and discordant knowing
There are two forms of misplaced certainty. Focusing on the subjective experience of certainty in entities that one recognizes to be technically unknowable (e.g., the future, other people’s thoughts), Gollwitzer and Oettingen (2019) studied what they termed paradoxical knowing as one form of misplaced certainty. Paradoxical knowing refers to the internal discordance of holding certainty about something that, at the same time, one recognizes as uncertain. More recently, Gollwitzer et al. (2022) examined discordant knowing, certainty in things one recognizes to be doubted by most others, as a second form of misplaced certainty. Discordant knowing refers to the external discordance of holding certainty about something that at the same time is recognized as doubted or opposed by most or important others.
Both forms of misplaced certainty have been associated with an epistemic threat that stems from the feeling of internal or external (respectively) doubt or opposition and, in turn, with defensive reactions in the form of determined ignorance and antisocial tendencies. People who hold greater paradoxical knowing or discordant knowing tend to report that they ignore information opposing their viewpoint and even aggress toward skeptics (Gollwitzer et al., 2022; Gollwitzer & Oettingen, 2019).
Still, the evidence from this rising area of research (i.e., paradoxical knowing and discordant knowing) is largely based on self-reported ignorance (e.g., agreeing with the statement, “You don’t have to consider all information”; Gollwitzer & Oettingen, 2019), and little is known about the behavioral manifestations of ignorance as an outcome of misplaced certainty. Can a chronic sense of misplaced certainty motivate one’s disregard of corrective information and foster the perseverance of false beliefs? The present study aims to answer this question.
Misplaced certainty versus well-placed certainty and closed-minded cognition
Misplaced certainty includes recognizing an internal or external doubt or opposition (hence, “misplaced”). Such misplacement distinguishes it from well-placed certainty—namely, certainty about things that one recognizes as undoubted or confirmed internally or externally (“concordant knowing”; Gollwitzer & Oettingen, 2019). Unlike misplaced certainty, well-placed certainty did not lead to defensive reactions such as ignorance or hostility (Gollwitzer et al., 2022), and the relationship between chronic misplaced certainty and ignorance of opposing information held true even after controlling for well-placed certainty (Gollwitzer & Oettingen, 2019). Therefore, only misplaced forms of certainty may predict ignorance of corrective feedback and the persistence of false beliefs.
We also examined the unique role of misplaced certainty in updating false beliefs beyond other variables associated with closed-minded cognition. Recent work by Sinclair et al. (2020) showed that right-wing authoritarianism (RWA), an ideological construct of adherence to social norms and resistance to change (Duckitt, 2001), relates to less updating of misconceptions or false beliefs. Unlike RWA, which is a construct referring to a specific ideology, misplaced certainty is an epistemic structure that generalizes across different ideologies. Therefore, we predicted that the identified relationships between misplaced certainty and belief updating should hold independent of RWA.
The present study
A preregistered study examined the relationship between misplaced certainty and updating false beliefs upon learning corrective feedback in a feedback-learning task. Specifically, we examined whether misplaced certainty relates to a reluctance to update false beliefs more so than other epistemic structures, particularly well-placed certainty. Furthermore, we tested whether misplaced certainty predicts lower belief updating even after accounting for right-wing authoritarianism, an ideological tendency that has been previously shown to relate to lower belief updating (Sinclair et al., 2020).
Method
Measures and analysis plan were preregistered online (https://osf.io/wdgnb).
Participants
We recruited 250 U.S.-resident Amazon Mechanical Turk workers. To protect our data from bots, we restricted participation to workers whose approval rate was above 90% and who had completed 50+ tasks. The study was only available to those who did not participate in our pilot study. Participants were compensated $3.25 for approximately 30 minutes of participation. The minimum target sample size (N = 250) was determined via an a priori power analysis for our main analysis (linear regression; see Supplementary Materials for details). Fifteen participants were excluded for failing more than one attention check question (as preregistered), leaving a final sample of 235 participants (73 female, 160 male, one nonbinary, one unknown; mean age = 37.1 years, SD = 10.76).
Materials
After providing consent, participants completed the following tasks in the following order: feedback-learning task (baseline), individual difference assessments, feedback-learning task (test), and demographics. Below we explain each task in detail.
Feedback-learning task
Participants were tested on 60 trivia statements (40 misconceptions and 20 correct statements) randomly selected from Sinclair et al.’s (2020) feedback-learning task. We shortened the original task by half (the original task included 120 trivia statements) to ensure that all tasks could be completed within 30 minutes. Example statements for misconceptions and correct statements, respectively, are as follows: “Thomas Edison invented the lightbulb” (FALSE); “Spiders have eight legs” (TRUE). The statements included a mix of declarative statements (i.e., facts, as in the previous examples) and assertions with evidence or counterevidence (e.g., “Waking a sleepwalker is bad for their health” [FALSE]). Correct statements were used as fillers, and all analyses focused on responses to the misconceptions.
The task included two rounds: a first round of feedback-learning (baseline) and a second round of feedback-learning (test). In the first round, participants reviewed each statement independently and indicated whether they thought the statement was TRUE or FALSE. Following each response, they rated their confidence level (0 = not at all confident; 100 = very confident). The task was self-paced, but we delayed the appearance of each “submit” button for 2 seconds to prevent accidental or random submissions. After each trial, participants were presented with the same statement again with feedback (“This statement is FALSE/TRUE”). In the second (test) round (which took place after the individual difference assessments), participants completed the same task without the feedback and with a different randomized order of statements.
Belief updating scores in the correlational analyses were calculated by taking the proportion of correct updating (in the test phase) for the initially incorrect responses (e.g., .50 if a participant updated half of their incorrect responses, 1 if a participant updated all their incorrect responses). In the mixed model analyses, we employed a dichotomous updating score for each initially incorrect response (0: not updated, 1: updated).
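To make the scoring concrete, the computation might look as in the following R sketch. This is a minimal illustration, not our analysis script; the data frame (trials) and all column names are hypothetical.

```r
# Minimal sketch (hypothetical data frame and column names): compute each
# participant's belief-updating score as the proportion of initially
# incorrect responses to misconceptions that were corrected at test.
library(dplyr)

updating_scores <- trials %>%
  filter(is_misconception, baseline_correct == 0) %>%  # initially incorrect only
  mutate(updated = as.integer(test_correct == 1)) %>%  # 1 = updated, 0 = not updated
  group_by(participant_id) %>%
  summarise(belief_updating = mean(updated))           # e.g., .50 if half were revised
```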
Misplaced certainty
We measured two forms of misplaced certainty in line with the past literature: certainty in the unknowable (paradoxical knowing) and certainty about things that are doubted by others (discordant knowing). Paradoxical knowing (“I know things that one can’t actually know”; “I know things that can’t be known”; “I know things that are unknowable”; α = .920; Gollwitzer & Oettingen, 2019) and discordant knowing (“I know things that most other people would say can’t be known”; “I know things where most other people would say one can’t know them”; “I know things that most other people would say are unknowable”; α = .914; Bläser & Oettingen, 2021) were measured via three items each. As preregistered, we collapsed across these subscales due to a very high correlation (r = .821, p < .001; α = .944) and called the collapsed measure misplaced certainty.
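For illustration, this preregistered collapsing step could be implemented as in the sketch below, assuming hypothetical item columns pk_1–pk_3 (paradoxical knowing) and dk_1–dk_3 (discordant knowing) in a data frame df; psych::alpha computes Cronbach’s alpha.

```r
library(psych)

pk_items <- c("pk_1", "pk_2", "pk_3")  # paradoxical knowing items (hypothetical names)
dk_items <- c("dk_1", "dk_2", "dk_3")  # discordant knowing items (hypothetical names)

cor.test(rowMeans(df[pk_items]), rowMeans(df[dk_items]))       # subscale correlation
alpha(df[c(pk_items, dk_items)])                               # alpha of combined scale
df$misplaced_certainty <- rowMeans(df[c(pk_items, dk_items)])  # collapsed measure
```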
Epistemic controls
Other epistemic structures were measured as control variables. We measured well-placed certainty via a concordant knowing scale (e.g., “I know things that one can actually know,” α = .818). For the sake of design completion (to cover all four quadrants of the epistemic structures), we also measured well-placed and misplaced forms of uncertainty. We measured well-placed uncertainty via a beliefs-about-the-unknowable scale (e.g., “I believe things that one can’t actually know,” α = .881; the verb “believe” has been validated to imply less certainty than the verb “know”; Gollwitzer & Oettingen, 2019). We measured misplaced uncertainty via a paradoxical not knowing scale used by Gollwitzer and Oettingen (2019; e.g., “I don’t know things that one can actually know,” α = .886). This exploratory (not preregistered) variable did not relate to belief updating (see Supplementary Materials). All items were evaluated on 7-point scales (1 = not at all agree to 7 = strongly agree).
Other control variables
To examine the unique predictive power of misplaced certainty on belief updating, in addition to the epistemic control variables, we measured two other variables that were shown to relate to closed-minded cognition: right-wing authoritarianism (RWA) and conservatism. Fifteen items measured RWA, such as “The ‘old-fashioned ways’ and ‘old-fashioned values’ still show the best way to live” (1 = not at all agree to 7 = strongly agree; α = .826; Zakrisson, 2005). A self-placement scale included in the demographics section measured conservatism (1 = very liberal to 4 = neither liberal nor conservative to 7 = very conservative).
Social desirability
To account for possible social desirability concerns that may arise in self-report, we included a social desirability scale with 14 items rated as True or False (Reynolds, 1982). The average agreement with socially desirable items (e.g., “I’m always willing to admit it when I make a mistake,” α = .664) composed the social desirability score.
Attention checks
We added three attention checks, two in the baseline and one in the test phase of the feedback-learning task. Attention checks required participants to follow the instruction in the prompt (e.g., set your confidence rating at 80). As preregistered, participants who failed at least two of the three attention checks were excluded from the analyses.
Discouraging cheating
The nature of the feedback-learning task may incentivize cheating among those with high performance goals. We discouraged and controlled for cheating effects by (1) emphasizing in the consent form that the monetary compensation was fixed regardless of performance, (2) preventing changes to previously answered questions by not providing a “back” button, and (3) conducting robustness tests that excluded responses taking longer than 9 seconds (as in Sinclair et al., 2020); the results were unaffected (see Supplementary Materials).
Results
Belief updating and epistemic structures
Misplaced certainty strongly predicted less belief updating (r = −.39, p < .001; Table 1). In contrast, well-placed certainty related to more belief updating (r = .18, p < .01; see Note 1). Well-placed uncertainty also related to lower belief updating (r = −.22, p < .001), yet this relationship stemmed from its shared variance with misplaced certainty (partial correlation controlling for misplaced certainty: r = .008, p = .905). Replicating Sinclair et al. (2020), RWA negatively related to belief updating (r = −.30, p < .01). In contrast to this past research, conservatism also negatively related to belief updating (r = −.27, p < .01). Controlling for the number of false beliefs at baseline (the total number of misconceptions endorsed as TRUE) did not affect these findings (see Supplementary Materials).
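As an illustration, the zero-order and partial correlations above can be obtained as in the following sketch (all column names hypothetical; pcor.test is from the ppcor package):

```r
library(ppcor)

cor.test(d$updating, d$wp_uncertainty)              # zero-order: r = -.22
# Partial correlation controlling for misplaced certainty: r = .008
pcor.test(d$updating, d$wp_uncertainty, d$mp_certainty)
```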
Next, we conducted a multiple linear regression (as preregistered) to directly test the relationship between misplaced certainty and belief updating, controlling for right-wing authoritarianism, conservatism, and the other epistemic variables that correlated with belief updating (well-placed certainty and well-placed uncertainty; Table 2). All variance inflation factors were below 2, suggesting no issues of multicollinearity.
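A minimal sketch of this preregistered model in R (hypothetical column names; vif is from the car package):

```r
library(car)

preds <- c("mp_certainty", "wp_certainty", "wp_uncertainty",
           "rwa", "conservatism")
d[preds] <- scale(d[preds])  # z-transform all predictors

fit <- lm(updating ~ mp_certainty + wp_certainty + wp_uncertainty +
            rwa + conservatism, data = d)
summary(fit)                 # coefficient estimates as reported in Table 2
vif(fit)                     # variance inflation factors (all < 2 in our data)
```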
As expected, misplaced certainty explained significant variance (b = −.309, p < .001) in the negative direction, over and above other variables in predicting belief updating. Conversely, well-placed certainty explained significant variance in the positive direction (more updating; b = .133, p = .040; Fig. 1). Conservatism also predicted lower belief updating (b = −.135, p = .044). Other variables did not relate to belief updating, ps > .2.
Fig. 1 Misplaced certainty predicted a lower tendency to update false beliefs, whereas well-placed certainty predicted a higher tendency to update false beliefs. Shaded bands illustrate 95% confidence intervals. The regression lines indicate estimated values after accounting for the other variables in the model (see Table 2). All predictors in the model were z-transformed. For plots showing the bivariate relationships with exact values, see Supplementary Materials
Bayesian estimations
As the multiple regression did not account for the number of observations (false beliefs) per participant, we also conducted multilevel analyses. Bayesian estimation increases the interpretability of probability estimates and the flexibility of fitting complex models (Muth et al., 2018; see Note 2). We therefore conducted a two-level logistic hierarchical Bayesian parameter estimation with the Markov chain Monte Carlo (MCMC) sampling algorithm, using the rstanarm R package (Gabry & Goodrich, 2017), to fit the binary responses (0: no updating; 1: updating). This model allowed regression coefficients to vary by participant and item. Following Muth et al. (2018), we used weakly informative default priors for all parameters and facilitated convergence via 8,000 iterations per chain (two warm-ups and two post-warm-ups).
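A sketch of such a model in rstanarm is shown below. Column names are hypothetical, and for brevity we show only random intercepts for participants and items (a minimal version of the reported structure, which allowed coefficients to vary by participant and item):

```r
library(rstanarm)

bfit <- stan_glmer(
  updated ~ mp_certainty + wp_certainty + wp_uncertainty + rwa + conservatism +
    (1 | participant_id) + (1 | item_id),  # intercepts vary by participant and item
  data = long_d,                           # one row per initially incorrect response
  family = binomial(link = "logit"),
  iter = 8000                              # weakly informative default priors
)
summary(bfit)                              # inspect Rhat and effective sample sizes

# Share of posterior slope draws above zero, e.g., for well-placed certainty
mean(as.matrix(bfit, pars = "wp_certainty") > 0)  # ~.98 in our data
```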
Our model successfully converged (Rhats < 1.1, effective sample sizes > 2,000; Table S11 and Figs. S4–S5). The mean estimates for the posterior distributions are in Table 2 and Fig. 2. Controlling for other predictors, misplaced certainty was associated with lower belief updating (95% prediction interval [PI]: [−0.6, −0.2]), with all slope estimates on negative values. In contrast, well-placed certainty was associated with higher belief updating (95% PI: [0, 0.4]); an examination of posterior draws from the fitted model showed that 98% of the slope estimates were on positive values, suggesting a robust positive association. Conservatism related to lower belief updating (95% PI: [−0.3, 0]); 99% of the slope estimates were on negative values, suggesting a reliable negative association. Other predictors did not show a credible association with belief updating.
Sensitivity to corrective feedback
Does misplaced certainty relate to less updating overall or to updating false beliefs specifically? To explore this, we conducted the same Bayesian parameter estimation for the updating of initially correct responses, or disimprovement (model converged; Rhats < 1.1, effective sample sizes > 2,000; Table S12 and Fig. S6). In this model, misplaced certainty was associated with more disimprovement (95% PI: [0.4, 0.9]), with all slope estimates falling on positive values (Fig. S6). In contrast, well-placed certainty related to less disimprovement (95% PI: [−0.6, 0]), with 98% of the slope estimates on negative values. Conservatism also related to more disimprovement (95% PI: [0.1, 0.4]), with 99.8% of the slope estimates on positive values. The remaining estimates suggested no conclusive associations. Taken together, misplaced certainty (and conservatism) related to lower sensitivity, and well-placed certainty to higher sensitivity, to corrective feedback (a signal detection approach also confirmed these findings; see Supplementary Materials).
Confidence in updated false beliefs
In the feedback-learning task, participants also rated their confidence in their responses. Confidence ratings allowed us to examine more nuanced responses than the forced choice (see Salovich et al., 2021) and to explore the strength of updating in an alternative way—by examining the change in confidence upon corrective feedback (see Note 3).
In a linear mixed-effects model (following Sinclair et al., 2020), we entered misplaced certainty, test accuracy (correct/updated vs. incorrect/reproduced), and their interaction to predict change in confidence (test minus baseline; higher scores indicate increased confidence after feedback; see Note 4). A significant interaction between misplaced certainty and test accuracy emerged (b = −2.35, p < .001), indicating a lower change in confidence among those with higher misplaced certainty after correct updating (b = −4.24, p < .001; Fig. S10; see Note 5). Even after updating false beliefs, people with high misplaced certainty did not show increased confidence (see Note 6).
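A sketch of this model using lme4/lmerTest (hypothetical column names; random-effects structure as described in Note 4):

```r
library(lme4)
library(lmerTest)  # adds p-values for fixed effects

cfit <- lmer(
  conf_change ~ mp_certainty * test_accuracy +  # interaction of interest
    (1 + test_accuracy | participant_id) +      # random slope by subject
    (1 | statement_type),                       # random intercept by statement type
  data = long_d
)
summary(cfit)  # the interaction term tests whether confidence gains after
               # correct updating depend on misplaced certainty
```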
When we replaced misplaced certainty with well-placed certainty, a different pattern emerged. Well-placed certainty also interacted with test accuracy in predicting the change in confidence (b = 1.72, p = .018; see Note 7). However, it did not relate to a change in confidence after updating false beliefs: those with high and low well-placed certainty similarly reported increased confidence after updating false beliefs (b = .36, p = .679). Furthermore, those with high well-placed certainty reported marginally lower confidence than those with low well-placed certainty when they reproduced false beliefs (b = −1.37, p = .067). These patterns again differentiate misplaced from well-placed certainty in terms of updating false beliefs.
Discussion
Testing the tendency to correct false beliefs about urban myths upon corrective feedback, we found that those with high (vs. low) misplaced certainty—certainty about unknowable things in life—were less likely to update false beliefs and were less confident in the beliefs that they correctly updated. In contrast, those with high (vs. low) well-placed certainty—certainty about knowable things in life—were more likely to update false beliefs and were as confident as those with low well-placed certainty in the beliefs they correctly updated. These relationships held independent of other epistemic tendencies (low certainty about unknowable or knowable things) and variables associated with closed-minded cognition (right-wing authoritarianism and political conservatism). Indeed, right-wing authoritarianism, which had been found to predict less belief updating (Sinclair et al., 2020), was no longer a significant predictor after controlling for misplaced certainty.
The present study contributes to the literature in several ways. First, it examined a recent psychological construct, misplaced certainty, as a unique predictor of the perseverance of false beliefs. Second, it showed that misplaced certainty operates in contrast to well-placed certainty in the disregard of corrective feedback. Third, it examined the relationship between misplaced certainty and resistance to updating false beliefs via a behavioral (performance-dependent) rather than a self-report measure of information processing and updating.
We examined a general sense of certainty rather than certainty in specific beliefs. However, research on certainty in specific beliefs provides consistent findings. For instance, partisanship predicts certainty about a specific belief (e.g., that a false statement made by a political leader is actually correct) and resistance to updating that belief in response to corrective feedback (Li & Wagner, 2020). We expand this work by showing that a general sense of certainty—independent of life domain—may trigger such resistance.
Other theoretically relevant constructs, including overclaiming (Pennycook & Rand, 2019), overconfidence (Salovich et al., 2021), certainty about the future (Olcaysoy Okten et al., 2022), and reliance on a single (vs. multiple) hypothesis in evaluations (Dougherty et al., 2010), have also been shown to relate to poor patterns of information processing (e.g., confirmation bias, difficulty filtering out inaccurate information). However, this past research was silent on whether the person who claims certainty or confidence recognizes any uncertainty in themselves (e.g., “this is a topic one can/can’t be certain about”) or in other people (e.g., “most others are uncertain/certain”; see Note 8). We systematically examined the conditions and domains in which certainty is experienced as misplaced or well-placed, considering their distinct consequences for updating false beliefs.
Limitations and future directions
Several limitations can be considered. First, our hypotheses were based on research linking misplaced certainty to determined ignorance of opposing information (Gollwitzer & Oettingen, 2019). Epistemic threat has been found to mediate this relationship; certainty in a specific domain relates to feeling threatened by the opposition, leading to ignorance of contradictory information (Gollwitzer et al., 2022). Similarly, work on the continued influence effect (being continually influenced by misinformation after corrective feedback) suggests that this effect may stem from experiencing discomfort upon correction (Susmann & Wegener, 2022). Perhaps those who embrace misplaced certainty experience discomfort upon correction and thus hold tightly onto their initial views.
Future research should examine the specific mechanism underlying the relationship between misplaced certainty and belief updating. Does misplaced certainty lead to lower attention to corrective feedback or a reluctance to use the attended corrective feedback? Sensitivity analyses provided preliminary evidence consistent with the attention hypothesis: Misplaced certainty related to less correct and more incorrect updating. However, the motivational mechanism, and a combined mechanism of attention and motivation, are also possible; those with misplaced certainty may have been more suspicious of our feedback and therefore paid less attention to it.
Additionally, misplaced certainty relates to various indicators of mental rigidity, including dogmatism, self-righteousness, and moral vitalism, in addition to RWA (Gollwitzer & Oettingen, 2019), all of which could predict lower belief updating. However, misplaced certainty predicts fanaticism (determined ignorance, aggression, and joining extreme groups; Mead, 1977) over and above mental rigidity (Gollwitzer & Oettingen, 2019), with epistemic threat as the main mediator. Thus, misplaced certainty may uniquely predict lower belief updating via epistemic threat as well. Finally, ambiguity intolerance may contribute to the relationship between the quest for certainty and lower belief updating (see Lewandowsky et al., 2012), though past research observed no link between misplaced certainty and ambiguity intolerance (Gollwitzer & Oettingen, 2019).
A second limitation is that the relationship between RWA and belief updating documented in Sinclair et al.’s (2020) study became nonsignificant after accounting for misplaced certainty. Consistent with past work (Gollwitzer & Oettingen, 2019), RWA significantly related to misplaced certainty (r = .53, p < .001). However, we refrain from concluding that the misplaced certainty embraced by those with high RWA explains why they did not revise their false beliefs, as our study design was correlational and cross-sectional.
Third, unlike in Sinclair et al.’s (2020) study, the relationship between conservatism and lower belief updating was significant. This discrepancy may stem from differences in the ideology distributions across samples (our sample skewed more toward the conservative end than Sinclair et al.’s), but further research should verify this. Future studies should also replicate our findings with more comprehensive measures of political orientation.
Fourth, our main analyses of belief updating relied on a forced-choice paradigm, though supplemented with continuous confidence ratings. Forced-choice alone may not be sensitive to nuanced changes in beliefs due to updating (Salovich et al., 2021). However, misplaced certainty related to both preserving incorrect responses and confidence in incorrect responses. Future research may benefit from employing more sensitive belief updating measures (e.g., reaction time, mouse-tracking).
Finally, our paradigm was limited to everyday misconceptions about which one may not hold strong beliefs. It also mixed declarative statements (facts) with assertions accompanied by evidence or counterevidence. Future research should examine whether our findings generalize to, first, updating false beliefs on issues deemed important (e.g., political views) and, second, updating false beliefs about facts, assertions, and other kinds of false claims.
Conclusion
We show that a mentality of misplaced certainty relates to less updating of misconceptions or false beliefs. Well-placed certainty, however, relates to more updating. By revealing the nuances among different epistemic structures, we shed light on a subjective experience—embracing misplaced rather than well-placed certainty—that may play a critical role in the perseverance of false beliefs.
Data availability
All verbatim material files, data files, and preregistration can be found online (osf.io/htyjn).
Notes
1. We conducted exploratory correlational analyses to test whether misplaced certainty related to false beliefs in the baseline or test phases (independently) of the feedback-learning task. Misplaced certainty showed a marginal positive correlation with baseline false beliefs (r = .12, p = .059) and a significant positive correlation with false beliefs in the test (r = .43, p < .001). All analyses comparing the magnitude of these correlations (calculated via the “cocor” package; see Diedenhofen & Musch, 2015, and the sketch after these notes) consistently showed that they differ significantly in magnitude—for example, Williams’s t(232) = −5.71, p < .001 (see Supplementary Materials). That is, misplaced certainty related more to the tendency to preserve false beliefs (after the feedback) than to holding false beliefs in the first place. In contrast, well-placed certainty did not relate to baseline false beliefs (r = −.02, p = .787) and related to fewer false beliefs in the test (r = −.15, p = .018).
2. For a mixed-model analysis with a frequentist approach, see Supplementary Materials.
3. For correlations with confidence ratings, see Supplementary Materials.
4. We also included random intercepts for subjects and statement type and a random slope for test accuracy by subject. The model did not converge when we added the random slope of test accuracy by statement type.
5. The fixed effect of misplaced certainty was significant, suggesting a higher increase in confidence following corrective feedback among those with low (vs. high) misplaced certainty (b = −1.89, p < .001; see Fig. S10). A fixed effect of test accuracy also emerged (b = 3.43, p < .001), showing a higher increase in confidence scores for correctly updated (vs. reproduced) statements.
6. Replicating Sinclair et al. (2020), similar patterns were observed for high RWA (see Supplementary Materials). For exploratory analyses with prediction error, also see Supplementary Materials.
7. There was again a main effect of test accuracy (b = 3.49, p < .001). The main effect of well-placed certainty was not significant (b = −.50, p = .488).
8. Certainty in general knowledge of trivia statements as used here has been found to relate to a greater tendency to update false beliefs overall (Metcalfe, 2017). This tendency (also known as prediction error in computational reinforcement learning) is elicited by the greater surprise experienced upon learning that a belief held with high certainty is in fact incorrect. Yet again, it is unclear whether this item-specific sense of certainty corresponds to a misplaced or well-placed certainty. Perhaps prediction-error effects are more prominent when such certainty is well-placed (e.g., with past experience of confirmation by others or data) rather than misplaced. Future research should examine this possibility.
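For illustration, the dependent (overlapping) correlation comparison in Note 1 can be run as in this sketch (hypothetical column names):

```r
library(cocor)

# Compare r(misplaced certainty, baseline false beliefs) with
# r(misplaced certainty, test false beliefs). The two correlations overlap
# in the shared misplaced-certainty variable, so cocor reports Williams's t
# among other tests for dependent overlapping correlations.
cocor(~ mp_certainty + baseline_false | mp_certainty + test_false, data = d)
```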
References
Bläser, A., & Oettingen, G. (2021). Discordant Knowing Scale [Unpublished raw data].
Brydges, C. R., Gignac, G. E., & Ecker, U. K. (2018). Working memory capacity, short-term memory capacity, and the continued influence effect: A latent-variable analysis. Intelligence, 69, 117–122.
Diedenhofen, B., & Musch, J. (2015). cocor: A comprehensive solution for the statistical comparison of correlations. PLOS ONE, 10(4), Article e0121945.
Dougherty, M., Thomas, R., & Lange, N. (2010). Toward an integrative theory of hypothesis generation, probability judgment, and hypothesis testing. In Psychology of learning and motivation (Vol. 52, pp. 299–342). Academic Press.
Duckitt, J. (2001). A dual-process cognitive-motivational theory of ideology and prejudice. In M. P. Zanna (Ed.), Advances in experimental social psychology (Vol. 33, pp. 41–113). Academic Press.
Ecker, U. K., & Ang, L. C. (2018). Political attitudes and the processing of misinformation corrections. Political Psychology, 40(2), 241–260.
Gabry, J., & Goodrich, B. (2017). rstanarm: Bayesian applied regression modeling via Stan. (R Package Version 2.15.3) [Computer software]. http://mc-stan.org/rstanarm/
Gollwitzer, A., & Oettingen, G. (2019). Paradoxical knowing: A shortcut to knowledge and its antisocial correlates. Social Psychology, 50(3), 145–161.
Gollwitzer, A., Olcaysoy Okten, I., Pizarro, A. O., & Oettingen, G. (2022). Discordant knowing: A social cognitive structure underlying fanaticism. Journal of Experimental Psychology: General. Advance online publication. https://doi.org/10.1037/xge0001219
Lewandowsky, S., Ecker, U. K., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106–131.
Li, J., & Wagner, M. W. (2020). The value of not knowing: Partisan cue-taking and belief updating of the uninformed, the ambiguous, and the misinformed. Journal of Communication, 70(5), 646–669.
Mead, M. (1977). Fanatical thinking: The panhuman disorder. ETC: A Review of General Semantics, 34, 35–38.
Metcalfe, J. (2017). Learning from errors. Annual Review of Psychology, 68, 465–489.
Mitzen, J., & Schweller, R. L. (2009). Misplaced certainty and the onset of war. APSA Toronto Meeting Paper. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1449304
Mitzen, J., & Schweller, R. L. (2011). Knowing the unknown unknowns: Misplaced certainty and the onset of war. Security Studies, 20(1), 2–35.
Muth, C., Oravecz, Z., & Gabry, J. (2018). User-friendly Bayesian regression modeling: A tutorial with rstanarm and shinystan. The Quantitative Methods for Psychology, 14(2), 99–119.
Oettingen, G., Gollwitzer, A., Jung, J., & Olcaysoy Okten, I. (2022). Misplaced certainty in the context of conspiracy theories. Current Opinion in Psychology, 46, 101393.
Olcaysoy Okten, I., Gollwitzer, A., & Oettingen, G. (2022). When knowledge is blinding: The dangers of being certain about the future during uncertain societal events. Personality and Individual Differences. Advance online publication. https://doi.org/10.1016/j.paid.2022.111606
Pennycook, G., & Rand, D. G. (2019). Fighting misinformation on social media using crowdsourced judgments of news source quality. Proceedings of the National Academy of Sciences, 116(7), 2521–2526.
Reynolds, W. M. (1982). Development of reliable and valid short forms of the Marlowe‐Crowne Social Desirability Scale. Journal of Clinical Psychology, 38(1), 119–125.
Salovich, N. A., Donovan, A. M., Hinze, S. R., & Rapp, D. N. (2021). Can confidence help account for and redress the effects of reading inaccurate information? Memory & Cognition, 49, 293–310.
Sinclair, A. H., Stanley, M. L., & Seli, P. (2020). Closed-minded cognition: Right-wing authoritarianism is negatively related to belief updating following prediction error. Psychonomic Bulletin & Review, 27(6), 1348–1361.
Susmann, M. W., & Wegener, D. T. (2022). The role of discomfort in the continued influence effect of misinformation. Memory & Cognition, 50(2), 435–448.
Winkielman, P., Huber, D. E., Kavanagh, L., & Schwarz, N. (2012). Fluency of consistency: When thoughts fit nicely and flow smoothly. In B. Gawronski & F. Strack (Eds.), Cognitive consistency: A fundamental principle in social cognition (pp. 89–111). Guilford Press.
Zakrisson, I. (2005). Construction of a short version of the Right-Wing Authoritarianism (RWA) scale. Personality and Individual Differences, 39(5), 863–872.
Ethics declarations
Conflicts of interest
The authors declare no conflicts of interest. The study was approved under IRB number IRB-FY2022-5807.
Open practices statement
The data, analysis codes, and materials are available online (https://osf.io/htyjn). The study was preregistered (https://osf.io/wdgnb).