Trust and the value of overconfidence: a Bayesian perspective on social network communication

Abstract

The paper presents and defends a Bayesian theory of trust in social networks. In the first part of the paper, we provide justifications for the basic assumptions behind the model, and we give reasons for thinking that the model has plausible consequences for certain kinds of communication. In the second part of the paper, we investigate the phenomenon of overconfidence. Many psychological studies have found that people think they are more reliable than they actually are. Using a simulation environment developed to make our model computationally tractable, we show that inquirers in our model are indeed sometimes better off, from an epistemic perspective, overestimating the reliability of their own inquiries. We also show, by contrast, that inquirers are rarely better off overestimating the reliability of others. On the basis of these observations, we formulate a novel hypothesis about the value of overconfidence.

Notes

  1. For a discussion of this assumption and how it can be relaxed, see Olsson (2011).

  2. Despite this similarity, Bayesian networks should be carefully distinguished from networks in our sense.

  3. Incidentally, that move also solves a problem of repetition. Suppose one inquirer \(S\) in the network repeatedly reports the same message, say, \(p\). This will make that inquirer’s peers repeatedly update on the information “\(S\) said that \(p\)”. If the messages exchanged between inquirers are simply thought of as claims to the effect that \(p\) is true or false, this is not very plausible. If, however, we instead interpret a message that \(p\) (\(\lnot p\)) as a message to the effect that there is a novel or independent reason for \(p\) (\(\lnot p\)), this reaction to repetition is as it should be. (A simplified numerical sketch of this kind of update is given after these notes.)

  4. As pointed out by an anonymous referee, source independence is not a necessary condition for confirmation. Consider a case in which several inquirers believe that \(p\) (e.g., “global warming is real”) on account of deferring to one and the same expert. The testimonial judgments to the effect that \(p\) that these deferring inquirers may make are not independent of one another in the conditional sense. Still, it seems that the fact that a large number of inquirers (dependently) report that \(p\) should increase one’s credence in the proposition that \(p\). This kind of scenario is studied at length in Olsson (2002a, b) and in Olsson (2005, Sect. 3.2.3), where it is characterized as involving “dependent reliability”. The question whether such cases can be modeled in Laputa is a complex one that depends on various other issues, such as how we choose to interpret communication in the system. We would prefer to save that discussion for a later occasion, as it does not bear directly on the points we wish to make in the present article. (The conditional independence condition at issue is stated formally after these notes.)

  5. See Zollman (2007) for an alternative Bayesian model of communication in social networks which does not, however, allow trust to be represented and updated.

  6. We could of course imagine an extended model in which communication links are dynamically created in the process of collective inquiry. In such a model, inquirers could be biased to establish links to other inquirers whom they think will confirm their current view, in which case the issue of confirmation bias could indeed be legitimately raised.

  7. See Harvey (1997) for a review of the psychological literature on overconfidence.

  8. The program Laputa can be downloaded from http://sourceforge.net/projects/epistemenet/.

  9. The following parameter values were used in Laputa. Starting belief, inquiry chance, and communication chance were all set to a flat distribution over the unit interval. Population was set to 20, certainty threshold to 0.99, steps to 100, and link change to 0. (These settings are restated as a configuration sketch after these notes.)

  10. For more on the veritistic effect of varying the threshold of assertion, see Olsson and Vallinder (2013).

  11. The same parameter values were used as for the preceding experiment, except that inquiry chance was set to 0.6 and link chance to 0.25.
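
To make the reading of repeated messages in note 3 concrete, here is a minimal sketch, in Python, of how a listener might update her credence in \(p\) when each report is treated as a conditionally independent reason coming from a source whose expected reliability is \(\tau\). The function name and the scalar trust parameter are illustrative simplifications; in the model proper, trust is represented by a distribution over the source’s reliability and is itself revised, so this is not the exact Laputa update rule.

```python
def update_credence(credence, trust, reports_p=True):
    """One Bayesian update on the report "S said that p".

    `credence` is the listener's prior probability for p; `trust` is the
    listener's expected reliability of the source S, i.e. the assumed
    probability that S's reports track the truth. Illustrative sketch only:
    the model in the paper also updates the trust function itself.
    """
    p_report_if_true = trust if reports_p else 1 - trust    # P(report | p)
    p_report_if_false = 1 - trust if reports_p else trust   # P(report | not-p)
    numerator = credence * p_report_if_true
    return numerator / (numerator + (1 - credence) * p_report_if_false)


# Each repeated report, read as a fresh independent reason for p,
# pushes the credence further toward 1 (here: 0.7, 0.845, 0.927):
credence = 0.5
for _ in range(3):
    credence = update_credence(credence, trust=0.7)
    print(round(credence, 3))
```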
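For reference, the source independence discussed in note 4 is standardly cashed out as conditional independence of the reports given the truth and given the falsity of \(p\): for reports \(R_1\) and \(R_2\) that \(p\), \(P(R_1 \wedge R_2 \mid p) = P(R_1 \mid p)\,P(R_2 \mid p)\) and \(P(R_1 \wedge R_2 \mid \lnot p) = P(R_1 \mid \lnot p)\,P(R_2 \mid \lnot p)\). In the deference case of note 4 both equalities fail, since the reports remain correlated through the shared expert even once the truth value of \(p\) is fixed; yet agreement among the reporters can still raise the probability of \(p\), roughly because it raises the probability that the common expert is reliable (the “dependent reliability” structure the note refers to).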
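The simulation settings reported in notes 9 and 11 can be restated compactly as follows. Laputa itself is a standalone program configured through its own interface, so this Python snippet is only a hypothetical, illustrative summary of the parameter values, not Laputa’s actual configuration format.

```python
# Hypothetical restatement of the settings in note 9 (first experiment);
# illustrative only, not an actual Laputa API or file format.
laputa_settings = {
    "starting_belief": "flat over [0, 1]",       # uniform distribution on the unit interval
    "inquiry_chance": "flat over [0, 1]",
    "communication_chance": "flat over [0, 1]",
    "population": 20,
    "certainty_threshold": 0.99,
    "steps": 100,
    "link_change": 0,
}
# The second experiment (note 11) reuses these values, except that inquiry
# chance is set to 0.6 and link chance to 0.25.
```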

References

  • Angere, S. (2008). Coherence as a heuristic. Mind, 117, 1–26.

  • Angere, S. (to appear). Knowledge in a social network. Synthese.

  • Christensen, D. (2007). Epistemology of disagreement: The good news. Philosophical Review, 116(2), 187–217.

  • Edman, M. (1973). Adding independent pieces of evidence. In B. Hansson (Ed.), Modality, morality and other problems of sense and nonsense (pp. 180–188). Lund: Gleerup.

  • Ekelöf, P.-O. (1983). My thoughts on evidentiary value. In P. Gärdenfors, B. Hansson, & N.-E. Sahlin (Eds.), Evidentiary value: Philosophical, judicial and psychological aspects of a theory (pp. 9–26). Lund: Library of Theoria.

  • Elga, A. (2007). Reflection and disagreement. Noûs, 41(3), 478–502.

  • Faulkner, P. (2010). Norms of trust. In A. Haddock, A. Millar, & D. Pritchard (Eds.), Social epistemology (pp. 129–147). Oxford: Oxford University Press.

  • Goldman, A. I. (1999). Knowledge in a social world. Oxford: Clarendon Press.

  • Halldén, S. (1973). Indiciemekanismer. Tidskrift for Rettsvitenskap, 86, 55–64.

  • Hansson, B. (1983). Epistemology and evidence. In P. Gärdenfors, B. Hansson, & N.-E. Sahlin (Eds.), Evidentiary value: Philosophical, judicial and psychological aspects of a theory (pp. 75–97). Lund: Library of Theoria.

  • Harvey, N. (1997). Confidence in judgment. Trends in Cognitive Sciences, 1(2), 78–82.

  • Johnson, D., & Fowler, J. (2011). The evolution of overconfidence. Nature, 477, 317–320.

  • Kelly, T. (2005). The epistemic significance of disagreement. Oxford Studies in Epistemology, 1, 167–196.

  • Lewis, D. (1980). A subjectivist’s guide to objective chance. In R. C. Jeffrey (Ed.), Studies in inductive logic and probability (Vol. 2). Berkeley, CA: University of California Press.

  • McKay, R., & Dennett, D. (2009). The evolution of misbelief. Behavioral and Brain Sciences, 32, 493–561.

  • Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220.

  • Olsson, E. J. (2002a). Corroborating testimony, probability and surprise. British Journal for the Philosophy of Science, 53, 273–288.

  • Olsson, E. J. (2002b). Corroborating testimony and ignorance: A reply to Bovens, Fitelson, Hartmann and Snyder. British Journal for the Philosophy of Science, 53, 565–572.

  • Olsson, E. J. (2005). Against coherence: Truth, probability and justification. Oxford: Oxford University Press.

  • Olsson, E. J. (2011). A simulation approach to veritistic social epistemology. Episteme, 8(2), 127–143.

  • Olsson, E. J. (2013). A Bayesian simulation model of group deliberation and polarization. In F. Zenker (Ed.), Bayesian argumentation, Synthese Library (pp. 113–134). New York: Springer.

  • Olsson, E. J., & Vallinder, A. (2013). Norms of assertion and communication in social networks. Synthese, 190, 1437–1454.

  • Pearl, J. (1988). Probabilistic reasoning in intelligent systems. Palo Alto, CA: Morgan Kaufmann.

  • Schubert, S. (2010). Coherence and reliability: The case of overlapping testimonies. Erkenntnis, 74, 263–275.

  • Spohn, W. (1980). Stochastic independence, causal independence, and shieldability. Journal of Philosophical Logic, 9, 73–99.

  • Zollman, K. J. (2007). The communication structure of epistemic communities. Philosophy of Science, 74(5), 574–587.

Acknowledgments

We would like to thank two anonymous referees for their input, which led to many significant improvements and clarifications.

Author information

Correspondence to Erik J. Olsson.

About this article

Cite this article

Vallinder, A., Olsson, E.J. Trust and the value of overconfidence: a Bayesian perspective on social network communication. Synthese 191, 1991–2007 (2014). https://doi.org/10.1007/s11229-013-0375-0
