Abstract
The paper presents and defends a Bayesian theory of trust in social networks. In the first part of the paper, we justify the basic assumptions behind the model and give reasons for thinking that the model has plausible consequences for certain kinds of communication. In the second part, we investigate the phenomenon of overconfidence. Many psychological studies have found that people think they are more reliable than they actually are. Using a simulation environment developed to make our model computationally tractable, we show that inquirers in our model are indeed sometimes epistemically better off overestimating the reliability of their own inquiries. By contrast, we also show that inquirers are rarely better off overestimating the reliability of others. On the basis of these observations, we formulate a novel hypothesis about the value of overconfidence.
Notes
For a discussion of this assumption and how it can be relaxed, see Olsson (2011).
Despite this similarity, Bayesian networks should be carefully distinguished from networks in our sense.
Incidentally, that move also solves a problem of repetition. Suppose one inquirer \(S\) in the network repeatedly reports the same message, say, \(p\). This will make that inquirer’s peers repeatedly update on the information “\(S\) said that \(p\)”. If the messages exchanged between inquirers are simply taken to be claims to the effect that \(p\) is true or false, this is not very plausible. If, however, we instead interpret a message that \(p\) (\(\lnot p\)) as a message to the effect that there is a novel or independent reason for \(p\) (\(\lnot p\)), this reaction to repetition is just as it should be.
As pointed out by an anonymous referee, source independence is not a necessary condition for confirmation. Consider a case in which several inquirers believe that \(p\) (e.g., “global warming is real”) on account of deferring to one and the same expert. The testimonial judgments to the effect that \(p\) that these deferring inquirers may make are not independent of one another in the conditional sense. Still, it seems that the fact that a large number of inquirers (dependently) report that \(p\) should increase one’s credence in the proposition that \(p\). This kind of scenario is studied at length in Olsson (2002a, b) and in Olsson (2005, Sect. 3.2.3), where it is characterized as involving “dependent reliability”. The question of whether such cases can be modeled in Laputa is a complex one that depends on various other issues, such as how we choose to interpret communication in the system. We would prefer to save that discussion for a later occasion, as it does not bear directly on the points we wish to make in the present article.
See Zollman (2007) for an alternative Bayesian model of communication in social networks which does not, however, allow trust to be represented and updated.
We could of course imagine an extended model in which communication links are dynamically created in the process of collective inquiry. In such a model, inquirers could be biased to establish links to other inquirers whom they think will confirm their current view, in which case the issue of confirmation bias could indeed be legitimately raised.
See Harvey (1997) for a review of the psychological literature on overconfidence.
The program Laputa can be downloaded from http://sourceforge.net/projects/epistemenet/.
The following parameter values were used in Laputa. Starting belief, inquiry chance and communication chance were all set to a flat distribution over the unit interval. Population was set to 20, certainty threshold to 0.99, steps to 100 and link change to 0.
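The parameter configuration described in this note can be sketched in code. The following is a toy illustration only: Laputa’s actual dynamics, in particular its trust updating and network links, are not reproduced here, and the reliability values (0.7 for inquiry, 0.6 for perceived trust in peers) as well as the simplified Bayesian update rule are our own assumptions for the sketch, not part of the model described in the paper.

```python
import random

# Parameter values from the note.
POPULATION = 20
STEPS = 100
CERTAINTY_THRESHOLD = 0.99

random.seed(0)  # fixed seed for reproducibility

def make_inquirer():
    # Starting belief, inquiry chance and communication chance are each
    # drawn from a flat (uniform) distribution over the unit interval.
    return {
        "belief": random.random(),             # credence that p is true
        "inquiry_chance": random.random(),
        "communication_chance": random.random(),
    }

inquirers = [make_inquirer() for _ in range(POPULATION)]

def bayes_update(prior, says_p, r):
    # Conditionalize on a report treated as evidence of reliability r.
    if says_p:
        return r * prior / (r * prior + (1 - r) * (1 - prior))
    return (1 - r) * prior / ((1 - r) * prior + r * (1 - prior))

def step(inquirers, p_true=True, inquiry_rel=0.7, perceived_rel=0.6):
    # One round: each inquirer may inquire (receive evidence about p)
    # and may communicate its current view to a randomly chosen peer.
    for s in inquirers:
        if random.random() < s["inquiry_chance"]:
            # Inquiry points to the truth with probability inquiry_rel
            # (an illustrative value, not from the paper).
            says_p = (random.random() < inquiry_rel) == p_true
            s["belief"] = bayes_update(s["belief"], says_p, inquiry_rel)
        if random.random() < s["communication_chance"]:
            receiver = random.choice([t for t in inquirers if t is not s])
            # The receiver treats the message like evidence with a fixed
            # perceived reliability; in the paper's model, by contrast,
            # trust is itself updated over time.
            receiver["belief"] = bayes_update(
                receiver["belief"], s["belief"] > 0.5, perceived_rel)

for _ in range(STEPS):
    step(inquirers)

# An inquirer counts as having settled the question once its credence
# passes the certainty threshold in either direction.
certain = sum(1 for s in inquirers
              if s["belief"] > CERTAINTY_THRESHOLD
              or s["belief"] < 1 - CERTAINTY_THRESHOLD)
print(f"{certain} of {POPULATION} inquirers reached the certainty threshold")
```

The point of the sketch is only to make the roles of the batch parameters concrete: the three per-inquirer probabilities are independent uniform draws, and the certainty threshold is applied symmetrically to high and low credences.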
For more on the veritistic effect of varying the threshold of assertion, see Olsson and Vallinder (2013).
The same parameter values were used as for the preceding experiment, except that inquiry chance was set to 0.6 and link chance to 0.25.
References
Angere, S. (2008). Coherence as a heuristic. Mind, 117, 1–26.
Angere, S. (to appear). Knowledge in a social network. Synthese.
Christensen, D. (2007). Epistemology of disagreement: The good news. Philosophical Review, 116(2), 187–217.
Edman, M. (1973). Adding independent pieces of evidence. In B. Hansson (Ed.), Modality, morality and other problems of sense and nonsense (pp. 180–188). Lund: Gleerup.
Ekelöf, P.-O. (1983). My thoughts on evidentiary value. In P. Gärdenfors, B. Hansson, & N.-E. Sahlin (Eds.), Evidentiary value: Philosophical, judicial and psychological aspects of a theory (pp. 9–26). Lund: Library of Theoria.
Elga, A. (2007). Reflection and disagreement. Noûs, 41(3), 478–502.
Faulkner, P. (2010). Norms of trust. In A. Haddock, A. Millar, & D. Pritchard (Eds.), Social epistemology (pp. 129–147). Oxford: Oxford University Press.
Goldman, A. I. (1999). Knowledge in a social world. Oxford: Clarendon Press.
Halldén, S. (1973). Indiciemekanismer. Tidsskrift for Rettsvitenskap, 86, 55–64.
Hansson, B. (1983). Epistemology and evidence. In P. Gärdenfors, B. Hansson, & N.-E. Sahlin (Eds.), Evidentiary value: Philosophical, judicial and psychological aspects of a theory (pp. 75–97). Lund: Library of Theoria.
Harvey, N. (1997). Confidence in judgment. Trends in Cognitive Science, 1(2), 78–82.
Johnson, D., & Fowler, J. (2011). The evolution of overconfidence. Nature, 477, 317–320.
Kelly, T. (2005). The epistemic significance of disagreement. Oxford Studies in Epistemology, 1, 167–196.
Lewis, D. (1980). A subjectivist’s guide to objective chance. In R. C. Jeffrey (Ed.), Studies in inductive logic and probability (Vol. 2). Berkeley, CA: University of California Press.
McKay, R., & Dennett, D. (2009). The evolution of misbelief. Behavioral and Brain Sciences, 32, 493–561.
Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220.
Olsson, E. J. (2002a). Corroborating testimony, probability and surprise. British Journal for the Philosophy of Science, 53, 273–288.
Olsson, E. J. (2002b). Corroborating testimony and ignorance: A reply to Bovens, Fitelson, Hartmann and Snyder. British Journal for the Philosophy of Science, 53, 565–572.
Olsson, E. J. (2005). Against coherence: Truth, probability and justification. Oxford: Oxford University Press.
Olsson, E. J. (2011). A simulation approach to veritistic social epistemology. Episteme, 8(2), 127–143.
Olsson, E. J. (2013). A Bayesian simulation model of group deliberation and polarization. In F. Zenker (Ed.), Bayesian argumentation, Synthese library (pp. 113–134). New York: Springer.
Olsson, E. J., & Vallinder, A. (2013). Norms of assertion and communication in social networks. Synthese, 190, 1437–1454.
Pearl, J. (1988). Probabilistic reasoning in intelligent systems. Palo Alto, CA: Morgan-Kaufmann.
Schubert, S. (2010). Coherence and reliability: The case of overlapping testimonies. Erkenntnis, 74, 263–275.
Spohn, W. (1980). Stochastic independence, causal independence, and shieldability. Journal of Philosophical Logic, 9, 73–99.
Zollman, K. J. (2007). The communication structure of epistemic communities. Philosophy of Science, 74(5), 574–587.
Acknowledgments
We would like to thank two anonymous referees for their input which led to many significant improvements and clarifications.
Cite this article
Vallinder, A., Olsson, E.J. Trust and the value of overconfidence: a Bayesian perspective on social network communication. Synthese 191, 1991–2007 (2014). https://doi.org/10.1007/s11229-013-0375-0