Endogenous epistemic factionalization


Why do people who disagree about one subject tend to disagree about other subjects as well? In this paper, we introduce a model to explore this phenomenon of ‘epistemic factionalization’. Agents attempt to discover the truth about multiple propositions by testing the world and sharing the evidence they gather. But agents tend to mistrust evidence shared by those who do not hold similar beliefs. This mistrust leads to the endogenous emergence of factions of agents with multiple, highly correlated, polarized beliefs.


Figs. 1–10


  1.

    For evidence that these beliefs are, in fact, correlated, see Kahan (2014).

  2.

    See Benegal (2018) for recent empirical work on this phenomenon. Our inference that such factions exist more generally is based on voting behavior in the United States and correlations between beliefs about matters of scientific consensus and political party identification, as, for instance, in Newport and Dugan (2015).

  3.

    Of course, one can always define an ideology by a disjunctive procedure, basically by stipulating that the ideology consists in believing precisely those things that members of an epistemic faction believe. But to do so would be to eliminate all explanatory power of positing an underlying ideology, and so we set this possibility aside. One could also argue that factions are constructed by explicit political alliance building, and so one should not expect there to be any underlying ideological explanation, aside from broad compatibility of policy goals. We are sympathetic with this suggestion, but set it aside, because we do not think it completely accounts for the epistemic character of the phenomenon we are discussing.

  4.

    As we describe below, these models are based on the ‘network epistemology’ framework developed by Bala and Goyal (1998) and introduced to philosophy of science by Zollman (2007).

  5.

    In contrast, ‘belief polarization’ often refers to the more limited phenomenon where individuals update their credences in different directions in light of the same evidence (Dixit and Weibull 2007; Jern et al. 2014; Benoît and Dubra 2014). Psychologists sometimes refer to ‘group polarization’ or ‘attitude polarization’, which is the phenomenon where a group of individuals will develop more extreme beliefs as a result of discussion of a topic. Both of these phenomena likely relate to the larger phenomenon we address here, but they are not the focus of the current study. Bramson et al. (2017) give a nice discussion of the various ways that groups can polarize in our sense.

  6.

    We emphasize that we do not mean to argue that epistemic factions never arise due to a common cause. Rather, this is an example of ‘how possibly’ modeling intended to explore what some minimal conditions for a phenomenon might be.

  7.

    Adorno et al. (1950) give a very influential treatment of authoritarian personalities. See Jost et al. (2003) for a good overview of this literature.

  8.

    Bramson et al. (2017), in a broader theoretical discussion of polarization, call this ‘belief convergence’.

  9.

    In a related point, DellaPosta et al. (2015) argue that we should not try to explain things like liberal preferences for lattes via appeal to some deep ideological pattern. As they point out, correlations between latte drinking and liberal politics can emerge as a social phenomenon.

  10.

    For instance, a recent study found that individuals’ trust in articles shared on Facebook depends far more on their trust in the person who shared them than on the original source (Project 2017).

  11.

    In recent work, Marks et al. (2019) found that participants were less likely to trust individuals with successful track records on an academic task if they believed those individuals did not share their political beliefs.

  12.

    See, for example, Festinger et al. (1950) who provide early evidence for this claim.

  13.

    For more complete literature reviews, see Bramson et al. (2017) or O’Connor and Weatherall (2018).

  14.

    Deffuant et al. (2002) and Deffuant (2006) also use this modeling paradigm to explore polarization.

  15.

    These models differ in that Olsson (2013) focuses on individuals who share statements of belief, while O’Connor and Weatherall (2018), in an attempt to more closely model scientific communities, consider a model where agents share evidence. In addition, Singer et al. (2018) consider a polarization model where agents share ‘reasons’ for a belief, in the form of positive and negative weights, which might be interpreted as evidence from the world.

  16.

    For more on the use of this framework in philosophy of science see Zollman (2010); Mayo-Wilson et al. (2011); Kummerfeld and Zollman (2015); Holman and Bruner (2015); Rosenstock et al. (2017); Weatherall et al. (2018); Weatherall and O’Connor (2017); O’Connor and Weatherall (2019); Borg et al. (2017); Frey and Šešelja (2017a, b); Holman and Bruner (2017). Zollman (2013) provides a review of the literature up to 2013.

  17.

    Note that false consensus is always stable in this model, because no agents test action B. True consensus, on the other hand, is not strictly stable, as stochastic effects, in the form of sufficiently long strings of spurious results, can always push agents from true consensus to false consensus. However, the probability of this occurring goes to zero as the beliefs of the agents approach 1. We remark that disagreement occurs on the way to consensus, but this does not capture the phenomenon of polarization, in which disagreement is, at least approximately, stable.
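The testing-and-updating round underlying this note can be sketched as follows. This is a minimal illustration of the two-action ('bandit') dynamics, assuming a complete network, success rates of 0.5 for action A and 0.6 for action B, and ten trials per round; all function names and parameter values here are our own illustrative assumptions, not the paper's.

```python
import random

def bayes_update(credence, successes, n, p_b=0.6, p_a=0.5):
    """Posterior credence that action B is the better arm, after
    observing `successes` in n pulls of B. The binomial coefficient
    cancels in the likelihood ratio, so only the kernels appear."""
    like_b = (p_b ** successes) * ((1 - p_b) ** (n - successes))
    like_a = (p_a ** successes) * ((1 - p_a) ** (n - successes))
    num = credence * like_b
    return num / (num + (1 - credence) * like_a)

def step(credences, n=10, true_p=0.6):
    """One round: agents with credence > 1/2 test B; every agent then
    updates on all evidence gathered (a complete network). If no agent
    tests B, no evidence is produced and beliefs are frozen, which is
    why false consensus is stable."""
    results = [sum(random.random() < true_p for _ in range(n))
               for c in credences if c > 0.5]
    new = []
    for c in credences:
        for successes in results:
            c = bayes_update(c, successes, n)
        new.append(c)
    return new
```

Note how the asymmetry in the note falls out of the code: when every credence is below 1/2, `step` returns its input unchanged, whereas a consensus near 1 can still, with small probability, be eroded by strings of spurious results.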

  18.

    O’Connor and Weatherall (2018) consider several functions: a linear function, as below, as well as logistic and exponential functions. They find that the results are stable across these modeling choices.
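The linear trust function mentioned in this note might be sketched as follows. The exact form max(0, 1 − m·d), and both function names, are assumptions on our part; the note says only that the discount is linear in the belief distance d, with m controlling how quickly trust falls off.

```python
def trust_weight(d, m):
    """Weight placed on another agent's evidence: full weight at
    belief distance 0, declining linearly with slope m, and floored
    at zero once d reaches 1/m. An illustrative assumption."""
    return max(0.0, 1.0 - m * d)

def jeffrey_update(prior, full_trust_posterior, weight):
    """Jeffrey-style update: mix the posterior the agent would adopt
    under full trust with the prior, in proportion to the weight."""
    return weight * full_trust_posterior + (1 - weight) * prior
```

On this sketch, agents at distance d ≥ 1/m ignore each other entirely, which is what makes stable polarization possible for large m.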

  19.

    Observe that this means that all agents privilege their own evidence, necessarily treating it as certain.

  20.

    To be explicit: The distance between beliefs, d, is the Euclidean distance between them, \(\sqrt{(P_1(B_1)-P_2(B_1))^2 + (P_1(B_2)-P_2(B_2))^2}\), where indices on P reflect the two different agents.
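The distance defined in this note can be computed directly; a minimal sketch (the function name is ours):

```python
import math

def belief_distance(p1, p2):
    """Euclidean distance between two agents' credence vectors,
    one credence per proposition (here each vector has length two)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))
```

Since each credence lies in [0, 1], the maximum possible distance on two problems is \(\sqrt{2}\), which is the source of the \(1/\sqrt{2}\) threshold discussed in Note 22.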

  21.

    Or, to be more precise, approximately stable; recall the considerations in Note 17.

  22.

    Note that if m is such that agents always update on the evidence from all others, at least a little bit, transient polarization is possible, but agents eventually reach consensus. Whenever \(m \le 1/\sqrt{2}\) in our two-problem models, all agents will have some influence on all others.

  23.

    To determine these baseline cases, we ran simulations with identical parameters, but in which the model dynamics was altered so that the agents treated each of the problems separately, and then determined the average absolute value of r in these simulations.
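The baseline statistic described here, an average of the absolute Pearson correlation between final beliefs on the two problems, can be computed as in the following sketch; the function names are ours.

```python
import math
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between the agents' final credences on
    problem 1 (xs) and problem 2 (ys)."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def mean_abs_r(runs):
    """Average of |r| across simulation runs, each run supplying the
    two lists of final credences."""
    return statistics.fmean(abs(pearson_r(xs, ys)) for xs, ys in runs)
```

Taking the absolute value before averaging matters: runs that correlate in opposite directions would otherwise cancel, understating how factionalized individual runs are.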

  24.

    In general, we randomly choose parameter values to illustrate each point we make; all of the trends we consider are stable across parameter choices.

  25.

    Notice that because levels of polarization vary across parameter values, the results in this figure are averaged over different numbers of simulations for each data point.

  26.

    We remark that comparing values of m between the baseline and the full model is somewhat subtle, because the minimum value of m at which agents can possibly polarize differs in the two models. We have not attempted to correct m in these figures; one can imagine, if one likes, translating the dotted line to the left so that the first values at which non-zero correlation occurs coincide.

  27.

    Modulo one exception, discussed below.

  28.

    There is one exception, which occurs when m goes from 0 to \(1/\sqrt{2}\). The average percentage of true beliefs increases slightly. This seems to be because when \(m=1/\sqrt{2}\), all actors will eventually reach consensus, but \(m\ne 0\) slows learning. As Zollman (2007, 2010) has shown, there is sometimes a benefit in this sort of epistemic network model to slow learning processes where actors do not too quickly lock into possibly false beliefs. This likely explains the small increase in true beliefs at this value.


  1. Adorno, T. W., Frenkel-Brunswik, E., Levinson, D. J., & Sanford, R. N. (1950). The authoritarian personality. London: Verso Books.

  2. Axelrod, R. (1997). The dissemination of culture: A model with local convergence and global polarization. Journal of Conflict Resolution, 41(2), 203–226.

  3. Bala, V., & Goyal, S. (1998). Learning from neighbors. Review of Economic Studies, 65(3), 595–621.

  4. Benegal, S. D. (2018). The spillover of race and racial attitudes into public opinion about climate change. Environmental Politics, 27, 1–24.

  5. Benoît, J.-P., & Dubra, J. (2014). A theory of rational attitude polarization. https://doi.org/10.2139/ssrn.2529494.

  6. Borg, A., Frey, D., Šešelja, D., & Straßer, C. (2017). Examining network effects in an argumentative agent-based model of scientific inquiry. In International workshop on logic, rationality and interaction (pp. 391–406). Springer.

  7. Bramson, A., Grim, P., Singer, D. J., Berger, W. J., Sack, G., Fisher, S., et al. (2017). Understanding polarization: Meanings, measures, and model evaluation. Philosophy of Science, 84(1), 115–159.

  8. Centola, D., Gonzalez-Avella, J. C., Eguiluz, V. M., & San Miguel, M. (2007). Homophily, cultural drift, and the co-evolution of cultural groups. Journal of Conflict Resolution, 51(6), 905–929.

  9. Cook, J., & Lewandowsky, S. (2016). Rational irrationality: Modeling climate change belief polarization using Bayesian networks. Topics in Cognitive Science, 8(1), 160–179.

  10. Deffuant, G. (2006). Comparing extremism propagation patterns in continuous opinion models. Journal of Artificial Societies and Social Simulation, 9(3).

  11. Deffuant, G., Amblard, F., Weisbuch, G., & Faure, T. (2002). How can extremism prevail? A study based on the relative agreement interaction model. Journal of Artificial Societies and Social Simulation, 5(4), 1.

  12. DellaPosta, D., Shi, Y., & Macy, M. (2015). Why do liberals drink lattes? American Journal of Sociology, 120(5), 1473–1511.

  13. Dixit, A. K., & Weibull, J. W. (2007). Political polarization. Proceedings of the National Academy of Sciences, 104(18), 7351–7356.

  14. Festinger, L., Schachter, S., & Back, K. (1950). Social pressures in informal groups: A study of human factors in housing. New York: Harper.

  15. Frey, D., & Šešelja, D. (2017a). Robustness and idealizations in agent-based models of scientific interaction.

  16. Frey, D., & Šešelja, D. (2017b). What is the function of highly idealized agent-based models of scientific inquiry?

  17. Friedkin, N. E., Proskurnikov, A. V., Tempo, R., & Parsegov, S. E. (2016). Network science on belief system dynamics under logic constraints. Science, 354(6310), 321–326.

  18. Goldman, A. I. (2001). Experts: Which ones should you trust? Philosophy and Phenomenological Research, 63(1), 85–110.

  19. Hegselmann, R., Krause, U., et al. (2002). Opinion dynamics and bounded confidence models, analysis, and simulation. Journal of Artificial Societies and Social Simulation, 5(3).

  20. Holman, B., & Bruner, J. (2017). Experimentation by industrial selection. Philosophy of Science, 84(5), 1008–1019.

  21. Holman, B., & Bruner, J. P. (2015). The problem of intransigently biased agents. Philosophy of Science, 82(5), 956–968.

  22. Jeffrey, R. C. (1990). The logic of decision (2nd ed.). Chicago, IL: University of Chicago Press.

  23. Jern, A., Chang, K.-M. K., & Kemp, C. (2014). Belief polarization is not always irrational. Psychological Review, 121(2), 206.

  24. Jost, J. T., Glaser, J., Kruglanski, A. W., & Sulloway, F. J. (2003). Political conservatism as motivated social cognition. Psychological Bulletin, 129(3), 339.

  25. Kahan, D. M. (2014). Vaccine risk perceptions and ad hoc risk communication: An empirical assessment. CCP risk perception studies (report no. 17), available at SSRN https://ssrn.com/abstract=2386034.

  26. Kitcher, P. (1995). The advancement of science: Science without legend, objectivity without illusions. Oxford: Oxford University Press.

  27. Klemm, K., Eguíluz, V. M., Toral, R., & San Miguel, M. (2003). Global culture: A noise-induced transition in finite systems. Physical Review E, 67(4), 045101.

  28. Kummerfeld, E., & Zollman, K. J. (2015). Conservatism and the scientific state of nature. The British Journal for the Philosophy of Science, 67(4), 1057–1076.

  29. Lakoff, G. (2010). Moral politics: How liberals and conservatives think. Chicago: University of Chicago Press.

  30. Marks, J., Copland, E., Loh, E., Sunstein, C. R., & Sharot, T. (2019). Epistemic spillovers: Learning others’ political views reduces the ability to assess and use their expertise in nonpolitical domains. Cognition, 188, 74–84.

  31. Mayo-Wilson, C., Zollman, K. J., & Danks, D. (2011). The independence thesis: When individual and social epistemology diverge. Philosophy of Science, 78(4), 653–677.

  32. Newport, F., & Dugan, A. (2015). College-educated republicans most skeptical of global warming. https://news.gallup.com/poll/182159/college-educated-republicans-skeptical-global-warming.aspx. Retrieved December 13, 2018.

  33. O’Connor, C., & Weatherall, J. O. (2018). Scientific polarization. European Journal for Philosophy of Science, 8(3), 855–875.

  34. O’Connor, C., & Weatherall, J. O. (2019). The misinformation age: How false beliefs spread. New Haven: Yale University Press.

  35. Olsson, E. J. (2013). A Bayesian simulation model of group deliberation and polarization. In F. Zenker (Ed.), Bayesian argumentation (pp. 113–133). New York: Springer.

  36. Project, T. M. I. (2017). Who shared it? How Americans decide what news to trust on social media. Technical report, American Press Institute, and NORC at the University of Chicago, and The Associated Press.

  37. Rogers, E. M. (1983). Diffusion of innovations. New York: Simon and Schuster.

  38. Rosenstock, S., Bruner, J., & O’Connor, C. (2017). In epistemic networks, is less really more? Philosophy of Science, 84(2), 234–252.

  39. Shibanai, Y., Yasuno, S., & Ishiguro, I. (2001). Effects of global information feedback on diversity: Extensions to Axelrod’s adaptive culture model. Journal of Conflict Resolution, 45(1), 80–96.

  40. Singer, D. J., Bramson, A., Grim, P., Holman, B., Jung, J., Kovaka, K., et al. (2018). Rational social and political polarization. Philosophical Studies (forthcoming).

  41. Weatherall, J. O., & O’Connor, C. (2017). Do as I say, not as I do, or, conformity in scientific networks. arXiv:1803.09905 [physics.soc-ph].

  42. Weatherall, J. O., O’Connor, C., & Bruner, J. (2018). How to beat science and influence people. British Journal for the Philosophy of Science. https://doi.org/10.1093/bjps/axy062.

  43. Zollman, K. J. (2007). The communication structure of epistemic communities. Philosophy of Science, 74(5), 574–587.

  44. Zollman, K. J. (2010). The epistemic benefit of transient diversity. Erkenntnis, 72(1), 17.

  45. Zollman, K. J. (2013). Network epistemology: Communication in epistemic communities. Philosophy Compass, 8(1), 15–27.



This work is based on research supported by the National Science Foundation under Grant #1535139 (O’Connor). Many thanks to anonymous referees, and to the audience at the Agent-Based Models in Philosophy conference, for feedback.

Author information



Corresponding author

Correspondence to James Owen Weatherall.



Cite this article

Weatherall, J.O., O’Connor, C. Endogenous epistemic factionalization. Synthese (2020). https://doi.org/10.1007/s11229-020-02675-3



  • Polarization
  • Factionalization
  • Network epistemology
  • Correlated belief