Scientific polarization

Abstract

Contemporary societies are often “polarized”, in the sense that sub-groups within these societies hold stably opposing beliefs, even when there is a fact of the matter. Extant models of polarization do not capture the idea that some beliefs are true and others false. Here we present a model, based on the network epistemology framework of Bala and Goyal (Learning from neighbors, Rev. Econ. Stud. 65(3), 595–621, 1998), in which polarization emerges even though agents gather evidence about their beliefs and true belief yields a pay-off advantage. As we discuss, these results are especially relevant to polarization in scientific communities. The key mechanism that generates polarization involves treating evidence generated by other agents as uncertain when their beliefs are relatively different from one’s own.



Notes

  1.

    Some authors use the term “polarization”, or more specifically, “belief” or “attitude polarization”, to refer to the more limited phenomenon in which two individuals with opposing credences both strengthen their beliefs in light of identical evidence. Other authors, particularly in psychology, use “group polarization” to refer to situations in which discussion among like-minded individuals strengthens individual beliefs beyond what anyone in the group started with. As noted, we are using the term “polarization” in a sense common in political discourse, to describe situations in which beliefs or opinions of a group fail to converge towards a consensus, or else actually diverge, over time. Bramson et al. (2017) differentiate between ways one might define or measure polarization in this more general sense.

  2.

    We survey this literature in Section 3; see Bramson et al. (2017) for a review.

  3.

    As we discuss below, there are some exceptions to this generalization—most notably, in work by Olsson (2013)—but the model we present here is substantially different and, we believe, more perspicuous.

  4.

    Psychologists often appeal to motivated reasoning in explaining polarization. For example, humans tend to engage in confirmation bias, which involves seeking out and assimilating new information supporting their already deeply held beliefs (Lord et al. 1979). But that is not what is going on here: agents do not selectively update on evidence that is probable given their current beliefs; rather, they discount evidence on the basis of their judgments about its source, irrespective of what the evidence tends to support. We take this to be more epistemically justifiable—which makes the appearance of polarization in the presence of this heuristic more surprising.

  5.

    This history was reported in the New York Times article “Stalking Dr. Steere Over Lyme Disease”, published June 17, 2001.

  6.

    It is, of course, possible that there are some Lyme researchers influenced by industry funding, or who are trying to bilk patients. This does not seem to be the case for most of the physicians involved.

  7.

    For example, Festinger et al. (1950), in a classic study of MIT students, showed how location around housing courts, and thus social interaction, importantly shaped opinions: students tended to adopt the beliefs of their neighbors.

  8.

    Again, for a philosophically sensitive review of models of polarization see Bramson et al. (2017).

  9.

    In work that predates this, Axelrod (1997) provides a model in which cultures are represented by variants (lists of numbers) and the similarity of these variants determines how likely agents are to adopt variants from their neighbors in a grid. In this way ‘cultural similarity’ determines cultural influence. As he shows, stably different cultures, which we might think of as polarized in some sense, can co-exist if they have no overlap and thus do not influence each other at all.

  10.

    They label the outcome where just two subgroups with divergent opinions emerge as ‘polarization’.

  11.

    In this tradition, see also Deffuant et al. (2002) and Deffuant (2006).

  12.

    See also Galam and Moscovici (1991), Galam (2010), Galam (2011), Nowak et al. (1990), Mäs and Flache (2013), and La Rocca et al. (2014). In addition, a number of modelers have shown how belief polarization—updating in different directions for the same evidence—can be rational. This can occur under the right conditions for agents with different priors or with different background beliefs (Dixit and Weibull 2007; Jern et al. 2014; Benoît and Dubra 2014).

  13.

    For more work in this framework see Kurz and Rambau (2011) and Liu et al. (2014). One key difference between these models and ours is that their agents can never come to disregard, or give up on, a possibly true theory.

  14.

    This framework is first presented in Angere (2010).

  15.

    In addition, the way their agents gather evidence arguably mimics many cases of scientific practice less closely, as they receive private signals from a distribution rather than sampling data points.

  16.

    This framework has been used in philosophy of science by Zollman (2010), Mayo-Wilson et al. (2011), Kummerfeld and Zollman (2015), Holman and Bruner (2015), Rosenstock et al. (2017), Weatherall et al. (2018), O’Connor and Weatherall (2017, 2018). Zollman (2013) provides a review of the literature up to 2013.

  17.

    The version of the model we consider here follows Zollman (2007) very closely because, unlike later versions considered by Zollman (2010) and others, the beliefs of the agents in the 2007 model are captured by a single number. This made representing distance between agents’ beliefs in our model much more tractable.

  18.

    Note that this parameter was added to the model by Zollman (2007) and does not appear in the work of Bala and Goyal (1998).

  19.

    Notice that this framework can model situations outside the realm of science. The main features of interest are agents who choose between two actions, have beliefs about the efficacy of these actions, and share evidence relevant to these beliefs. We take these to be key features of scientific communities, but also some other sorts of everyday communities where individuals share evidence relevant to belief.

  20.

    See Jeffrey (1990, Ch. 11).
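    For reference, the rule at issue is Jeffrey conditionalization, which generalizes strict conditionalization to cases where the evidence proposition E is itself uncertain. If an agent’s confidence in E shifts to \(P_f(E)\) rather than to certainty, her new credence in a hypothesis H is

    \(P_f(H) = P_i(H \mid E)\,P_f(E) + P_i(H \mid {\sim} E)\,P_f({\sim} E),\)

    which reduces to strict conditionalization when \(P_f(E) = 1\).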

  21.

    Observe that the values of m that make sense to consider vary between these functions and the linear functions we focus on; for instance, for the logistic function, we studied m = 5, 7.5, 10, 12.5, 15, 17.5, 20, and for the exponential we looked at m = 1, 2, 3, 4, 5, 6, 7.

  22.

    So, for instance, if one considers a function of the form \(P_{f}(E)(d) = (1-P_{i}(E))/(1+\exp (m(d-1/2)))+P_{i}(E)\), which is bounded from below by \(P_i(E)\) and never achieves this value on d ∈ [0, 1], polarization is not stable. However, this sort of exponential drop-off in influence as a function of d dramatically increases convergence times, and so we find that polarization may still be effectively stable, a result that amplifies the arguments we give below.
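    The bounded-below logistic variant described in this note can be sketched as follows; the particular values probed here are illustrative, and this is a sketch of the note’s formula only, not of the paper’s Eqs. (1) and (2):

```python
import math

def logistic_trust(p_i, d, m):
    """P_f(E)(d) = (1 - P_i(E)) / (1 + exp(m * (d - 1/2))) + P_i(E).

    p_i: the agent's prior in E; d: distance in belief from the
    reporting agent; m: how steeply trust drops off with distance.
    """
    return (1 - p_i) / (1 + math.exp(m * (d - 0.5))) + p_i

# At d = 0 trust is near certainty; as d -> 1 it approaches, but never
# reaches, the prior p_i -- which is why polarization is not strictly
# stable under this function, only effectively so.
close = logistic_trust(0.5, 0.0, 10)  # ~0.997
far = logistic_trust(0.5, 1.0, 10)    # ~0.503, still strictly above 0.5
```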

  23.

    There is another way of doing all of this, which is to suppose some probability distribution that describes Jill’s credences about Ian’s dispositions to share E given that E did and did not obtain, given Jill’s own prior Pi(E) and d, and then have her use Bayes’ rule to find her posterior Pf(E), given that Ian reports E. But observe that doing this in detail would require an enormous number of modeling choices that would also be largely arbitrary, and at the end of the day, one would find a formula with the salient features of (1) and (2) (i.e., a monotonically decreasing function of d whose range lies in the relevant interval). Moreover, one can always use Bayes’ rule to work backward from Eqs. (1) or (2) to a relationship between the conditional probabilities P(Ian shared E|E) and \(P(\text {Ian shared E}|\sim E)\) that must hold if we assume that Jill had such credences and that she arrives at Pf(E) via strict conditionalization. And so these formulae can themselves be interpreted as reflecting precisely the results of this procedure for (families of) distributions that might represent Jill’s beliefs about Ian’s dispositions.

  24.

    Increasing m beyond this range had little effect on the results, since trust already drops off steeply when m = 3.

  25.

    Note that this means that distance in belief may be reconceptualized as a weight on each edge of the network, so that initially there are random weights assigned, and then over time the network evolves so that some connections become stronger and others weaker. In this sense, the model can be conceptualized as a dynamic network.
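    A minimal sketch of this reconceptualization, where the linear weight function and the value m = 2 are illustrative assumptions rather than the model’s exact form:

```python
def edge_weights(credences, m=2.0):
    """Weight on directed edge (i, j): how heavily agent i counts agent j's
    evidence, dropping off linearly in belief distance (an assumed form)."""
    n = len(credences)
    return {(i, j): max(1.0 - m * abs(credences[i] - credences[j]), 0.0)
            for i in range(n) for j in range(n) if i != j}

# Two like-minded agents and one distant agent: the distant agent's edges
# carry zero weight. Recomputing the weights as beliefs evolve is what
# makes the (fixed) network behave like a dynamic one.
weights = edge_weights([0.1, 0.2, 0.9])
```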

  26.

    Notice that this operationalization of polarization means that simulations where one individual holds a stable minority opinion still count as polarization. One might object that true cases of polarization will involve more evenly sized subgroups. For practical reasons, we prefer not to choose an arbitrary cut-off for what proportion of a population must hold each opinion in order to count as truly polarized. Bramson et al. (2017) discuss subtleties of how groups can polarize.

  27.

    The probability of correct versus incorrect convergence varies based on parameter values. See Zollman (2007), Zollman (2010), and Rosenstock et al. (2017) for more.

  28.

    The values in this example were calculated assuming that pB = .6 and n = 10, and that the .6 agent sees 7 successes in their test.
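    The arithmetic behind this note can be reconstructed as a strict Bayesian update on binomial evidence. This is a hedged sketch: we assume for illustration that the rival hypothesis has success rate 1 − pB = .4; the exact alternative rate is our assumption, not a detail stated in the note.

```python
from math import comb

def posterior(prior, successes, n, p_good=0.6, p_bad=0.4):
    """Bayes' rule on k successes in n binomial trials, for the hypothesis
    that the action succeeds with rate p_good rather than p_bad."""
    like_good = comb(n, successes) * p_good**successes * (1 - p_good)**(n - successes)
    like_bad = comb(n, successes) * p_bad**successes * (1 - p_bad)**(n - successes)
    return prior * like_good / (prior * like_good + (1 - prior) * like_bad)

# An agent with credence .6 who sees 7 successes in 10 trials:
p = posterior(0.6, 7, 10)  # ~0.88, so the evidence strengthens her belief
```

Under these assumed rates, 5 successes in 10 trials is equally likely on both hypotheses, so it leaves the credence unchanged.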

  29.

    The significance of the difference between the anti-updating case and the ignoring case varies across parameter values. In a few cases the community did slightly better on average in the anti-updating case, usually for small communities where results were more stochastic.

  30.

    As Mayo-Wilson et al. (2011) prove using network epistemology models similar to the ones we employ here, there are rules for exploration in such models that are ideal for the individual, but not the group, and vice versa. Other formal work in social epistemology focuses on this idea as well. Both Kitcher (1990) and Strevens (2003), for example, explore how to generate an ideal division of cognitive labor in science despite the individual rationality of always working on the most promising theory.

  31.

    Notice that we do not discuss here potential benefits from transient polarization. For example, Zollman (2010) argues for the importance of transient diversity of opinions in epistemic groups. (Without this diversity, there is less chance that scientists spend enough time testing every plausible theory to see which is best.) Since polarization ensures an extended diversity of beliefs, it may increase the chances that the scientific communities as a whole gathers good evidence about all plausible theories. Likewise, in Zollman (2010), a community can benefit from the presence of individuals with strong priors, who keep exploring a theory even when it looks unpromising. The problem, in his models and in ours, is individuals who are too stubborn, or who never update in light of untrusted evidence. We also do not discuss potential benefits of political polarization identified by political scientists, such as a more robust, argumentative discourse. (See Abramowitz 2010 for a discussion.)

  32.

    This history is drawn from Oreskes and Conway (2010), who document in great detail the work done by big tobacco to obscure the emerging consensus over the health dangers of smoking. See also Holman and Bruner (2015), O’Connor and Weatherall (2018), and Weatherall et al. (2018).

References

  1. Abramowitz, A. (2010). The disappearing center: Engaged citizens, polarization, and American democracy. New Haven: Yale University Press.

  2. Angere, S. (2010). Knowledge in a social network. Synthese, 167–203.

  3. Axelrod, R. (1997). The dissemination of culture: a model with local convergence and global polarization. Journal of Conflict Resolution, 41(2), 203–226.

  4. Bala, V., & Goyal, S. (1998). Learning from neighbors. Review of Economic Studies, 65(3), 595–621.

  5. Baldassarri, D., & Bearman, P. (2007). Dynamics of political polarization. American Sociological Review, 72(5), 784–811.

  6. Barrett, J.A., Mohseni, A., Skyrms, B. (2017). Self-assembling networks. The British Journal for the Philosophy of Science (forthcoming).

  7. Benoît, J.-P., & Dubra, J. (2014). A theory of rational attitude polarization.

  8. Bramson, A., Grim, P., Singer, D.J., Berger, W.J., Sack, G., Fisher, S., Flocken, C., Holman, B. (2017). Understanding polarization: meanings, measures, and model evaluation. Philosophy of Science, 84(1), 115–159.

  9. Burgdorfer, W., Barbour, A.G., Hayes, S.F., Benach, J.L., Grunwaldt, E., Davis, J.P. (1982). Lyme disease-a tick-borne spirochetosis? Science, 216(4552), 1317–1319.

  10. Cook, J., & Lewandowsky, S. (2016). Rational irrationality: Modeling climate change belief polarization using Bayesian networks. Topics in Cognitive Science, 8(1), 160–179.

  11. Deffuant, G. (2006). Comparing extremism propagation patterns in continuous opinion models. Journal of Artificial Societies and Social Simulation, 9(3).

  12. Deffuant, G., Amblard, F., Weisbuch, G., Faure, T. (2002). How can extremism prevail? A study based on the relative agreement interaction model. Journal of Artificial Societies and Social Simulation, 5(4).

  13. Dixit, A.K., & Weibull, J.W. (2007). Political polarization. Proceedings of the National Academy of Sciences, 104(18), 7351–7356.

  14. Embers, M.E., Barthold, S.W., Borda, J.T., Bowers, L., Doyle, L., Hodzic, E., Jacobs, M.B., Hasenkampf, N.R., Martin, D.S., Narasimhan, S., et al. (2012). Persistence of Borrelia burgdorferi in rhesus macaques following antibiotic treatment of disseminated infection. PloS One, 7(1), e29914.

  15. Festinger, L., Schachter, S., Back, K. (1950). Social pressures in informal groups: A study of human factors in housing.

  16. Galam, S. (2010). Public debates driven by incomplete scientific data: the cases of evolution theory, global warming and H1N1 pandemic influenza. Physica A: Statistical Mechanics and its Applications, 389(17), 3619–3631.

  17. Galam, S. (2011). Collective beliefs versus individual inflexibility: The unavoidable biases of a public debate. Physica A: Statistical Mechanics and its Applications, 390(17), 3036–3054.

  18. Galam, S., & Moscovici, S. (1991). Towards a theory of collective phenomena: consensus and attitude changes in groups. European Journal of Social Psychology, 21(1), 49–74.

  19. Hegselmann, R., & Krause, U. (2002). Opinion dynamics and bounded confidence: models, analysis, and simulation. Journal of Artificial Societies and Social Simulation, 5(3).

  20. Hegselmann, R., & Krause, U. (2006). Truth and cognitive division of labor: First steps towards a computer aided social epistemology. Journal of Artificial Societies and Social Simulation, 9(3), 10.

  21. Holman, B., & Bruner, J.P. (2015). The problem of intransigently biased agents. Philosophy of Science, 82(5), 956–968.

  22. Jeffrey, R.C. (1990). The logic of decision, 2nd edn.

  23. Jern, A., Chang, K.-M.K., Kemp, C. (2014). Belief polarization is not always irrational. Psychological Review, 121(2), 206.

  24. Kitcher, P. (1990). The division of cognitive labor. The Journal of Philosophy, 87(1), 5–22.

  25. Klempner, M.S., Linden, T.H., Evans, J., Schmid, C.H., Johnson, G.M., Trevino, R.P., Norton, D., Levy, L., Wall, D., McCall, J., et al. (2001). Two controlled trials of antibiotic treatment in patients with persistent symptoms and a history of Lyme disease. New England Journal of Medicine, 345(2), 85–92.

  26. Kummerfeld, E., & Zollman, K.J.S. (2015). Conservatism and the scientific state of nature. The British Journal for the Philosophy of Science, 67(4), 1057–1076.

  27. Kurz, S., & Rambau, J. (2011). On the Hegselmann–Krause conjecture in opinion dynamics. Journal of Difference Equations and Applications, 17(6), 859–876.

  28. La Rocca, C.E., Braunstein, L.A., Vazquez, F. (2014). The influence of persuasion in opinion formation and polarization. EPL (Europhysics Letters), 106(4), 40004.

  29. Liu, Q., Zhao, J., Wang, L., Wang, X. (2014). A multi-agent model of opinion formation with truth seeking and endogenous leaders. IFAC Proceedings Volumes, 47(3), 11709–11714.

  30. Lord, C.G., Ross, L., Lepper, M.R. (1979). Biased assimilation and attitude polarization: the effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098.

  31. Macy, M.W., Kitts, J.A., Flache, A., Benard, S. (2003). Polarization in dynamic networks: a Hopfield model of emergent structure. In R. Brieger, K. Carley, P. Pattison (Eds.) Dynamic social network modeling and analysis (pp. 162–173). Washington, DC: National Academic Press.

  32. Mäs, M., & Flache, A. (2013). Differentiation without distancing. Explaining bi-polarization of opinions without negative influence. PloS One, 8(11), e74516.

  33. Mayo-Wilson, C., Zollman, K.J.S., Danks, D. (2011). The independence thesis: when individual and social epistemology diverge. Philosophy of Science, 78(4), 653–677.

  34. McCright, A.M., & Dunlap, R.E. (2011). The politicization of climate change and polarization in the American public’s views of global warming, 2001–2010. The Sociological Quarterly, 52(2), 155–194.

  35. Nowak, A., Szamrej, J., Latané, B. (1990). From private attitude to public opinion: a dynamic theory of social impact. Psychological Review, 97(3), 362.

  36. O’Connor, C., & Weatherall, J.O. (2017). Do as I say, not as I do, or, Conformity in scientific networks. https://arxiv.org/abs/1803.09905.

  37. O’Connor, C., & Weatherall, J.O. (2018). The misinformation age: how false beliefs spread. New Haven: Yale University Press. In press.

  38. Olsson, E.J. (2013). A Bayesian simulation model of group deliberation and polarization. In Bayesian argumentation (pp. 113–133). Springer.

  39. Oreskes, N. (2004). The scientific consensus on climate change. Science, 306(5702), 1686.

  40. Oreskes, N., & Conway, E. (2010). Merchants of doubt. New York: Bloomsbury Press.

  41. Rosenstock, S., Bruner, J., O’Connor, C. (2017). In epistemic networks, is less really more? Philosophy of Science, 84(2), 234–252.

  42. Singer, D.J., Bramson, A., Grim, P., Holman, B., Jung, J., Kovaka, K., Ranginani, A., Berger, W. (2017). Rational social and political polarization.

  43. Steere, A.C., Coburn, J., Glickstein, L. (2004). The emergence of Lyme disease. Journal of Clinical Investigation, 113(8), 1093.

  44. Steere, A.C., Malawista, S.E., Snydman, D.R., Shope, R.E., Andiman, W.A., Ross, M.R., Steele, F.M. (1977). An epidemic of oligoarticular arthritis in children and adults in three Connecticut communities. Arthritis & Rheumatology, 20(1), 7–17.

  45. Steere, A.C., Taylor, E., McHugh, G.L., Logigian, E.L. (1993). The overdiagnosis of Lyme disease. JAMA, 269(14), 1812–1816.

  46. Straubinger, R.K., Straubinger, A.F., Summers, B.A., Jacobson, R.H. (2000). Status of Borrelia burgdorferi infection after antibiotic treatment and the effects of corticosteroids: an experimental study. The Journal of Infectious Diseases, 181(3), 1069–1081.

  47. Strevens, M. (2003). The role of the priority rule in science. The Journal of Philosophy, 100(2), 55–79.

  48. Weatherall, J.O., O’Connor, C., Bruner, J. (2018). How to beat science and influence people. The British Journal for the Philosophy of Science. https://arxiv.org/abs/1801.01239.

  49. Zollman, K.J.S. (2007). The communication structure of epistemic communities. Philosophy of Science, 74(5), 574–587.

  50. Zollman, K.J.S. (2010). The epistemic benefit of transient diversity. Erkenntnis, 72(1), 17.

  51. Zollman, K.J.S. (2013). Network epistemology: communication in epistemic communities. Philosophy Compass, 8(1), 15–27.


Acknowledgments

Thanks to Justin P. Bruner, Calvin Cochran, and the School of Philosophy at Australian National University where most of the research for the paper was carried out. This material is based upon work supported by the National Science Foundation under grant no. STS-1535139.

Author information

Corresponding author

Correspondence to Cailin O’Connor.



Cite this article

O’Connor, C., Weatherall, J.O. Scientific polarization. Euro Jnl Phil Sci 8, 855–875 (2018). https://doi.org/10.1007/s13194-018-0213-9


Keywords

  • Polarization
  • Network
  • Network epistemology
  • Social epistemology
  • Agent based modeling
  • Theory change