Rational social and political polarization

Abstract

Public discussions of political and social issues are often characterized by deep and persistent polarization. In social psychology, it’s standard to treat belief polarization as the product of epistemic irrationality. In contrast, we argue that the persistent disagreement that grounds political and social polarization can be produced by epistemically rational agents, when those agents have limited cognitive resources. Using an agent-based model of group deliberation, we show that groups of deliberating agents using coherence-based strategies for managing their limited resources tend to polarize into different subgroups. We argue that using that strategy is epistemically rational for limited agents. So even though group polarization looks like it must be the product of human irrationality, polarization can be the result of fully rational deliberation with natural human limitations.
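As a rough illustration of the kind of model the paper describes, the sketch below (in Python) has agents hold a limited stock of weighted reasons for and against a single proposition, exchange reasons during deliberation, and manage their limited memory by forgetting the reason that coheres least with their current view. The setup, parameter values (such as the seven-reason memory limit discussed in the notes), and function names are illustrative assumptions, not the authors' implementation.

    # Minimal, illustrative sketch of a coherence-minded deliberation model.
    # Assumptions: weighted reasons for (+) and against (-) one proposition,
    # a 7-item memory limit, and forgetting the weakest reason opposing one's view.
    import random

    MEMORY_LIMIT = 7  # assumed capacity, echoing Miller's (1956) "seven"

    def view(reasons):
        """An agent's view is the sign of the summed weights of its remembered reasons."""
        total = sum(reasons)
        return (total > 0) - (total < 0)  # +1 for, -1 against, 0 undecided

    def coherence_minded_forget(reasons):
        """Forget the weakest reason opposing the current view; if no reason opposes
        it (or the agent is undecided), forget the weakest reason overall."""
        v = view(reasons)
        opposing = [r for r in reasons if v != 0 and r * v < 0]
        reasons.remove(min(opposing or reasons, key=abs))

    def deliberate(agents, steps=10_000, rng=random):
        """One exchange per step: a random speaker shares a random remembered reason
        with a random listener, who forgets one reason if over capacity."""
        for _ in range(steps):
            speaker, listener = rng.sample(range(len(agents)), 2)
            shared = rng.choice(agents[speaker])
            if shared not in agents[listener]:
                agents[listener].append(shared)
                if len(agents[listener]) > MEMORY_LIMIT:
                    coherence_minded_forget(agents[listener])

    # Toy setup: a symmetric pool of reasons and agents who start with a random handful.
    pool = [w * s for w in (0.2, 0.4, 0.6, 0.8, 1.0) for s in (1, -1)]
    agents = [random.sample(pool, MEMORY_LIMIT) for _ in range(20)]
    deliberate(agents)
    print([view(a) for a in agents])  # inspect whether the group has split into camps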

Change history

  • 11 July 2018

    In the original publication of the article, the Acknowledgement section was inadvertently not included. The Acknowledgement is given in this Correction.

Notes

  1. While Sunstein (1999, 2017) does think that groups can become more extreme in their beliefs via informational cascades and other mechanisms, none of those mechanisms is sufficient to break groups into polarized subgroups. Besides that, Sunstein (1999) offers no reason to think that polarization is epistemically rational, and his summary comments about polarization possibly producing “factual mistakes” suggest he believes it is not: “The problem [with group polarization] is … that people may be shifted, as a result of entirely rational processes, in the direction of factual … mistakes” (20).

  2. For ease of exposition, in some places, we’ll talk as though reasons support propositions or contents, rather than belief in those contents, but this is only a shorthand.

  3. We can think of these as mirroring something fixed in the world, like the time-indexed eternal facts.

  4. We assume that the weights of reasons do not vary across agents (either because they are perfectly shared or because the weight of a reason is a priori or a matter of logic, about which our agents are omniscient). Notice that this assumption only makes our case harder to show, since if agents could reasonably assign differing weights to the same reasons, it would be easier for them to reasonably disagree.

  5. Although we use these notions for quite different purposes, note the similarity of the ontology of our model to models in the hidden profile paradigm (Stasser 1988; Stasser and Birchmeier 2003; see Lu et al. 2012 for a survey).

  6. Previous work has studied similar agents with limited memories. Following Hellman and Cover (1970), it’s popular to model memory limitations as limitations on states of finite automata. Wilson (2014) analyses these limited automata and shows that they can polarize when the agents have differing priors. Also see Halpern and Pass (2010). These models are quite different from ours and are subject to a number of limitations discussed in Sect. 6.

  7. The reader should think of these as the agents’ reasons that bear on the relevant proposition, not all of the reasons they have. We use 7 as the limit following Miller (1956), though recently Cowan (2001) has argued that the number should be 4. See the discussion of the robustness of this result below.

  8. Our results are robust for various other distributions of reasons that have a similar qualitative characteristic. We don’t discuss distributions that put more reasons strongly on either side, since polarization would be less surprising in those cases.

  9. Of the 1000 runs done to test this, 32 didn’t converge within 100,000 steps, the limit we set in testing. These runs would have converged given more time, so the real averages are even higher.

  10. These numbers were assessed in different sets of 1000 runs. We stopped a run if convergence didn’t happen by 100,000 steps, since most converging runs did so very quickly (fewer than 350 steps for convergence on a view and fewer than 1500 steps for convergence on a set of reasons).

  11. We ran 1,000,000 runs to see if consensus is ever reached in this setup. Of those, less than 0.01% converged on a view and less than 0.005% converged on a set of reasons. When those cases did converge, it always happened in the first 9 steps of the model (many in the first or second step), which indicates that a rare combination of initial conditions and early steps is required.

  12. In fact, it would be possible for groups to polarize even if their memory were limited to only 1 fewer than the total number of reasons, but we’d expect this to occur quite infrequently.

  13. We point the reader to Vitz (n.d.) for background on the connection between doxastic voluntarism and evaluation.

  14. If the reader hasn’t already given up on the rationality of simple-mindedness, she is encouraged to notice that simple-mindedness is subject to this same kind of worry.

  15. See, for example, Murphy (2016) for a discussion of how foundationalists often appeal to notions of coherence. Cohen (1984, pp. 283–284) also argues that “justification” and “rationality” are synonymous as used by epistemologists. As such, coherentists’ claims about justification ought to translate to rationality as well.

  16. That said, groups of agents following that more extreme rule still polarize.

  17. One might think that if a reason’s strength is a measure of how misleading it is, agents should forget the strongest opposing reasons (not the weakest, as coherence-mindedness requires). But strength is a measure of how much the reason supports a view, and whether it is misleading is a question of whether it supports the truth. If one thinks that rationality requires that we drop the strongest opposing reason, that rule would still produce polarization, and so our primary conclusion still holds.

  18. We’re thankful to an anonymous reviewer for bringing this case to our attention.

  19. In our actual tests, we assumed that an agent had a strong enough belief to be coherence-minded when the strength of their belief was at least a quarter of the total weighted strength of all of their reasons (see the illustrative sketch following these notes).

  20. In our tests, we implemented this by asking agents to treat reasons in favor of their view as twice as important as reasons against and then to forget the weakest reason in that new ranking. So a reason of weight 1 in favor of their view would be saved over a reason of weight 1.5 against, but not over a reason of weight 2.5 against (again, see the sketch following these notes).

  21. We take ourselves to be adding to the literature that emphasizes the importance of investigating non-ideal agents in epistemology, political philosophy, game theory, economics, and related fields. Other prominent voices in that chorus include Simon (1957), Cherniak (1981), Kahneman and Tversky (1979), and Epstein (2006).
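
The two implementation details described in notes 19 and 20 can be illustrated with a short sketch. The function names, the reading of “a quarter of the weighted strengths” as a fraction of the total weighted strength, and the toy inputs are assumptions for illustration, not the authors’ tested code.

    # Illustrative renderings of the rules in notes 19 and 20 (assumed details).

    def is_coherence_minded(reasons, threshold=0.25):
        """Note 19: treat an agent as coherence-minded only when the strength of its
        belief (|sum of signed weights|) is at least a quarter of the total weighted
        strength of its reasons."""
        belief_strength = abs(sum(reasons))
        total_strength = sum(abs(r) for r in reasons)
        return total_strength > 0 and belief_strength >= threshold * total_strength

    def weighted_forget(reasons, view, pro_bonus=2.0):
        """Note 20: reasons supporting the agent's current view count double in the
        ranking; the reason weakest in that reweighted ranking is forgotten."""
        def rank(r):
            return abs(r) * (pro_bonus if r * view > 0 else 1.0)
        reasons.remove(min(reasons, key=rank))
        return reasons

    # Worked example from note 20, with the agent's view taken to be "for" (+1):
    print(weighted_forget([1.0, -1.5], view=+1))  # the weight-1.5 con reason is dropped
    print(weighted_forget([1.0, -2.5], view=+1))  # here the weight-1 pro reason is dropped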

References

  1. Abrams, D., Wetherell, M., Cochrane, S., Hogg, M. A., & Turner, J. C. (1990). Knowing what to think by knowing who you are: Self-categorization and the nature of norm formation, conformity and group polarization. British Journal of Social Psychology, 29(2), 97–119.

  2. Axelrod, R. (1997). The dissemination of culture: A model with local convergence and global polarization. Journal of Conflict Resolution, 41(2), 203–226.

  3. Benoît, J. P., & Dubra, J. (2014). A theory of rational attitude polarization. Available at SSRN 2529494.

  4. BonJour, L. (1980). Externalist theories of empirical knowledge. Midwest Studies in Philosophy, 5, 53–74.

  5. Bramson, A., Grim, P., Singer, D. J., Berger, W. J., Fisher, S., Sack, G., et al. (2017). Understanding polarization: Meanings, measures, and model evaluation. Philosophy of Science, 84, 115–159.

  6. Bramson, A., Grim, P., Singer, D. J., Fisher, S., Berger, W., Sack, G., et al. (2016). Disambiguation of social polarization concepts and measures. The Journal of Mathematical Sociology, 40(2), 80–111.

  7. Bruner, J., & Holman, B. (forthcoming). Complicating consensus. In L. Garbayo (Ed.), Expert disagreement and measurement: Philosophical disunity in logic, epistemology and philosophy of science. Dordrecht: Springer.

  8. Campbell, J. E. (2016). Polarized: Making sense of a divided America. Princeton: Princeton University Press.

  9. Cherniak, C. (1981). Minimal rationality. Mind, 90(358), 161–183.

  10. Cohen, S. (1984). Justification and truth. Philosophical Studies, 46(3), 279–295.

  11. Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24(1), 87–114.

  12. DiMaggio, P., Evans, J., & Bryson, B. (1996). Have Americans’ social attitudes become more polarized? American Journal of Sociology, 102, 690–755. https://doi.org/10.1086/230995.

  13. Epstein, J. M. (2006). Generative social science: Studies in agent-based computational modeling. Princeton: Princeton University Press.

  14. Fiorina, M. P., & Abrams, S. J. (2008). Political polarization in the American public. Annual Review of Political Science, 11, 563–588.

  15. Fiorina, M. P., Abrams, S. J., & Pope, J. (2010). Culture war? New York, NY: Pearson Longman.

  16. Foley, R. (1993). Working without a net: A study of egocentric epistemology. Oxford: Oxford University Press.

  17. Fryer, R. G., Jr., Harms, P., & Jackson, M. O. (2015). Updating beliefs when evidence is open to interpretation: Implications for bias and polarization. Working Paper. Retrieved from http://scholar.harvard.edu/fryer/publications/updating-beliefs-ambiguous-evidence-implications-polarization.

  18. Gaffney, A. M., Rast, D. E., III, Hackett, J. D., & Hogg, M. A. (2014). Further to the right: Uncertainty, political polarization and the American “Tea Party” movement. Social Influence, 9, 272–288.

  19. Greaves, H. (2013). Epistemic decision theory. Mind, 122(488), 915–952.

  20. Grim, P., Singer, D. J., Fisher, S., Bramson, A., Berger, W. J., Reade, C., et al. (2013). Scientific networks on data landscapes: Question difficulty, epistemic success, and convergence. Episteme, 10(4), 441–464.

  21. Großer, J., & Palfrey, T. R. (2013). Candidate entry and political polarization: An antimedian voter theorem. American Journal of Political Science, 58(1), 127–143.

  22. Gruzd, A., & Roy, J. (2014). Investigating political polarization on Twitter: A Canadian perspective. Policy and Internet, 6, 28–45.

  23. Gutmann, A., & Thompson, D. (1996). Democracy and disagreement. Cambridge: Harvard University Press.

  24. Halpern, J. Y., & Pass, R. (2010). I don’t want to think about it now: Decision theory with costly computation. In Twelfth international conference on the principles of knowledge representation and reasoning.

  25. Harman, G. (1973). Thought. Princeton: Princeton University Press.

  26. Hegselmann, R., & Krause, U. (2002). Opinion dynamics and bounded confidence: Models, analysis, and simulation. Journal of Artificial Societies and Social Simulation, 5(3). http://jasss.soc.surrey.ac.uk/5/3/2.html.

  27. Hegselmann, R., & Krause, U. (2005). Opinion dynamics driven by various ways of averaging. Computational Economics, 25(4), 381–405.

  28. Hegselmann, R., & Krause, U. (2006). Truth and cognitive division of labour: First steps towards a computer aided social epistemology. Journal of Artificial Societies and Social Simulation, 9(3), 10.

  29. Hellman, M. E., & Cover, T. M. (1970). Learning with finite memory. The Annals of Mathematical Statistics, 41(3), 765–782.

  30. Jern, A., Chang, K. K., & Kemp, C. (2014). Belief polarization is not always irrational. Psychological Review, 121(2), 206–224.

  31. Joyce, J. M. (1998). A nonpragmatic vindication of probabilism. Philosophy of Science, 65, 575–603.

  32. Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291.

  33. Kelly, T. (2008). Disagreement, dogmatism, and belief polarization. The Journal of Philosophy, 105(10), 611–633.

  34. Kitcher, P. (1990). The division of cognitive labor. The Journal of Philosophy, 87(1), 5–22.

  35. Knight, J., & Johnson, J. (2011). The priority of democracy: Political consequences of pragmatism. Princeton: Princeton University Press.

  36. Landemore, H. (2013). Democratic reason: Politics, collective intelligence, and the rule of the many. Princeton: Princeton University Press.

  37. Lehrer, K. (1990). Theory of knowledge. Boulder, CO: Westview.

  38. Liberman, A., & Chaiken, S. (1992). Defensive processing of personally relevant health messages. Personality and Social Psychology Bulletin, 18, 669–679.

  39. Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37, 2098–2109.

  40. Lu, L., Yuan, Y. C., & McLeod, P. L. (2012). Twenty-five years of hidden profiles in group decision making: A meta-analysis. Personality and Social Psychology Review, 16(1), 54–75.

  41. Lumet, S., & Rose, R. (1957). Twelve angry men. Los Angeles: Orion-Nova Twelve Angry Men.

  42. McCain, K. (2014). Evidentialism and epistemic justification. London: Routledge.

  43. McHoskey, J. W. (1995). Case closed? On the John F. Kennedy assassination: Biased assimilation of evidence and attitude polarization. Basic and Applied Social Psychology, 17, 395–409. https://doi.org/10.1207/s15324834basp1703_7.

  44. Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81.

  45. Munro, G. D., & Ditto, P. H. (1997). Biased assimilation, attitude polarization, and affect in reactions to stereotype-relevant scientific information. Personality and Social Psychology Bulletin, 23, 636–653.

  46. Murphy, P. (2016). Coherentism in epistemology. Internet Encyclopedia of Philosophy. http://www.iep.utm.edu/coherent/.

  47. Plous, S. (1991). Biases in the assimilation of technological breakdowns: Do accidents make us safer? Journal of Applied Social Psychology, 21, 1058–1082.

  48. Prior, M. (2013). Media and political polarization. Annual Review of Political Science, 16, 101–127.

  49. Ramsey, F. P. (1926). Truth and probability. In H. E. Kyburg & H. E. Smokler (Eds.), Studies in subjective probability. Huntington, NY: Robert E. Krieger Publishing Co.

  50. Ross, L., & Anderson, C. A. (1982). Shortcomings in the attribution process: On the origins and maintenance of erroneous social assessments. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 129–152). Cambridge: Cambridge University Press. https://doi.org/10.1017/cbo9780511809477.010.

  51. Schelling, T. C. (1969). Models of segregation. The American Economic Review, 59(2), 488–493.

  52. Schroeder, M. (2010). What makes reasons sufficient? Unpublished manuscript, University of Southern California.

  53. Schroeder, M. (2015). Knowledge is belief for sufficient (objective and subjective) reason. In T. S. Gendler & J. Hawthorne (Eds.), Oxford studies in epistemology (Vol. 5, pp. 226–252). Oxford: Oxford University Press.

  54. Sherman, D. K., Hogg, M. A., & Maitner, A. T. (2009). Perceived polarization: Reconciling ingroup and intergroup perceptions under uncertainty. Group Processes and Intergroup Relations, 12, 95–109.

  55. Simon, H. A. (1957). Models of man: Social and rational. New York: Wiley.

  56. Sosa, E. (1985). Knowledge and intellectual virtue. The Monist, 68, 224–245.

  57. Stasser, G. (1988). Computer simulation as a research tool: The DISCUSS model of group decision making. Journal of Experimental Social Psychology, 24, 393–422.

  58. Stasser, G., & Birchmeier, Z. (2003). Group creativity and collective choice. In P. B. Paulus & B. A. Nijstad (Eds.), Group creativity: Innovation through collaboration (pp. 85–109). New York: Oxford University Press.

  59. Strevens, M. (2003). The role of the priority rule in science. The Journal of Philosophy, 100(2), 55–79.

  60. Sunstein, C. R. (2002). The law of group polarization. Journal of Political Philosophy, 10, 175–195.

  61. Sunstein, C. R. (2007). Republic.com 2.0. Princeton: Princeton University Press.

  62. Sunstein, C. R. (2017). #Republic. Princeton: Princeton University Press.

  63. Sunstein, C. R. (1999). The law of group polarization. University of Chicago Law School, John M. Olin Law & Economics Working Paper No. 91. Available at SSRN: https://ssrn.com/abstract=199668.

  64. Taber, C. S., Cann, D., & Kucsova, S. (2009). The motivated processing of political arguments. Political Behavior, 31, 137–155. https://doi.org/10.1007/s11109-008-9075-8.

  65. Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50, 755–769.

  66. Talbott, W. (2016). Bayesian epistemology. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. https://plato.stanford.edu/archives/win2016/entries/epistemology-bayesian.

  67. Vitz, R. (n.d.). Doxastic voluntarism. The Internet Encyclopedia of Philosophy, ISSN 2161-0002. http://www.iep.utm.edu/doxa-vol/.

  68. Wilson, A. (2014). Bounded memory and biases in information processing. Econometrica, 82(6), 2257–2294.

  69. Zollman, K. (2007). The communication structure of epistemic communities. Philosophy of Science, 74(5), 574–587.

  70. Zollman, K. (2010). The epistemic benefit of transient diversity. Erkenntnis, 72(1), 17–35.

Author information

Corresponding author

Correspondence to Daniel J. Singer.

About this article

Cite this article

Singer, D.J., Bramson, A., Grim, P. et al. Rational social and political polarization. Philos Stud 176, 2243–2267 (2019). https://doi.org/10.1007/s11098-018-1124-5

Keywords

  • Polarization
  • Epistemic rationality
  • Group deliberation
  • Social epistemology