Public discussions of political and social issues are often characterized by deep and persistent polarization. In social psychology, it’s standard to treat belief polarization as the product of epistemic irrationality. In contrast, we argue that the persistent disagreement that grounds political and social polarization can be produced by epistemically rational agents, when those agents have limited cognitive resources. Using an agent-based model of group deliberation, we show that groups of deliberating agents using coherence-based strategies for managing their limited resources tend to polarize into different subgroups. We argue that using that strategy is epistemically rational for limited agents. So even though group polarization looks like it must be the product of human irrationality, polarization can be the result of fully rational deliberation with natural human limitations.
While Sunstein (1999, 2017) does think that groups can become more extreme in their beliefs via informational cascades and other mechanisms, none of those mechanisms is sufficient to break groups into polarized subgroups. Moreover, Sunstein (1999) offers no reason to think that polarization is epistemically rational, and his summary comments about polarization possibly producing “factual mistakes” suggest he believes it is not: “The problem [with group polarization] is … that people may be shifted, as a result of entirely rational processes, in the direction of factual … mistakes” (20).
For ease of exposition, in some places, we’ll talk as though reasons support propositions or contents, rather than belief in those contents, but this is only a shorthand.
We can think of these as mirroring something fixed in the world, like the time-indexed eternal facts.
We assume that the weights of reasons do not vary across agents (either because they are perfectly shared or because the weight of a reason is a priori or a matter of logic, about which our agents are omniscient). Notice that this assumption only makes our case harder to show, since if agents could reasonably assign differing weights to the same reasons, it would be easier for them to reasonably disagree.
Previous work has studied similar agents with limited memories. Following Hellman and Cover (1970), it’s popular to model memory limitations as limitations on states of finite automata. Wilson (2014) analyses these limited automata and shows that they can polarize when the agents have differing priors. Also see Halpern and Pass (2010). These models are quite different from ours and are subject to a number of limitations discussed in Sect. 6.
The reader should think of these as the agents’ reasons that bear on the relevant proposition, not all of the reasons they have. We use 7 as the limit following Miller (1956), though recently Cowan (2001) has argued that the number should be 4. See the discussion of the robustness of this result below.
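To make the memory limit concrete, here is a minimal sketch in Python of a memory-limited agent that forgets the weakest reason opposing its current view when it exceeds its capacity. This is our illustrative reading of the coherence-minded rule described in this paper, not the authors’ code; the names (`MEMORY_LIMIT`, `learn`, `current_view`) and the tuple encoding of reasons are our assumptions.

```python
# A reason is a (weight, direction) pair; direction is +1 (for the view)
# or -1 (against it). Illustrative sketch only, not the authors' model code.

MEMORY_LIMIT = 7  # following Miller (1956); Cowan (2001) would suggest 4


def current_view(reasons):
    """The agent's view follows the sign of the summed weighted reasons."""
    total = sum(w * d for w, d in reasons)
    return 1 if total >= 0 else -1


def learn(reasons, new_reason):
    """Add a reason; if over capacity, forget the weakest reason
    opposing the current view (coherence-mindedness)."""
    reasons = reasons + [new_reason]
    if len(reasons) > MEMORY_LIMIT:
        view = current_view(reasons)
        opposing = [r for r in reasons if r[1] != view]
        # If no reason opposes the view, fall back to the weakest overall.
        weakest = min(opposing or reasons, key=lambda r: r[0])
        reasons.remove(weakest)
    return reasons
```

On this sketch, an agent at capacity that learns a new reason keeps its memory at seven items by discarding the weakest dissonant reason, which is the mechanism that lets subgroups drift apart.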
Our results are robust for various other distributions of reasons that have a similar qualitative characteristic. We don’t discuss distributions that put more reasons strongly on either side, since polarization would be less surprising in those cases.
Of the 1000 runs done to test this, 32 didn’t converge within 100,000 steps, the limit we set in testing. These runs would have converged, given more time. So the real averages are even higher.
These numbers were assessed in different sets of 1000 runs. We stopped a run if convergence didn’t happen by 100,000 steps, since most converging runs did so very quickly (fewer than 350 steps for convergence on a view and fewer than 1500 steps for convergence on a set of reasons).
We ran 1,000,000 runs to see if consensus is ever reached in this setup. Of those, less than 0.01% converged on a view and less than 0.005% converged on a set of reasons. When those cases did converge, it always happened in the first 9 steps of the model (many in the first or second step), which indicates that a rare combination of initial conditions and early steps is required.
In fact, groups could polarize even if agents’ memories were limited to just one fewer than the total number of reasons, but we’d expect this to occur quite infrequently.
We point the reader to Vitz (n.d.) for background on the connection between doxastic voluntarism and evaluation.
If the reader hasn’t already given up on the rationality of simple-mindedness, she is encouraged to notice that simple-mindedness is subject to this same kind of worry.
See, for example, Murphy (2016) for a discussion of how foundationalists often appeal to notions of coherence. Cohen (1984, pp. 283–284) also argues that “justification” and “rationality” are synonymous as used by epistemologists. As such, coherentists’ claims about justification ought to translate to rationality as well.
That said, groups of agents following that more extreme rule still polarize.
One might think that if a reason’s strength is a measure of how misleading it is, agents should forget the strongest opposing reasons (not the weakest, as coherence-mindedness requires). But, strength is a measure of how much the reason supports a view, and whether it is misleading is a question of whether it supports the truth. If one thinks that rationality requires that we drop the strongest opposing reason, that rule would still produce polarization, and so our primary conclusion still holds.
We’re thankful to an anonymous reviewer for bringing this case to our attention.
In our actual tests, we assumed that an agent had a strong enough belief to be coherence-minded when the strength of their belief was at least a quarter of the total weighted strength of all of their reasons.
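Under our reading of that threshold test, the condition can be sketched as follows; the function name, the tuple encoding of reasons, and the comparison of net support against total weighted strength are our illustrative assumptions, not the authors’ implementation.

```python
def strongly_believes(reasons, threshold=0.25):
    """Sketch of the coherence-mindedness trigger: the belief counts as
    strong enough when |net support| is at least a quarter of the total
    weighted strength of all the agent's reasons. Illustrative only."""
    net = abs(sum(w * d for w, d in reasons))      # net support for the view
    total = sum(w for w, _ in reasons)             # total weighted strength
    return total > 0 and net >= threshold * total
```

For instance, on this sketch an agent with a reason of weight 2 in favor and one of weight 1 against has net support 1 against a total strength of 3, clearing the one-quarter threshold, while perfectly balanced reasons do not.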
In our tests, we implemented this by asking agents to treat reasons in favor of their view as twice as important as reasons against, and then asked the agents to forget the weakest reason in that new ranking. So a reason of weight 1 in favor of their view would be saved over a reason of weight 1.5 against, but not over a reason of weight 2.5 against.
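This biased ranking can be sketched as follows; `forget_with_bias` and the `bias` parameter are our illustrative names for the doubling rule described here, not the authors’ code.

```python
def forget_with_bias(reasons, view, bias=2.0):
    """Sketch of the biased forgetting rule: rank each (weight, direction)
    reason by its biased weight (in-favor reasons count double), then
    forget the lowest-ranked reason. Illustrative only."""
    def ranked_weight(reason):
        w, d = reason
        return w * bias if d == view else w

    weakest = min(reasons, key=ranked_weight)
    out = list(reasons)
    out.remove(weakest)
    return out
```

This reproduces the footnote’s example: a weight-1 reason in favor (ranked at 2) survives against a weight-1.5 reason against, but is forgotten when paired with a weight-2.5 reason against.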
We take ourselves to be adding to the literature that emphasizes the importance of investigating non-ideal agents in epistemology, political philosophy, game theory, economics, and related fields. Other prominent voices in that chorus include Simon (1957), Cherniak (1981), Kahneman and Tversky (1979), and Epstein (2006).
Abrams, D., Wetherell, M., Cochrane, S., Hogg, M. A., & Turner, J. C. (1990). Knowing what to think by knowing who you are: Self-categorization and the nature of norm formation, conformity and group polarization. British Journal of Social Psychology, 29(2), 97–119.
Axelrod, R. (1997). The dissemination of culture: A model with local convergence and global polarization. Journal of Conflict Resolution, 41(2), 203–226.
Benoît, J. P., & Dubra, J. (2014). A theory of rational attitude polarization. Available at SSRN 2529494.
BonJour, L. (1980). Externalist theories of empirical knowledge. Midwest Studies in Philosophy, 5, 53–74.
Bramson, A., Grim, P., Singer, D. J., Berger, W. J., Fisher, S., Sack, G., et al. (2017). Understanding polarization: Meanings, measures, and model evaluation. Philosophy of Science, 84, 115–159.
Bramson, A., Grim, P., Singer, D. J., Fisher, S., Berger, W., Sack, G., et al. (2016). Disambiguation of social polarization concepts and measures. The Journal of Mathematical Sociology, 40(2), 80–111.
Bruner, J. & Holman, B. (forthcoming). Complicating consensus. In Garbayo, L. (Ed.), Expert disagreement and measurement: Philosophical disunity in logic, epistemology and philosophy of science. Dordrecht: Springer.
Campbell, J. E. (2016). Polarized: Making sense of a divided America. Princeton: Princeton University Press.
Cherniak, C. (1981). Minimal rationality. Mind, 90(358), 161–183.
Cohen, S. (1984). Justification and truth. Philosophical Studies, 46(3), 279–295.
Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24(1), 87–114.
DiMaggio, P., Evans, J., & Bryson, B. (1996). Have Americans’ social attitudes become more polarized? American Journal of Sociology, 102, 690–755. https://doi.org/10.1086/230995.
Epstein, J. M. (2006). Generative social science: Studies in agent-based computational modeling. Princeton: Princeton University Press.
Fiorina, M. P., & Abrams, S. J. (2008). Political polarization in the American public. Annual Review of Political Science, 11, 563–588.
Fiorina, M. P., Abrams, S. J., & Pope, J. (2010). Culture war?. New York, NY: Pearson Longman.
Foley, R. (1993). Working without a net: A study of egocentric epistemology. Oxford: Oxford University Press.
Fryer, R. G., Jr., Harms, P., & Jackson, M. O. (2015). Updating beliefs when evidence is open to interpretation: Implications for bias and polarization. Working Paper. Retrieved from http://scholar.harvard.edu/fryer/publications/updating-beliefs-ambiguous-evidence-implications-polarization.
Gaffney, A. M., Rast, D. E., III, Hackett, J. D., & Hogg, M. A. (2014). Further to the right: Uncertainty, political polarization and the American “Tea Party” movement. Social Influence, 9, 272–288.
Greaves, H. (2013). Epistemic decision theory. Mind, 122(488), 915–952.
Grim, P., Singer, D. J., Fisher, S., Bramson, A., Berger, W. J., Reade, C., et al. (2013). Scientific networks on data landscapes: Question difficulty, epistemic success, and convergence. Episteme, 10(4), 441–464.
Großer, J., & Palfrey, T. R. (2013). Candidate entry and political polarization: An antimedian voter theorem. American Journal of Political Science, 58(1), 127–143.
Gruzd, A., & Roy, J. (2014). Investigating political polarization on Twitter: A Canadian perspective. Policy and Internet, 6, 28–45.
Gutmann, A., & Thompson, D. (1996). Democracy and disagreement. Cambridge: Harvard University Press.
Halpern, J. Y. & Pass, R. (2010). I don’t want to think about it now: Decision theory with costly computation. In Twelfth international conference on the principles of knowledge representation and reasoning.
Harman, G. (1973). Thought. Princeton: Princeton University Press.
Hegselmann, R., & Krause, U. (2002). Opinion dynamics and bounded confidence models, analysis, and simulation. Journal of Artificial Societies and Social Simulation, 5(3). http://jasss.soc.surrey.ac.uk/5/3/2.html.
Hegselmann, R., & Krause, U. (2005). Opinion dynamics driven by various ways of averaging. Computational Economics, 25(4), 381–405.
Hegselmann, R., & Krause, U. (2006). Truth and cognitive division of labour: First steps towards a computer aided social epistemology. Journal of Artificial Societies and Social Simulation, 9(3), 10.
Hellman, M. E., & Cover, T. M. (1970). Learning with finite memory. The Annals of Mathematical Statistics, 41(3), 765–782.
Jern, A., Chang, K. K., & Kemp, C. (2014). Belief polarization is not always irrational. Psychological Review, 121(2), 206–224.
Joyce, J. M. (1998). A nonpragmatic vindication of probabilism. Philosophy of Science, 65, 575–603.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291.
Kelly, T. (2008). Disagreement, dogmatism, and belief polarization. The Journal of Philosophy, 105(10), 611–633.
Kitcher, P. (1990). The division of cognitive labor. The Journal of Philosophy, 87(1), 5–22.
Knight, J., & Johnson, J. (2011). The priority of democracy: Political consequences of pragmatism. Princeton: Princeton University Press.
Landemore, H. (2013). Democratic reason: Politics, collective intelligence, and the rule of the many. Princeton: Princeton University Press.
Lehrer, K. (1990). Theory of knowledge. Boulder, CO: Westview.
Liberman, A., & Chaiken, S. (1992). Defensive processing of personally relevant health messages. Personality and Social Psychology Bulletin, 18, 669–679.
Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37, 2098–2109.
Lu, L., Yuan, Y. C., & McLeod, P. L. (2012). Twenty-five years of hidden profiles in group decision making: A meta-analysis. Personality and Social Psychology Review, 16(1), 54–75.
Lumet, S., & Rose, R. (1957). Twelve angry men. Los Angeles: Orion-Nova Twelve Angry Men.
McCain, K. (2014). Evidentialism and epistemic justification. London: Routledge.
McHoskey, J. W. (1995). Case closed? On the John F. Kennedy assassination: Biased assimilation of evidence and attitude polarization. Basic and Applied Social Psychology, 17, 395–409. https://doi.org/10.1207/s15324834basp1703_7.
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81.
Munro, G. D., & Ditto, P. H. (1997). Biased assimilation, attitude polarization, and affect in reactions to stereotype-relevant scientific information. Personality and Social Psychology Bulletin, 23, 636–653.
Murphy, P. (2016). Coherentism in epistemology. Internet Encyclopedia of Philosophy. http://www.iep.utm.edu/coherent/.
Plous, S. (1991). Biases in the assimilation of technological breakdowns: Do accidents make us safer? Journal of Applied Social Psychology, 21, 1058–1082.
Prior, M. (2013). Media and political polarization. Annual Review of Political Science, 16, 101–127.
Ramsey, F. P. (1926). Truth and probability. In H. E. Kyburg & H. E. Smokler (Eds.), Studies in subjective probability. Huntington, NY: Robert E. Krieger Publishing Co.
Ross, L., & Anderson, C. A. (1982). Shortcomings in the attribution process: On the origins and maintenance of erroneous social assessments. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 129–152). Cambridge: Cambridge University Press. https://doi.org/10.1017/cbo9780511809477.010.
Schelling, T. C. (1969). Models of segregation. The American Economic Review, 59(2), 488–493.
Schroeder, M. (2010). What makes reasons sufficient? Unpublished manuscript, University of Southern California.
Schroeder, M. (2015). Knowledge is belief for sufficient (objective and subjective) reason. In T. S. Gendler & J. Hawthorne (Eds.), Oxford studies in epistemology, (5) (pp. 226–252). Oxford: Oxford University Press.
Sherman, D. K., Hogg, M. A., & Maitner, A. T. (2009). Perceived polarization: Reconciling ingroup and intergroup perceptions under uncertainty. Group Processes and Intergroup Relations, 12, 95–109.
Simon, H. A. (1957). Models of man: Social and rational. New York: Wiley.
Sosa, E. (1985). Knowledge and intellectual virtue. The Monist, 68, 224–245.
Stasser, G. (1988). Computer simulation as a research tool: The DISCUSS model of group decision making. Journal of Experimental Social Psychology, 24, 393–422.
Stasser, G., & Birchmeier, Z. (2003). Group creativity and collective choice. In P. B. Paulus & B. A. Nijstad (Eds.), Group creativity: Innovation through collaboration (pp. 85–109). New York: Oxford University Press.
Strevens, M. (2003). The role of the priority rule in science. The Journal of Philosophy, 100(2), 55–79.
Sunstein, C. R. (2002). The law of group polarization. Journal of Political Philosophy, 10, 175–195.
Sunstein, C. R. (2007). Republic.com 2.0. Princeton: Princeton University Press.
Sunstein, C. R. (2017). #Republic. Princeton: Princeton University Press.
Sunstein, C. R. (1999). The law of group polarization. University of Chicago Law School, John M. Olin Law & Economics Working Paper No. 91. Available at SSRN: https://ssrn.com/abstract=199668.
Taber, C. S., Cann, D., & Kucsova, S. (2009). The motivated processing of political arguments. Political Behavior, 31, 137–155. https://doi.org/10.1007/s11109-008-9075-8.
Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50, 755–769.
Talbott, W. (2016). Bayesian epistemology. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. https://plato.stanford.edu/archives/win2016/entries/epistemology-bayesian.
Vitz, R. (n.d.). Doxastic voluntarism. The Internet Encyclopedia of Philosophy, ISSN 2161-0002. http://www.iep.utm.edu/doxa-vol/.
Wilson, A. (2014). Bounded memory and biases in information processing. Econometrica, 82(6), 2257–2294.
Zollman, K. (2007). The communication structure of epistemic communities. Philosophy of Science, 74(5), 574–587.
Zollman, K. (2010). The epistemic benefit of transient diversity. Erkenntnis, 72(1), 17–35.
Singer, D.J., Bramson, A., Grim, P. et al. Rational social and political polarization. Philos Stud 176, 2243–2267 (2019). https://doi.org/10.1007/s11098-018-1124-5
- Epistemic rationality
- Group deliberation
- Social epistemology