It is well-known that people will adjust their first-order beliefs based on observations of others. We explore how such adjustments interact with second-order beliefs regarding universalism and relativism in a population. Across a range of simulations, we show that populations where individuals have a tendency toward universalism converge more quickly in coordination problems, and generate higher total payoffs, than do populations where individuals have a tendency toward relativism. Thus, in contexts where coordination is important, belief in universalism is advantageous. However, we also show, across a range of simulations, that universalism will enshrine inequalities and eliminate diversity, and in these cases it seems that relativism has its own advantages.
The terminology in the empirical literature on folk metaethics is not yet systematic. Relativism is the central notion of interest for us. According to relativism about morality, there is no single true morality (Harman, 1985). We use “universalism” as the contrary of relativism, such that universalism about morality holds that there is a single true morality (e.g., Wong, 2006). “Objectivism” is another term that is contrasted with relativism (including in Nichols, 2004; Mackie, 1977, pp. 36–38). But “objectivism” is naturally treated as the contrary of subjectivism rather than of relativism (see, e.g., Finlay, 2007), so we prefer “universalism” as the closest antonym for “relativism.”
We treat this as a conceptual claim here, taking these responses to follow from the way we’ve characterized universalism and relativism. It’s an open empirical question, however, whether people’s second-order beliefs affect their first-order beliefs in this way.
Note that this is distinct from the “conformity bias” approach first found in Boyd and Richerson’s Culture and the Evolutionary Process (1985). That model, and others in that tradition, focuses on first-order beliefs only.
Of course, this is a simplification. For one thing, on our models, the only way for consensus to lead a relativist to change her first-order view is for her first to become a universalist. But a relativist might change her view simply because her view is in a tiny minority, without giving up on her relativism. To take a homey example, if 49% of the people think that it's summer and 49% of the people think that it's winter and 2% of the people think that it's fall, then if I find myself in that 2%, I'm likely to change my first-order belief without becoming a universalist about seasons (see, e.g., Ayars & Nichols, 2020, experiment 3).
See Becklloyd and Sytsma (2019) for simulations involving error, including both sampling error and recall error.
Following Becklloyd and Sytsma (2019), we used a 2 × recency multiplier: the first set of evaluations by a given individual was given a base weighting, then each subsequent set of evaluations by that individual was weighted two times the previous weighting.
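The 2 × recency multiplier can be sketched as follows. This is a minimal hypothetical implementation of the weighting scheme as described above, not the authors’ actual code (which follows Becklloyd and Sytsma 2019); the function name and the 0/1 encoding of evaluations are our assumptions.

```python
def weighted_consensus(evaluations):
    """Average one individual's successive sets of evaluations
    (encoded here as 0/1 for illustration), with each subsequent
    set weighted at twice the previous set's weight."""
    weights = [2 ** i for i in range(len(evaluations))]  # 1, 2, 4, 8, ...
    total = sum(w * e for w, e in zip(weights, evaluations))
    return total / sum(weights)
```

For instance, with two sets of evaluations the later one counts twice as much as the earlier one, so `weighted_consensus([1, 0])` yields 1/3.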
We also ran a simulation using a stag hunt with a higher payoff ratio (namely, getting a stag yielded 10 units for each agent, while the rabbit remained at 1 unit for each). This made it clearer that a preference for relativism produces somewhat better results than a preference for universalism when the simulation starts with an initial value of φ slightly below 0.5. But overall, a preference for universalism still produces better results.
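To see why the higher payoff ratio matters, consider the best-response threshold in a standard stag hunt. The sketch below assumes the usual payoff structure (stag pays 10 each only if both hunt stag; a rabbit pays 1 regardless of the partner’s choice; hunting stag alone pays 0); it is an illustration of the game’s incentives, not the paper’s simulation code.

```python
def expected_payoff(action, p_stag):
    """Expected payoff of an action against a partner who hunts
    stag with probability p_stag, under the 10/1/0 payoffs above."""
    if action == "stag":
        return 10 * p_stag  # stag succeeds only if the partner cooperates
    return 1.0              # rabbit pays 1 no matter what the partner does
```

Stag is the better reply whenever 10p > 1, i.e., whenever the partner is expected to hunt stag with probability above 0.1. Raising the stag payoff thus widens the region in which agents converge on the efficient equilibrium, which is why the advantage of relativism is confined to starting points slightly below 0.5.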
By contrast, Stanford’s claim that “externalizing” moral demands facilitate cooperation relies on a stronger notion, according to which “we regard such demands as imposing unconditional obligations not only on ourselves, but also on any and all agents whatsoever, regardless of their preferences and desires” (2018, p. 1).
Classic presentations of coordination games, such as Lewis (1969) and Schelling (1960), or of cooperation games, such as Axelrod (1984), treat coordination or cooperation as desirable. However, conflictual coordination problems, such as the Battle of the Sexes, have long been recognized. More recently, a growing literature uses coordination models to describe how undesirable or unjust states can emerge and remain stable. O’Connor (2019) offers an in-depth treatment, in part building from Skyrms (1996).
O’Connor (2019) has an in-depth game-theoretic analysis of the gendered division of labor.
Astute readers might note that anti-coordination games can be converted into coordination games by introducing the notion of roles (say, the “chips” role and the “dips” role) and redefining coordinating as “doing what your role requires.” While this is indeed a way of constructing a model, such a convention is much more difficult to evolve without putting one’s thumb on the scale. We would have to ask how the roles emerged and were coordinated on (most naturally with a correlated equilibrium concept) before we could get to how they are used in the newly created coordination context. Universalism could help under these circumstances, but not without extra conceptual moves and coordination on those concepts, especially given that there are many possible role-pairs that could emerge in such a setting.
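The role-based conversion can be made concrete with a toy example. The following sketch is our illustration (not the paper’s model): in the chips/dips game, players do well only by bringing different items, but once a complementary role assignment is fixed in advance, “follow your role” vs. “defy your role” becomes an ordinary coordination game.

```python
OTHER = {"chips": "dips", "dips": "chips"}

def anticoord_payoff(item1, item2):
    """Anti-coordination: players succeed only with different items."""
    return (1, 1) if item1 != item2 else (0, 0)

def brings(role, action):
    """What a player brings: the role's item if they follow, else the other."""
    return role if action == "follow" else OTHER[role]

def role_game_payoff(action1, action2, role1="chips", role2="dips"):
    """The redefined game over 'follow'/'defy'. Matching actions now pay,
    but only because the complementary roles were coordinated on already."""
    return anticoord_payoff(brings(role1, action1), brings(role2, action2))
```

Both following (or both defying) their roles yields (1, 1), while mismatched actions yield (0, 0), so the redefined game has the payoff structure of a coordination game. The construction works, but only because the role assignment is stipulated rather than evolved, which is exactly the thumb on the scale noted above.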
While it is beyond the scope of the present paper, the introduction of strategic interactions is planned for future work.
In a more complex model, one could introduce the possibility of choosing mixed rather than pure strategies, or, instead of relying on a Nash equilibrium solution concept, employ a correlated equilibrium concept, which could allow for efficient “taking turns” outcomes. However, we note that it is strikingly easy to find real-world examples of unequal coordination games that have settled into a Nash solution with pure strategies. The gendered division of labor offers a large class of such examples.
This last class of problems with coordination—coordination where none is needed—is not something that we have modeled for this paper, as it requires a different modeling approach, but is planned for future work.
See, for instance, Muldoon (2016) for an in-depth account.
Axelrod, R. (1984). The evolution of cooperation. Basic Books.
Ayars, A., & Nichols, S. (2020). Rational learners and metaethics: Universalism, relativism, and evidence from consensus. Mind & Language, 35(1), 67–89.
Becklloyd, D., & Sytsma, J. (2019). Simulating metaethics: Consensus and the independence of moral beliefs. Retrieved January 2020 from http://philsci-archive.pitt.edu/16461/
Beebe, J. R., & Sackris, D. (2016). Moral objectivism across the lifespan. Philosophical Psychology, 29(6), 912–929.
Boyd, R., & Richerson, P. J. (1985). Culture and the evolutionary process. University of Chicago Press.
Clarke, S. (1728). A discourse concerning the unchangeable obligations of natural religion, and the truth and certainty of Christian revelation (7th ed.). James & John Knapton.
Finlay, S. (2007). Four faces of moral realism. Philosophy Compass, 2(6), 820–849.
Goodwin, G., & Darley, J. (2008). The psychology of meta-ethics: Exploring objectivism. Cognition, 106, 1339–1366.
Goodwin, G., & Darley, J. (2012). Why are some moral beliefs perceived to be more objective than others? Journal of Experimental Social Psychology, 48, 250–256.
Harman, G. (1985). Is there a single true morality? In D. Copp & D. Zimmerman (Eds.), Morality, reason and truth: New essays on the foundations of ethics. Rowman & Allanheld.
Lewis, D. (1969). Convention. Harvard University Press.
Mackie, J. (1977). Ethics: Inventing right and wrong. Penguin.
Muldoon, R. (2015). Expanding the justificatory framework of Mill’s experiments in living. Utilitas, 27(2), 179–194.
Muldoon, R. (2016). Social contract theory for a diverse world: Beyond tolerance. Routledge.
Nichols, S. (2004). After objectivity: An empirical study of moral judgment. Philosophical Psychology, 17, 3–26.
O’Connor, C. (2019). The origins of unfairness. Oxford University Press.
Schelling, T. (1960). The strategy of conflict. Harvard University Press.
Skyrms, B. (1996). Evolution of the social contract. Cambridge University Press.
Stanford, P. K. (2018). The difference between ice cream and Nazis: Moral externalization and the evolution of human cooperation. Behavioral and Brain Sciences, 41, E95.
Wong, D. B. (2006). Natural moralities: A defence of pluralistic relativism. Oxford University Press.
Wright, J. C., Grandjean, P. T., & McWhite, C. B. (2013). The meta-ethical grounding of our moral beliefs: Evidence for meta-ethical pluralism. Philosophical Psychology, 26(3), 336–361.
Wright, J., McWhite, C., & Grandjean, P. (2014). The cognitive mechanisms of intolerance: Do our meta-ethical commitments matter? In T. Lombrozo, J. Knobe, & S. Nichols (Eds.), Oxford studies in experimental philosophy. (Vol. 1). Oxford University Press.
We would like to thank Dan Becklloyd, James Beebe, Justin Bruner, Jerry Gaus, an anonymous referee for Synthese, and the audience at the 2019 Formal and Experimental Workshop at Northeastern University for their helpful comments and suggestions.
Sytsma, J., Muldoon, R. & Nichols, S. The meta-wisdom of crowds. Synthese (2021). https://doi.org/10.1007/s11229-021-03279-1