The existence of deep and persistent moral disagreement poses a problem for a defender of moral knowledge. It seems particularly clear that a philosopher who thinks that we know a great many moral truths should explain how human populations have failed to converge on those truths. In this paper, I do two things. First, I show that the problem is more difficult than it is often taken to be, and second, I criticize a popular response, which involves claiming that many false moral beliefs are the product of nonmoral ignorance.
Technically, a realist, qua metaphysician, need not take an epistemological position. However, it is probably no accident that existing moral realists are not skeptics: the view is designed to allow us to vindicate the objective truth of our central moral beliefs, or to allow us to “take morality seriously” (Enoch 2011).
Michelle Moody-Adams argues against this impression (see Moody-Adams 1997). I won’t address her arguments here, except to say that many of them are irrelevant to the problem of moral disagreement that should concern the realist. Moody-Adams spends a great deal of time emphasizing that cultures are not unified, clearly definable units, but no skeptic who wants to deploy disagreement-based arguments should place any weight on the notion of cultural disagreement. Moral diversity amongst human beings, past and present, is enough to get the argument off the ground.
Here, the reader could scarcely do better than to consult histories of the Mongol empire, in which accounts of socially sanctioned mass murder, torture and ethnic cleansing appear on almost every page (Ratchnevsky 1993).
See, for example, Alan Gewirth’s defense of a broadly Kantian theory in Gewirth (1994).
This is denied by some philosophers. I’ll discuss them in Sect. 1.6.
Some historical agents may be off the hook, and I take it that this is intuitive. Suppose that the earliest human populations shared roughly the same basic moral beliefs; it is intuitive that they would not have had to confront the problem of moral disagreement.
While this might seem too strong, it’s important to remember that as the realist makes moral truths less accessible, they threaten to debunk their own moral beliefs. That is, even if we grant the realist the right to describe themselves as sincere, unbiased inquirers operating under favorable conditions, they still need to believe that such inquirers are quite likely to arrive at the truth. Otherwise, it’s hard to know how they can place much confidence in the proposition that they have arrived at the truth.
By “sincere moral inquiry”, I do not wish to imply anything like rarefied philosophical inquiry, or even consciously conducted rational reflection. Rather, I mean something more like preparedness to respond to reasons, where the inquirer’s emotional and cognitive faculties are attuned to the possibility of newer and better moral evidence.
The qualifier “important moral matters” is only meant to capture the idea that we must take our core moral beliefs to be about important issues. It seems reasonable to say that the beliefs we most care about are those which we also take to concern deeply important matters.
Many genuine moral disagreements depend on disagreements over the nonmoral facts (Brink 1989).
Careful philosophical examination will reveal, I believe, that agreement on nonmoral issues would eliminate almost all disagreement about the sorts of moral issues which arise in ordinary moral practice (Boyd 1988).
Much moral disagreement stems either from disagreement about what the relevant nonmoral facts are, or is due to some error of instrumental reasoning. Clearing up these errors and getting consensus on non-moral facts would remove a great deal of moral disagreement (Shafer-Landau 2003).
As quoted in Brown (1992).
Importantly, Livingstone-Smith anticipates and rejects the notion that contemporary people have abandoned this conceptual presupposition, noting that “the idea of a normative hierarchy is still very much alive in our moral psychology… [w]e regard our own kind as having the greatest value, and think of animals as having greater value than plants. We esteem ‘higher’ animals like primates more than ‘lower’ animals like invertebrates… terms like ‘higher’ and ‘lower,’ which roll off the tongue so easily, are hierarchical and ultimately normative notions that are inconsistent with a scientific conception of the biosphere.”
Moreover, as psychologists continue to discover that more and more of our apparently “factual” concepts are heavily moralized—that we are, in Joshua Knobe’s words, “moralizing creatures through and through”—the range of disagreements that an NMI-strategist can even in principle explain might shrink even further (Knobe 2010).
Of course, an argument which reflected the actual thinking of cockroach-haters would include a number of further background assumptions about cockroaches: that they serve no useful purpose, that they feel no pain, etc. I omit these details only for the sake of clarity. I thank an anonymous reviewer for encouraging me to be clearer about this.
Nor does it even entail that I believe (N2); after all, I might just be saying this because I think that you believe (N2).
I use the term “definitively” because the middle cases—where only one disagreeing party has the required sort of belief—introduce complexities that I cannot fully address here. Roughly: NMI hypotheses can explain away those disagreements, but they will not always do so.
Indeed, this may be something very near to a definitional truth, since it is hard to know how something could count as a “core” moral belief unless it were deeply entwined with emotional commitment. I believe that there’s nothing morally wrong with spitting on the street, and someone might disagree. But I would not leap to a defense of my view or be particularly troubled by the prospect of dropping it. Plausibly, the reason I would respond differently to a disagreement over the moral status of slavery is that my moral belief is deeply rooted in my emotional and intuitive sensibility.
Of course, there are extremely silly views about the deterrence effect of capital punishment which might move us if we held them, for example, the view that a single execution will prevent all future violent crime. But this is not relevant to the present debate, since both pro- and anti-capital punishment advocates reject this empirical claim.
Here, I am entirely sympathetic to Karen Jones’ discussion in Jones (2005). She suggests that any coherence-method in ethics must adjust its principles in accordance with the huge amount of information about morality that is being produced by the social, historical and anthropological sciences.
Indeed, Haidt himself has despaired at the fact that critics of his theory have neglected its deeply social character (Haidt and Bjorklund 2008).
For Brink, such a belief is “well informed… results from good inference patterns… is not distorted by obvious forms of prejudice or self-interest… held with some confidence, and is relatively stable over time.” (Brink 1989, p. 132).
I should stress that this is consistent with their rejecting a great deal of contemporary moral opinion. All I want to say is that virtually every existing realist moral philosophy is committed to a set of very basic propositions concerning the fundamental equality of persons. This is our contemporary moral legacy, and it is not shared by a huge number of historical cultures.
Skeptical arguments which merely cite the possibility of our being massively mistaken in some domain may be more vulnerable to this reply.
For example, Brink’s long discussion (1989) of the “distorting” factors which might explain false moral belief contains no historical or psychological detail.
Audi, R. (2008). Rational disagreement as a challenge to practical ethics and moral theory: An essay in moral epistemology. In Q. Smith (Ed.), Epistemology: New essays. Oxford: Oxford University Press.
Boyd, R. (1988). How to be a moral realist. In G. Sayre-McCord (Ed.), Essays on moral realism (pp. 181–228). Ithaca: Cornell University Press.
Bloomfield, P. (2008). Disagreement about disagreement. In W. Sinnott-Armstrong (Ed.), Moral psychology, Vol. 2. The cognitive science of morality: Intuition and diversity (pp. 303–331). Cambridge, MA: MIT Press.
Brink, D. (1984). Moral realism and the sceptical arguments from disagreement and queerness. Australasian Journal of Philosophy, 62(2), 111–125.
Brink, D. (1989). Moral realism and the foundations of ethics. Cambridge: Cambridge University Press.
Brown, M. L. (1992). Our hands are stained with blood. Shippensburg: Destiny Image Publishers.
Daniels, N. (1979). Wide reflective equilibrium and theory acceptance in ethics. Journal of Philosophy, 76(5), 256–282.
Doris, J. M. (2015). Talking to our selves: Reflection, ignorance, and agency. Oxford: Oxford University Press.
Doris, J. M., & Plakias, A. (2008). How to argue about disagreement: Evaluative diversity and moral realism. In W. Sinnott-Armstrong (Ed.), Moral psychology, Vol. 2. The cognitive science of morality: Intuition and diversity (pp. 303–331). Cambridge, MA: MIT Press.
Ellsworth, P. C., & Gross, S. R. (1994). Hardening of the attitudes: Americans’ views on the death penalty. Journal of Social Issues, 50(2), 19–52.
Enoch, D. (2009). How is moral disagreement a problem for realism? Journal of Ethics, 13(1), 15–50.
Enoch, D. (2010). Not just a truthometer: Taking oneself seriously (but not too seriously) in cases of peer disagreement. Mind, 119(476), 953–997.
Enoch, D. (2011). Taking morality seriously: A defense of robust realism. Oxford: Oxford University Press.
Firth, R. (1952). Ethical absolutism and the ideal observer. Philosophy and Phenomenological Research, 12(3), 317–345.
Fitzpatrick, S. (2014). Moral realism, moral disagreement, and moral psychology. Philosophical Papers, 43(2), 161–190.
Foucault, M. (1977). Discipline and punish: The birth of the prison. New York: Vintage.
Gewirth, A. (1994). Is cultural pluralism relevant to moral knowledge? Social Philosophy and Policy, 11(1), 22–43.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814.
Haidt, J., & Bjorklund, F. (2008). Social intuitionists answer six questions about morality. In W. Sinnott-Armstrong (Ed.), Moral psychology (Vol. 2). Cambridge: MIT Press.
Harman, G. (1975). Moral relativism defended. Philosophical Review, 84(1), 3–22.
Jones, K. (2005). Moral epistemology. In F. Jackson & M. Smith (Eds.), The Oxford handbook of contemporary philosophy. Oxford: Oxford University Press.
Kahneman, D. (2011). Thinking, fast and slow. New York: Macmillan.
Kelly, T. (2005). The epistemic significance of disagreement. In Oxford studies in epistemology (Vol. 1, pp. 167–196). Oxford: Oxford University Press.
Kelly, D., Faucher, L., & Machery, E. (2010). Getting rid of racism: Assessing three proposals in light of psychological evidence. Journal of Social Philosophy, 41(3), 293–322.
Kennett, J., & Fine, C. (2009). Will the real moral judgment please stand up? Ethical Theory and Moral Practice, 12(1), 77–96.
Knobe, J. (2010). Person as scientist, person as moralist. Behavioral and Brain Sciences, 33(4), 315.
Kruglanski, A. W. (1996). Motivated social cognition: Principles of the interface. In E. T. Higgins & A. W. Kruglanski (Eds.), Social psychology: Handbook of basic principles. New York: Guilford Press.
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480.
Kunda, Z., & Sinclair, L. (1999). Motivated reasoning with stereotypes: Activation, application, and inhibition. Psychological Inquiry, 10(1), 12–22.
Lambert, E. G., Clarke, A., & Lambert, J. (2004). Reasons for supporting and opposing capital punishment in the USA. Internet Journal of Criminology, 1, 1–34.
Lenman, J. (2000). Consequentialism and cluelessness. Philosophy and Public Affairs, 29(4), 342–370.
Loeb, D. (1998). Moral realism and the argument from disagreement. Philosophical Studies, 90(3), 281–303.
Mackie, J. L. (1977). Ethics: Inventing right and wrong. Baltimore: Penguin.
McGrath, S. (2007). Moral disagreement and moral expertise. In R. Shafer-Landau (Ed.), Oxford studies in metaethics (Vol. 4, pp. 87–108). Oxford: Oxford University Press.
Moody-Adams, M. M. (1997). Fieldwork in familiar places: Morality, culture, and philosophy. Cambridge, MA: Harvard University Press.
Murray, S. L. (1999). The quest for conviction: Motivated cognition in romantic relationships. Psychological Inquiry, 10(1), 23–34.
Nietzsche, F. W. (1886/1990). Beyond good and evil: Prelude to a philosophy of the future. Baltimore: Penguin Books.
Parfit, D. (2011). On what matters. Oxford: Oxford University Press.
Plunkett, D., & Sundell, T. (2013). Disagreement and the semantics of normative and evaluative terms. Philosophers’ Imprint, 13(23), 1–37.
Pritchard, D. (2008). Sensitivity, safety, and anti-luck epistemology. In J. Greco (Ed.), The Oxford handbook of skepticism. Oxford: Oxford University Press.
Ratchnevsky, P. (1993). Genghis Khan: His life and legacy. New York: Wiley.
Sauer, H. (2012). Educated intuitions. Automaticity and rationality in moral judgement. Philosophical Explorations, 15(3), 255–275.
Sayre-McCord, G. (1996). Coherentist epistemology and moral theory. In W. Sinnott-Armstrong & M. Timmons (Eds.), Moral knowledge? New readings in moral epistemology. Oxford: Oxford University Press.
Scanlon, T. M. (2014). Being realistic about reasons. Oxford: Oxford University Press.
Setiya, K. (2012). Knowing right from wrong. Oxford: Oxford University Press.
Shafer-Landau, R. (1994). Ethical disagreement, ethical objectivism and moral indeterminacy. Philosophy and Phenomenological Research, 54(2), 331–344.
Shafer-Landau, R. (2003). Moral realism: A defence. Oxford: Oxford University Press.
Smith, D. L. (2014). Dehumanization, essentialism, and moral psychology. Philosophy Compass, 9(11), 814–824.
Smyth, N. (2017). Moral knowledge and the genealogy of error. Journal of Value Inquiry, 51(3), 455–474.
Sripada, C., & Stich, S. (2006). A framework for the psychology of norms. In P. Carruthers, S. Laurence, & S. P. Stich (Eds.), The innate mind, volume 2: Culture and cognition. Oxford: Oxford University Press.
Street, S. (2009). In defense of future Tuesday indifference: Ideally coherent eccentrics and the contingency of what matters. Philosophical Issues, 19(1), 273–298.
Vavova, K. (2014). Moral disagreement and moral skepticism. Philosophical Perspectives, 28(1), 302–333.
Weatherson, B. (2010). Do judgments screen evidence? (unpublished manuscript).
Wedgwood, R. (2010). The moral evil demons. In R. Feldman & T. Warfield (Eds.), Disagreement. Oxford: Oxford University Press.
Wedgwood, R. (2014). Moral disagreement among philosophers. In M. Bergmann & P. Kain (Eds.), Challenges to moral and religious belief: Disagreement and evolution (pp. 23–39). Oxford: Oxford University Press.
Williams, B. (2009). The point of view of the universe: Sidgwick and the ambitions of ethics. In M. Burnyeat (Ed.), The sense of the past: Essays in the history of philosophy (pp. 277–296). Princeton: Princeton University Press.
The author wishes to thank Robert Joynt, Miquel Miralbes del Pino, and Iain Laidley for a great deal of very valuable discussion.
Smyth, N. Moral disagreement and non-moral ignorance. Synthese (2019). https://doi.org/10.1007/s11229-019-02084-1
Keywords: Moral disagreement; Moral psychology