Why moral psychology is disturbing


Abstract

Learning the psychological origins of our moral judgments can lead us to lose confidence in them. In this paper I explain why. I consider two explanations drawn from existing literature—regarding epistemic unreliability and automaticity—and argue that neither is fully adequate. I then propose a new explanation, according to which psychological research reveals the extent to which we are disturbingly disunified as moral agents.


Notes

  1. By ‘phenomenological’ I do not mean perceptual phenomenology; I am not talking about colors or smells. What I will describe might be better called intellectual phenomenology: it pertains to the experience of a certain sequence of beliefs and emotions. To call this phenomenological is merely to stress that the immediate interest is in what it is like to go through this experience.

  2. These cases, of course, constitute the famous ‘Trolley Problem’ (Foot 1967; Thomson 1976).

  3. Here I will leave Greene’s empirical claims unchallenged, though certainly others have challenged them. See e.g. Kahane and Shackel (2010) and Kahane et al. (2015).

  4. This speculative evolutionary account comes from Greene (2008) and Singer (2005). It is worth noting that Greene has amended his claims about the nature of the ‘up close and personal’ mechanism (Greene 2014), though the details do not matter for this example.

  5. Peter Singer thinks so as well, judging by his rhetoric: “[W]hat is the moral salience of the fact that I have killed someone in a way that was possible a million years ago, rather than in a way that became possible only two hundred years ago? I would answer: none.” (Singer 2005, 348). This does not appear to be an argument, unless Singer expects his reader to share his reaction.

  6. See Doris and Stich (2007), Appiah (2008), Kauppinen (2007) and Berker (2009), and others discussed below.

  7. There are also general epistemic reasons, not restricted to the moral domain, to resist causal debunking of our beliefs (White 2010). However, since it is not universally agreed that moral judgment does or should obey the same epistemic standards as other reasoning domains, I will restrict my discussion to distinctively moral judgment.

  8. Elsewhere I do, in effect, defend the rationality of some cases of doxastic embarrassment (Rini 2013). But the argument of this paper does not rely on that one.

  9. Sinnott-Armstrong’s mention of inferential connection here is meant to narrow the target of his skeptical argument. He does not aim to debunk moral intuitions completely, but only to claim that they cannot be treated as free-standing sources of justification; they must be embedded in a broader coherentist framework. In effect, Sinnott-Armstrong is attacking Intuitionist views in moral epistemology (e.g. Audi 2008). He does mount a more generally skeptical challenge to moral intuition in Sinnott-Armstrong (2006).

  10. See Liao (2008), Berker (2009), Musschenga (2010), Kahane (2011), Mason (2011), Leben (2011) and de Lazari-Radek and Singer (2012).

  11. There are some very sophisticated forms of non-cognitivism that seek to preserve truth-related language, even the standard operations of moral epistemology, while denying that moral judgments are semantically truth-evaluable. See for instance Blackburn (1996).

  12. Another reason is that there may be logical problems with psychological debunking arguments. In this paper I have left their internal logic unchallenged, but elsewhere I point out a problem (Rini 2016). If, for whatever reason, you doubt that this sort of argument works, and yet you experience doxastic embarrassment, then that seems sufficient reason to keep reading.

  13. Sauer builds on McDowell (1994) and Pollard (2005). Kennett and Fine (2009, 93) make a similar point in their challenge to Haidt. See also Railton (2014) for related discussion of the rationality of uncontrolled moral cognition and see Velleman (2008) on the concept of ‘flow’.

  14. What would be shocking is the discovery that reflection never plays a role in generating, revising, or sustaining moral beliefs. But not even Haidt claims this—his model allows a role for explicit moral reasoning, albeit “hypothesized to occur somewhat rarely outside of highly specialized subcultures such as that of philosophy, which provides years of training in unnatural modes of thought” (Haidt and Bjorklund 2008, 193).

  15. Here there is the complicated counterfactual matter of what I would have done if the adrenaline shot had not been present. Presumably (as per the first case) I would have automatically acted from my commitments and saved you—so my movement seems to be overdetermined, in a way that complicates analysis of moral responsibility (Frankfurt 1971). But I am trying to sidestep issues of responsibility here; the point of the case is just to clarify what is involved in agency. For discussion of consciousness and moral responsibility, see Sie (2009) and Levy (2014).

  16. This case parallels a regular source of interpersonal drama in fiction—the mistake that is maybe not entirely an accident. See, for example, John Knowles’ A Separate Peace, Ian McEwan’s Atonement, Margaret Atwood’s The Blind Assassin, or Julian Barnes’ The Sense of an Ending.

  17. Further, there is empirical evidence that our attribution of values to another person is affected by our own moral commitments, suggesting that we do not have a non-normative way of making the distinction. See Newman et al. (2014) and Strohminger and Nichols (2014).

  18. Anyway, I think that it is bad to be agentially disunified. But I should admit that not everyone thinks this, especially not those spared a Kantian intellectual upbringing. Many Buddhists hold that conceiving of oneself as a single unified agent is not only mistaken but is the root of suffering. It may be that my analysis does not apply to people with radically different conceptions of the self and human agency. I would be interested to know whether people raised in this tradition experience doxastic embarrassment at all; if they do, then that is a problem for my theory. Thanks to Nic Bommarito for this point (and for the phrase ‘Kantian intellectual upbringing’).

  19. Some philosophers of mind have argued that certain mental happenings (such as judgments) should be understood as agential mental acts (e.g. Geach 1957; Proust 2001).

  20. This is not too far off from Allan Gibbard’s claim that when I judge what is to be done in a particular circumstance, I am making a plan for what I would do were I ever in that circumstance. See Gibbard (2003, 48–53).

  21. For an overview of the science of implicit bias see Jost et al. (2009). For philosophical discussion see the essays in Brownstein and Saul (2016).

  22. Indeed, that is Greene’s point: he says that the up-close-and-personal mechanism explains my deontological moral intuitions, and that deontological moral philosophy is a rationalization of my primate psychology (Greene 2008, 68).

References

  • Appiah, K. A. (2008). Experiments in ethics. Cambridge, MA: Harvard University Press.


  • Arpaly, N. (2002). Unprincipled virtue. Oxford: Oxford University Press.


  • Audi, R. (2008). Intuition, inference, and rational disagreement in ethics. Ethical Theory and Moral Practice, 11(5), 475–492.


  • Bargh, J. A., & Chartrand, T. L. (1999). The unbearable automaticity of being. American Psychologist, 54, 462–479.


  • Bargh, J. A., Chen, M., & Burrows, L. (1996). Automaticity of social behavior: Direct effects of trait construct and stereotype activation on action. Journal of Personality and Social Psychology, 71(2), 230–244. doi:10.1037/0022-3514.71.2.230.


  • Berker, S. (2009). The normative insignificance of neuroscience. Philosophy & Public Affairs, 37(4), 293–329. doi:10.1111/j.1088-4963.2009.01164.x.


  • Blackburn, S. (1996). Securing the nots: Moral epistemology for the quasi-realist. In W. Sinnott-Armstrong & M. Timmons (Eds.), Moral knowledge? New readings in moral epistemology (pp. 82–100). Oxford: Oxford University Press.


  • Brownstein, M., & Saul, J. (Eds.). (2016). Implicit bias and philosophy: Moral responsibility, structural injustice, and ethics (Vol. 2). Oxford: Oxford University Press.

  • Caruso, E. M., & Gino, F. (2011). Blind ethics: Closing one’s eyes polarizes moral judgments and discourages dishonest behavior. Cognition, 118(2), 280–285.


  • Chappell, T. (2014). Why ethics is hard. Journal of Moral Philosophy, 11(6), 704–726.


  • de Lazari-Radek, K., & Singer, P. (2012). The objectivity of ethics and the unity of practical reason. Ethics, 123(1), 9–31. doi:10.1086/667837.


  • Doris, J. M. (2009). Skepticism about persons. Philosophical Issues, 19(1), 57–91.


  • Doris, J. M., & Stich, S. (2007). As a matter of fact: Empirical perspectives on ethics. In F. Jackson & M. Smith (Eds.), The Oxford handbook of contemporary philosophy (1st ed., Vol. 1, pp. 114–153). Oxford: Oxford University Press.


  • Dworkin, R. (1996). Objectivity and truth: You’d better believe it. Philosophy & Public Affairs, 25(2), 87–139.


  • Foot, P. (1967). The problem of abortion and the doctrine of double effect. Oxford Review, 5, 5–15.


  • Frankfurt, H. G. (1971). Freedom of the will and the concept of a person. Journal of Philosophy, 68(1), 5–20.


  • Geach, P. (1957). Mental acts: Their content and their objects. New York: The Humanities Press.


  • Gibbard, A. (2003). Thinking how to live. Cambridge, MA: Harvard University Press.


  • Greene, J. D. (2008). The secret joke of Kant’s soul. In W. Sinnott-Armstrong (Ed.), Moral psychology. The neuroscience of morality: Emotion, brain disorders, and development (Vol. 3, pp. 35–80). Cambridge, MA: MIT Press.

  • Greene, J. D. (2014). Beyond point-and-shoot morality: Why cognitive (neuro)science matters for ethics. Ethics, 124(4), 695–726. doi:10.1086/675875.


  • Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M., & Cohen, J. D. (2004). The neural bases of cognitive conflict and control in moral judgment. Neuron, 44(2), 389–400. doi:10.1016/j.neuron.2004.09.027.


  • Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814–834.


  • Haidt, J., & Bjorklund, F. (2008). Social intuitions answer six questions about moral psychology. In W. Sinnott-Armstrong (Ed.), Moral psychology. The cognitive science of morality: Intuition and diversity (Vol. 2, pp. 181–218). Cambridge, MA: MIT Press.


  • Jones, K. (2003). Emotion, weakness of will, and the normative conception of agency. In A. Hatzimoysis (Ed.), Philosophy and the emotions (pp. 181–200). Cambridge: Cambridge University Press.

  • Jost, J. T., Rudman, L. A., Blair, I. V., Carney, D. R., Dasgupta, N., Glaser, J., et al. (2009). The existence of implicit bias is beyond reasonable doubt: A refutation of ideological and methodological objections and executive summary of ten studies that no manager should ignore. Research in Organizational Behavior, 29, 39–69. doi:10.1016/j.riob.2009.10.001.


  • Kahane, G. (2011). Evolutionary debunking arguments. Noûs, 45(1), 103–125. doi:10.1111/j.1468-0068.2010.00770.x.


  • Kahane, G., Everett, J. A. C., Earp, B. D., Farias, M., & Savulescu, J. (2015). ‘Utilitarian’ judgments in sacrificial moral dilemmas do not reflect impartial concern for the greater good. Cognition, 134(January), 193–209. doi:10.1016/j.cognition.2014.10.005.


  • Kahane, G., & Shackel, N. (2010). Methodological issues in the neuroscience of moral judgement. Mind and Language, 25(5), 561–582. doi:10.1111/j.1468-0017.2010.01401.x.


  • Kamm, F. M. (2009). Neuroscience and moral reasoning: A note on recent research. Philosophy & Public Affairs, 37(4), 330–345. doi:10.1111/j.1088-4963.2009.01165.x.


  • Kauppinen, A. (2007). The rise and fall of experimental philosophy. Philosophical Explorations, 10(2), 95–118.


  • Kennett, J., & Fine, C. (2009). Will the real moral judgment please stand up? The implications of social intuitionist models of cognition for meta-ethics and moral psychology. Ethical Theory and Moral Practice, 12(1), 77–96.


  • Korsgaard, C. M. (2009). Self-constitution: Agency, identity, and integrity. New York: Oxford University Press.


  • Leben, D. (2011). Cognitive neuroscience and moral decision-making: Guide or set aside? Neuroethics, 4(2), 163–174. doi:10.1007/s12152-010-9087-z.


  • Levy, N. (2014). Consciousness and moral responsibility. Oxford: Oxford University Press.


  • Liao, S. (2008). A defense of intuitions. Philosophical Studies, 140(2), 247–262. doi:10.1007/s11098-007-9140-x.


  • Mackie, J. L. (1977). Ethics: Inventing right and wrong. London: Penguin Books.


  • Ma-Kellams, C., & Blascovich, J. (2013). Does ‘science’ make you moral? The effects of priming science on moral judgments and behavior. PLoS One, 8(3), e57989. doi:10.1371/journal.pone.0057989.


  • Mason, K. (2011). Moral psychology and moral intuition: A pox on all your houses. Australasian Journal of Philosophy, 89(3), 441–458. doi:10.1080/00048402.2010.506515.


  • McDowell, J. (1994). Mind and world. Cambridge, MA: Harvard University Press.

  • Musschenga, A. W. (2010). The epistemic value of intuitive moral judgements. Philosophical Explorations, 13(2), 113–128. doi:10.1080/13869791003764047.


  • Nagel, T. (1997). The last word. Oxford: Oxford University Press.


  • Newman, G. E., Bloom, P., & Knobe, J. (2014). Value judgments and the true self. Personality and Social Psychology Bulletin, 40(2), 203–216. doi:10.1177/0146167213508791.


  • Nietzsche, F. (1973). The will to power (W. Kaufmann, Ed.). New York: Random House.

  • Pollard, B. (2005). Naturalizing the space of reasons. International Journal of Philosophical Studies, 13(1), 69–82. doi:10.1080/0967255042000324344.


  • Proust, J. (2001). A plea for mental acts. Synthese, 129(1), 105–128.


  • Railton, P. (2014). The affective dog and its rational tale: Intuition and attunement. Ethics, 124(4), 813–859. doi:10.1086/675876.


  • Rini, R. A. (2013). Making psychology normatively significant. The Journal of Ethics, 17(3), 257–274.


  • Rini, R. A. (2016). Debunking debunking: A regress challenge for psychological threats to moral judgment. Philosophical Studies, 173(3), 675–697.


  • Sauer, H. (2012). Educated intuitions. Automaticity and rationality in moral judgement. Philosophical Explorations, 15(3), 255–275.


  • Sie, M. (2009). Moral agency, conscious control, and deliberative awareness. Inquiry, 52(5), 516–531.


  • Singer, P. (2005). Ethics and intuitions. Journal of Ethics, 9(3–4), 331–352.


  • Sinnott-Armstrong, W. (2006). Moral intuitionism meets empirical psychology. In T. Horgan & M. Timmons (Eds.), Metaethics after moore (pp. 339–366). Oxford: Oxford University Press.


  • Sinnott-Armstrong, W. (2008). Framing moral intuition. In W. Sinnott-Armstrong (Ed.), Moral psychology. The cognitive science of morality: Intuition and diversity (Vol. 2, pp. 47–76). Cambridge, MA: MIT Press.

  • Stevenson, C. L. (1944). Ethics and language. New Haven, CT: Yale University Press.


  • Strohminger, N., Lewis, R. L., & Meyer, D. E. (2011). Divergent effects of different positive emotions on moral judgment. Cognition, 119(2), 295–300. doi:10.1016/j.cognition.2010.12.012.


  • Strohminger, N., & Nichols, S. (2014). The essential moral self. Cognition, 131(1), 159–171. doi:10.1016/j.cognition.2013.12.005.


  • Thomson, J. J. (1976). Killing, letting die, and the trolley problem. The Monist, 59(2), 204–217.


  • Van Roojen, M. (1999). Reflective moral equilibrium and psychological theory. Ethics, 109(4), 846–857.


  • Velleman, J. D. (2008). The way of the wanton. In K. Atkins & C. Mackenzie (Eds.), Practical identity and narrative agency (pp. 169–192). New York: Routledge.

  • White, R. (2010). You just believe that because. Philosophical Perspectives, 24(1), 573–615. doi:10.1111/j.1520-8583.2010.00204.x.


  • Williams, B. (1981). Moral luck. Cambridge: Cambridge University Press.

  • Yang, Q., Xiaochang, W., Zhou, X., Mead, N. L., Vohs, K. D., & Baumeister, R. F. (2013). Diverging effects of clean versus dirty money on attitudes, values, and interpersonal behavior. Journal of Personality and Social Psychology, 104(3), 473–489. doi:10.1037/a0030596.



Acknowledgments

This paper has extensively benefited from discussion by conference audiences at Oxford and NYU, especially a set of superb comments by Nic Bommarito. It was also greatly improved by participants in the 2015 Mentoring Workshop for Women in Philosophy at the University of Massachusetts Amherst, including Dana Howard, Julia Nefsky, Tina Rulli, and most especially Amelia Hicks and Karen Stohr. I also owe thanks to Nomy Arpaly, Nora Heinzelmann, Guy Kahane, David Kaspar, Hanno Sauer, Amia Srinivasan, and an anonymous reviewer for Philosophical Studies for very helpful comments and discussion.

Author information

Correspondence to Regina A. Rini.


Cite this article

Rini, R.A. Why moral psychology is disturbing. Philos Stud 174, 1439–1458 (2017). https://doi.org/10.1007/s11098-016-0766-4
