Philosophical Studies, Volume 174, Issue 6, pp 1439–1458

Why moral psychology is disturbing

  • Regina A. Rini

Abstract

Learning the psychological origins of our moral judgments can lead us to lose confidence in them. In this paper I explain why. I consider two explanations drawn from existing literature—regarding epistemic unreliability and automaticity—and argue that neither is fully adequate. I then propose a new explanation, according to which psychological research reveals the extent to which we are disturbingly disunified as moral agents.

Keywords

Moral judgment · Moral intuition · Moral psychology · Doxastic embarrassment

Acknowledgments

This paper has extensively benefited from discussion by conference audiences at Oxford and NYU, especially a set of superb comments by Nic Bommarito. It was also greatly improved by participants in the 2015 Mentoring Workshop for Women in Philosophy at the University of Massachusetts Amherst, including Dana Howard, Julia Nefsky, Tina Rulli, and most especially Amelia Hicks and Karen Stohr. I also owe thanks to Nomy Arpaly, Nora Heinzelmann, Guy Kahane, David Kaspar, Hanno Sauer, Amia Srinivasan, and an anonymous reviewer for Philosophical Studies for very helpful comments and discussion.


Copyright information

© Springer Science+Business Media Dordrecht 2016

Authors and Affiliations

  1. NYU Center for Bioethics, New York, USA
