Making Psychology Normatively Significant

Published in The Journal of Ethics.

Abstract

The debate between proponents and opponents of a role for empirical psychology in ethical theory seems to be deadlocked. This paper aims to clarify the terms of that debate, and to defend a principled middle position. I argue against extreme views, which see empirical psychology either as irrelevant to, or as wholly displacing, reflective moral inquiry. Instead, I argue that moral theorists of all stripes are committed to a certain conception of moral thought—as aimed at abstracting away from individual inclinations and toward interpersonal norms—and that this conception tells against both extremes. Since we cannot always know introspectively whether our particular moral judgments achieve this interpersonal standard, we must seek the sort of self-knowledge offered by empirical psychology. Yet reflective assessment of this new information remains a matter of substantive normative theorizing, rather than an immediate consequence of empirical findings themselves.


Notes

  1. Examples of the first sort, some of which are discussed in more detail below, include Baron (1995), Horowitz (1998), Sunstein (2005), and Greene (2008). Examples of the second sort include Kamm (1998), van Roojen (1999), Cullity (2006), Berker (2009), and Kahane and Shackel (2010). For good overviews of these debates, see Appiah (2008), Levy (2009), and the essays in Sinnott-Armstrong (2008a).

  2. This view is most clearly present in Immanuel Kant, who describes an empirical approach to fundamental moral principles as a “base way of thinking”, “disadvantageous to the purity of morals themselves”, and “a bastard patched together from limbs of quite diverse ancestry” (Kant 1785/2002, 43–44). Variations on this theme also turn up in Held (1996), Dworkin (1996), and Cohen (2003).

  3. An important qualification: of course there are some ways in which empirical information matters to moral theory, even following the Autonomy Thesis. Specifically, empirical information can be crucial in applying a moral theory to actual decisions, especially if the moral theory treats individual interests and preferences as input to a decision procedure. For instance, a utilitarian normative theory advising us to maximize individual happiness cannot be implemented without empirical information about what actually makes people happy. Set that sort of application-relevance aside for the moment; what is at issue here is the philosophical relevance of empirical discoveries regarding how we engage in moral thinking, and especially the origins of our moral intuitions.

  4. To be clear: there certainly are circumstances in which my mere personal preference would be enough to justify my choice. If it is a holiday and nothing else of importance is going on, then of course the mere fact that I want to watch the dog show will count as justifying doing so. However, as the example in the text is meant to demonstrate, mere personal preference falls far short of justification when I am being asked why I have failed to fulfil an apparent obligation. The plausibility of a given consideration in an attempted justification will depend on the background assumptions of those demanding justification. Thanks to an anonymous reviewer for The Journal of Ethics for pressing this point.

  5. One might use ‘justification’ in a different way, as a success-word. One might think that only successful efforts to make my preferences acceptable to others (or to some more objective standard) count as ‘justifications’. In that case, the non-success-word version of ‘justification’ used in my text should be read as ‘attempted justification’. Nothing really turns on how we elect to use this word, but it is worth noting that my non-success-word use here may differ from the success-word use in other discussions, in the hope of avoiding confusion. Thanks to an anonymous reviewer for The Journal of Ethics for suggesting this clarification.

  6. Prominent views in this vicinity can be found in Smith (1994), Korsgaard (1996), Scanlon (1999), and Darwall (2006).

  7. Sidgwick (1874/1962, 34). He later refers back to this discussion and clarifies: “If therefore I judge any action to be right for myself, I implicitly judge it to be right for any other person whose nature and circumstances do not differ from my own in some important respects.” (209) Some process of abstraction seems necessary to determine which “certain definable class” any particular action belongs to, or in what “important respects” two particular actions may or may not be said to differ.

  8. There are, of course, important exceptions. Some people, particularly followers of Aristotelian virtue theory, are likely to reject an account of ethics divorced from particular circumstances and socially embedded contexts. Existentialist philosophers have been especially concerned to emphasize the ineradicable centrality of personal standards in moral determination. And moral particularists (Dancy 2004, Hooker and Little 2000) are obviously unlikely to agree that moral deliberation necessarily involves abstraction, at least not without qualification.

  9. You should not feel too bad about not noticing the switch, though, since so few people do. Indeed, this is a well-established result in the literature on “change blindness” in perception research (Simons and Levin 1998, Levin et al. 2002).

  10. To avoid any confusion: here my use of the word “nonconscious” (and similar locutions) is intended to deny the presence of what Ned Block (1995, 2007) calls access consciousness, the availability of a particular representation to global mental operations, especially verbal report.

  11. One might object to interpreting this study as one bearing on moral judgment. Subjects might have regarded setting a bond figure not as a form of moral sanction, but merely as an attempt to deter suspect flight; the experimenter-swap effect may then have unsettled their confidence in predicting the suspect’s reliability, rather than affecting moral judgment directly. (Thanks to an anonymous reviewer for The Journal of Ethics for raising this objection.) I think there is some merit in this point, but it is worth noting possible replies. First, unlike some other legal regimes, the criminal law of Canada (where this study took place) explicitly does require bond-setting judges to consider factors other than flight risk, such as whether granting bond would “maintain confidence in the administration of justice” in a manner fitting “the gravity of the offense”. (Criminal Code of Canada, section 515(10)(c). See http://laws-lois.justice.gc.ca/eng/acts/C-46/section-515.html) Second, the authors of the study themselves state that the bond figure is intended as a measure of whether “people are motivated to maintain their cultural worldview and will seek to punish individuals who act in ways that are inconsistent with that worldview” (Proulx and Heine 2008, 1296). They claim that the measure has been utilized for this purpose in a series of related empirical studies. On their interpretation, the findings of this study support the psychological concept of “compensatory affirmation”, whereby the subject responds to an irresolvable disruption in one psychological schema (person consistency) by temporarily heightening expression of some unrelated but manageable schema (enforcement of social norms). So, while the objection does have some merit, addressing it fully would require working against a significant body of legal and psychological literature.
In any case, I focus on this study as an illustration; the general point about empirically demonstrated nonconscious factors in moral judgment can be made with other studies (mentioned below) if one does not trust this one.

  12. Dual-process models are described at length in Stanovich and West (2000) and Kahneman (2002). The idea of cognitively impenetrable processes owes much to Fodor (1983).

  13. The gap itself is a problem if we accept what Christine Korsgaard calls the transparency condition for moral theory: “A normative moral theory must be one that allows us to act in the full light of knowledge of what morality is and why we are susceptible to its influences, and at the same time to believe that our actions are justified and make sense.” (Korsgaard 1996, 17) Given the transparency condition, the possibility of nonconscious influence upon moral thinking provides a direct argument against the Autonomy Thesis; transparency requires that we understand, through empirical means if necessary, why we value what we value.

  14. This study is also discussed, at similar length and with similar aim, in Appiah (2008, 86–87).

  15. Importantly, this does not show what Wheatley and Haidt suggest in their discussion—that all forms of moral judgment are post hoc or biased. What it does show, however, is that we are very bad at detecting when we are biased or engaging in mere post hoc rationalization.

  16. Sinnott-Armstrong (2008b, 75) reaches a very similar conclusion: “Moral intuitionists cannot simply dismiss empirical psychology as irrelevant to their enterprise. They need to find out whether the empirical presuppositions of their normative views are accurate.” His argument differs from mine in that it focuses on general epistemic standards of reliability, while mine is grounded in a concern (normative abstraction) particular to the moral domain.

  17. There is an enormous literature on the mathematics of kin-selection and its expression in animal and human behaviour, starting from Hamilton (1964). The details of any particular proposal don’t matter to the present point about methodology, although they might in coming to a substantive conclusion on our obligations to kin.

  18. My discussion of Singer here is intended only as an example of how one’s reaction to psychological explanation of moral intuition might change through prolonged moral reflection. I take no position on the details of Singer’s reasoning. In particular, my conception of normative abstraction does not necessarily require abstracting to quite the degree that Singer does. (He regularly suggests that moral justification can ultimately be secured only from Sidgwick’s “point of view of the universe”.) There are grounds for deep philosophical disagreement about the degree of abstraction required for moral justification; I do not aim to settle such issues in this paper. (Thanks to an anonymous reviewer for The Journal of Ethics for pressing me to clarify this point.)

  19. This tendency seems to have particular traction in popular accounts of the relation between psychology and moral philosophy; see David Brooks, “The End of Philosophy” (The New York Times, April 6, 2009) and The Economist, “Moral thinking: biology invades a field philosophers thought was safely theirs” (February 21, 2008).

  20. The exceptions to this rule—when a manipulation does seem to generate the correct response—confirm the underlying point. For instance, Caruso and Gino (2011) claim to show that subjects behave more ethically after deliberating with their eyes closed: their closed-eyed subjects were more generous and indicated less willingness to engage in dishonest behavior. Of course, characterizing these subjects’ behaviors as “ethical” must depend on a background moral theory about generosity and honesty!

  21. Nietzsche appreciated early the need to distinguish our reaction at learning a causal story about moral judgments from the story itself, and the need to take this reaction up in an appropriately reflective spirit: “The inquiry into the origin of our evaluations and tables of the good is in absolutely no way identical with a critique of them, as is so often believed: even though the insight into some pudenda origo certainly brings with it a feeling of a diminution in value of the thing that originated thus and prepares the way to a critical mood and attitude toward it.” (Nietzsche 1901/1967, section 254).

  22. For more on normative conclusions drawn from the biases and heuristics literature, see Kahneman (1994), van Roojen (1999), Sinnott-Armstrong (2008b), and Gigerenzer (2008).

  23. Some have expressed deep empirical methodological reservations about this research, especially in how it experimentally operationalizes the key concepts of deontology, consequentialism, emotion, and reason. See Berker (2009), Kamm (2009), and Kahane and Shackel (2010). But I will set aside such complaints for now to concentrate on interpretation of the results.

  24. Similar points have been raised against Greene and Singer by others, including Berker (2009, 326), Kamm (2009), and Cullity (2006, 127). Greene himself, at least in less formal contexts, falls easily into employing normative premises for his argument. In a perceptive interview conducted by the philosopher Tamler Sommers, alongside the neuroscientist Liane Young, Greene is challenged with the empirical fact that consequentialist intuitions are also correlated with activity in some “emotional” areas of the brain—just not the same emotional areas as deontic intuitions. Asked to explain why he favors one set of emotion-correlated intuitions over another, Greene does not offer a theory about the relative superiority of certain emotions or brain areas. Rather, he appeals to typical utilitarian considerations [see Sommers (2009, 138–141)]. However, in an unpublished paper, Greene (manuscript) has begun to develop more guidance about the normative superiority of the cognitive systems driving consequentialist intuitions, though the strength of this argument remains to be seen.

  25. I should say that I have been a bit unfair to Singer, who is clearly aware of the need to provide a normative framework. He writes, “Advances in our understanding of ethics do not themselves directly imply any normative conclusions, but they undermine some conceptions of doing ethics which themselves have normative conclusions. Those conceptions of ethics tend to be too respectful of our intuitions. Our better understanding of ethics gives us grounds for being less respectful of them.” (Singer 2005, 349) The trouble is that his assertion that deontology is “too respectful of our intuitions” rests on familiar consequentialist grounds, and nothing in this exchange seems to advance that dialectic. In a more recent paper, with Katarzyna de Lazari-Radek, Singer provides more detail about the evolutionary debunking of certain moral judgments, once again on Sidgwickian, utilitarian grounds (de Lazari-Radek and Singer 2012).

  26. Intriguingly, C. L. Stevenson—surely Greene’s meta-ethical forebear—seems to have anticipated almost precisely this exchange: “If certain of our attitudes are shown to have the same origin as the taboos of savages, we may become disconcerted at the company we are forced to keep. After due consideration, of course, we may decide that our attitudes, however they may have originated, are unlike many taboos in that they will retain a former function, or have since acquired new ones. Hence we may insistently preserve them. But in the midst of such considerations we shall have been led to see our attitudes in a natural setting, and shall be more likely to change them with changing conditions. Hence anyone who wants to change a man’s attitudes can prepare the way by a genetic study.” (Stevenson 1944, 123–124).

References

  • Appiah, Kwame Anthony. 2008. Experiments in ethics. Cambridge, MA: Harvard University Press.

  • Baron, Jonathan. 1995. A psychological view of moral intuition. Harvard Review of Philosophy 5: 36–40.

  • Berker, Selim. 2009. The normative insignificance of neuroscience. Philosophy & Public Affairs 37(4): 293–329.

  • Block, Ned. 1995. On a confusion about a function of consciousness. Behavioral and Brain Sciences 18: 227–247.

  • Block, Ned. 2007. Consciousness, accessibility, and the mesh between psychology and neural science. Behavioral and Brain Sciences 30: 481–499.

  • Caruso, Eugene M., and Francesca Gino. 2011. Blind ethics: Closing one’s eyes polarizes moral judgments and discourages dishonest behavior. Cognition 118: 280–285.

  • Cohen, G.A. 2003. Facts and principles. Philosophy & Public Affairs 31: 211–245.

  • Cullity, Garrett. 2006. As you were? Moral philosophy and the aetiology of moral experience. Philosophical Explorations 9(1): 117–132.

  • Dancy, Jonathan. 2004. Ethics without principles. Oxford: Clarendon Press.

  • Darwall, Stephen. 2006. The second-person standpoint: Morality, respect, and accountability. Cambridge, MA: Harvard University Press.

  • de Lazari-Radek, Katarzyna, and Peter Singer. 2012. The objectivity of ethics and the unity of practical reason. Ethics 123(1): 9–31.

  • Dworkin, Ronald. 1996. Objectivity and truth: You’d better believe it. Philosophy & Public Affairs 25(2): 87–139.

  • Eskine, Kendall J., Natalie A. Kacinik, and Jesse J. Prinz. 2011. A bad taste in the mouth: Gustatory disgust influences moral judgment. Psychological Science 22: 295–299.

  • Fodor, Jerry. 1983. The modularity of mind. Cambridge, MA: MIT Press.

  • Fried, Charles. 1978. Biology and ethics: Normative implications. In Morality as a biological phenomenon, ed. G. Stent, 187–197. Berkeley: University of California Press.

  • Gigerenzer, Gerd. 2008. Moral intuition = fast and frugal heuristics? In Sinnott-Armstrong (2008a), 1–26.

  • Greene, J.D., R.B. Sommerville, L.E. Nystrom, J.M. Darley, and J.D. Cohen. 2001. An fMRI investigation of emotional engagement in moral judgment. Science 293: 2105–2108.

  • Greene, J.D., L.E. Nystrom, A.D. Engell, J.M. Darley, and J.D. Cohen. 2004. The neural bases of cognitive conflict and control in moral judgment. Neuron 44: 389–400.

  • Greene, Joshua D. 2008. The secret joke of Kant’s soul. In Moral psychology, vol. 3: The neuroscience of morality: Emotion, disease, and development, ed. W. Sinnott-Armstrong, 35–80. Cambridge, MA: MIT Press.

  • Greene, Joshua D. (manuscript). Beyond point-and-shoot morality: Why cognitive (neuro)science matters for ethics.

  • Hamilton, W.D. 1964. The genetical evolution of social behaviour. Journal of Theoretical Biology 7(1): 1–16.

  • Hare, R.M. 1963. Freedom and reason. Oxford: Oxford University Press.

  • Held, Virginia. 1996. Whose agenda? Ethics versus cognitive science. In Mind and morals: Essays on ethics and cognitive science, ed. Larry May, Marilyn Friedman, and Andy Clark, 69–88. Cambridge, MA: MIT Press.

  • Hooker, Brad, and Margaret Little (eds.). 2000. Moral particularism. Oxford: Oxford University Press.

  • Horowitz, Tamara. 1998. Philosophical intuitions and psychological theory. Ethics 108: 367–385.

  • Hume, David. 1777/1912. An enquiry concerning the principles of morals. Available online through Project Gutenberg. http://www.gutenberg.org/files/4320/4320-h/4320-h.htm.

  • Kahane, Guy, and Nicholas Shackel. 2010. Methodological problems in the neuroscience of moral judgment. Mind & Language 25(5): 561–582.

  • Kahneman, Daniel, and Amos Tversky. 1979. Prospect theory: An analysis of decision under risk. Econometrica 47(2): 263–292.

  • Kahneman, Daniel. 1994. The cognitive psychology of consequences and moral intuition. The Tanner Lecture on Human Values, University of Michigan, Ann Arbor, November 1994 (as cited in Kamm 1998).

  • Kahneman, Daniel. 2002. Maps of bounded rationality: A perspective on intuitive judgment and choice. In Les Prix Nobel, ed. T. Frangsmyr, 416–499. Stockholm: Nobel Foundation.

  • Kamm, Frances. 1998. Moral intuitions, cognitive psychology, and the harming-versus-not-aiding distinction. Ethics 108(3): 463–488.

  • Kamm, Frances. 2009. Neuroscience and moral reasoning: A note on recent research. Philosophy & Public Affairs 37(4): 330–345.

  • Kant, Immanuel. 1785/2002. Groundwork for the metaphysics of morals. Trans. A.W. Wood. New Haven: Yale University Press.

  • Korsgaard, Christine M. 1996. The sources of normativity. Cambridge: Cambridge University Press.

  • Levin, D.T., et al. 2002. Memory for centrally attended changing objects in an incidental real-world change detection paradigm. British Journal of Psychology 93: 289–302.

  • Levy, Neil. 2009. Empirically informed moral theory: A sketch of the landscape. Ethical Theory and Moral Practice 12(1): 3–8.

  • Nagel, Thomas. 1978. Ethics as an autonomous theoretical subject. In Morality as a biological phenomenon, ed. G. Stent, 198–208. Berkeley: University of California Press.

  • Nagel, Thomas. 1997. The last word. Oxford: Oxford University Press.

  • Nietzsche, Friedrich. 1901/1967. The will to power. Trans. W. Kaufmann. New York: Vintage Books.

  • Proulx, T., and S.J. Heine. 2008. The case of the transmogrifying experimenter: Reaffirmation of moral schema following implicit change detection. Psychological Science 19: 1294–1300.

  • Quinn, Warren. 1989. Actions, intentions, and consequences: The doctrine of doing and allowing. Philosophical Review 98(3): 287–312.

  • Rawls, John. 1971. A theory of justice. Cambridge, MA: Harvard University Press.

  • Scanlon, T.M. 1999. What we owe to each other. Cambridge, MA: Harvard University Press.

  • Scheffler, Samuel. 1994. The rejection of consequentialism. Oxford: Oxford University Press.

  • Schnall, S., J. Haidt, G.L. Clore, and A.H. Jordan. 2008. Disgust as embodied moral judgment. Personality and Social Psychology Bulletin 34: 1096–1109.

  • Sidgwick, Henry. 1874/1962. The methods of ethics. Chicago: University of Chicago Press.

  • Simons, D.J., and D.T. Levin. 1998. Failure to detect changes to people during a real-world interaction. Psychonomic Bulletin & Review 5: 644–649.

  • Singer, Peter. 1981. The expanding circle: Ethics and sociobiology. Oxford: Oxford University Press.

  • Singer, Peter. 2005. Ethics and intuitions. The Journal of Ethics 9(3/4): 331–352.

  • Sinnott-Armstrong, Walter (ed.). 2008a. Moral psychology, vol. 2: The cognitive science of morality. Cambridge, MA: MIT Press.

  • Sinnott-Armstrong, Walter. 2008b. Framing moral intuitions. In Sinnott-Armstrong (2008a).

  • Smith, Michael. 1994. The moral problem. New York: Wiley-Blackwell.

  • Sommers, Tamler. 2009. A very bad wizard: Morality behind the curtain. New York: McSweeney’s.

  • Stanovich, Keith E., and Richard F. West. 2000. Individual differences in reasoning: Implications for the rationality debate? Behavioral and Brain Sciences 23: 645–726.

  • Stevenson, Charles L. 1944. Ethics and language. New Haven: Yale University Press.

  • Sunstein, Cass R. 2005. Moral heuristics. Behavioral and Brain Sciences 28: 531–573.

  • Valdesolo, Piercarlo, and David DeSteno. 2006. Manipulations of emotional context shape moral judgment. Psychological Science 17: 476–477.

  • van Roojen, Mark. 1999. Reflective moral equilibrium and psychological theory. Ethics 109: 846–857.

  • Wheatley, Thalia, and Jonathan Haidt. 2005. Hypnotic disgust makes moral judgments more severe. Psychological Science 16(10): 780–784.

Acknowledgments

This paper has undergone many iterations, and I have certainly lost track of some of the people to whom thanks for written comments are owed. Still, at least some are: Ned Block, Stefano Cossara, Daniela Dover, Bill Glod, Guy Kahane, Hyunseop Kim, Joshua Knobe, Thomas Nagel, Michael Strevens, and two anonymous reviewers for this journal. Many of the ideas have come from discussions with Anne Barnhill, Justin Clarke-Doane, Jonny Cottrell, Andy Egan, Grace Helton, Matthew Liao, Steven Lukes, Simon Rippon, Jeff Sebo, Jon Simon, Knut Olav Skarsaune, Stephen Stich, Sharon Street, and participants in the NYU Philosophy Thesis Prep seminar and the NYU Sociology of Morals working group, as well as conference audiences at Hokkaido University and the University of Latvia. Further development of this research was supported by the VolkswagenStiftung’s European Platform for Life Sciences, Mind Sciences, and the Humanities (grant II/85 063).

Author information

Correspondence to Regina A. Rini.

About this article

Cite this article

Rini, R.A. Making Psychology Normatively Significant. J Ethics 17, 257–274 (2013). https://doi.org/10.1007/s10892-013-9145-y
