
The Science of Morality and its Normative Implications

Original Paper · Neuroethics

Abstract

Neuromoral theorists are those who claim that a scientific understanding of moral judgment through the methods of psychology, neuroscience and related disciplines can have normative implications and can be used to improve the human ability to make moral judgments. We consider three neuromoral theories: one suggested by Gazzaniga, one put forward by Gigerenzer, and one developed by Greene. By contrasting these theories we reveal some of the fundamental issues that neuromoral theories in general have to address. One important issue concerns whether the normative claims that neuromoral theorists would like to make are to be understood in moral terms or in non-moral terms. We argue that, on either a moral or a non-moral interpretation of these claims, neuromoral theories face serious problems. Therefore, neither the moral nor the non-moral reading of the normative claims makes them philosophically viable.


Notes

  1. This view will be highly objectionable to certain moral theorists, particularly those of a Kantian orientation [1]. The view appears to entail that moral reasoning can be treated as an instrumental means to some other, non-moral, end. Moreover, on one possible view the ‘collective usefulness’ and ‘meta-moral’ criteria coincide: this is so, for example, if one believes that (some of) the moral truths just are whatever best serves a particular set of social aims. Views in this vicinity are defended by Boyd [2] and Copp [3].

  2. A possible view, for example, is that in some contexts people make many performance errors when thinking about moral matters, and that an understanding of UM will help us to avoid such errors. Notice that the proponent of this view would have to find a principled way to make a competence/performance distinction in the context of morality, which may turn out to be a difficult task.

  3. Mill [22, p. 69] himself addressed this classic objection: “[…] [D]efenders of utility often find themselves called upon to reply to such objections as this—that there is not time, previous to action, for calculating and weighing the effects of any line of conduct on the general happiness. This is exactly as if any one were to say that it is impossible to guide our conduct by Christianity, because there is not time, on every occasion on which anything has to be done, to read through the Old and New Testaments.”

  4. This is an example of what Shelly Kagan [23, p. 64] calls “the most common objection to consequentialism”; see Lenman [24] for an influential recent development. However, according to Kagan, the objection generalizes to all moral theories, since any plausible moral theory must allow some role for consequences. And the problem is not peculiar to maximizing theories, since any consequences will causally echo into the unknowable future [23, pp. 64–69].

  5. The presence/absence of personal force explains some of the difference between footbridge and switch. The results of other moral dilemmas suggest that other factors (e.g. whether the harm is intentionally and actively produced) are relevant too and that there are interaction effects between these other factors and personal force [31].

  6. The ‘undisturbed’ operation of the manual system will not result in such moral judgment in all cases. For example, if subjects are asked to solve footbridge first and switch immediately afterwards, they will often give a deontological answer to both footbridge and switch [36]. This may be because subjects, having just given a deontological answer to footbridge and being unsure whether there is a moral difference between footbridge and switch, give a deontological answer to switch too in order to maintain coherence, despite the fact that the harm hypothesized in switch does not elicit a negative emotional response via the automatic system.

  7. Greene claims that there is much independent evidence in support of the dual-process theory of moral judgment, coming from studies of psychopaths, individual differences, the way particular tasks interfere with or enhance emotional reactions or reasoning abilities, etc. Cf. [34]. Greene’s account has generated much controversy [37–45] and various alternative proposals have been made [7, 45–47].

  8. Consider another example. According to [34], some recent research suggests that moral permissibility judgments are affected (through the emotions generated by the automatic system) by how close one is to the person whose death is needed in order to save a greater number of lives. The closer one is, the less likely it is that one will judge the action permissible. Since, according to Greene (who here follows Peter Singer), distance is morally irrelevant, one ought to try to avoid being affected by distance when forming moral judgments. But see Kamm [48] for the view that distance is, at least in some circumstances, morally relevant.

  9. Here is the third of these axioms: “The good of any one individual is of no more importance, from the point of view (if I may say so) of the Universe, than the good of any other” [50, p. 382].

  10. It could be argued that to be wrong about moral facts is morally wrong, and that this moral wrongness is additional to the wrongness of any action that originates from such erroneous moral knowledge. Alternatively, it could be argued that errors in moral knowledge are morally wrong just because they lead to morally wrong actions. We do not want to enter into this interesting and important issue here, since what concerns us at the moment is to show that following a morally irrelevant factor can plausibly be seen as an instance of moral wrongness (as opposed to other kinds of wrongness).

  11. Some of Gigerenzer’s worries about the drawbacks of informationally rich thinking, at least in the context of everyday cognition, may be relevant here.

References

  1. Kant, I. 1996. Critique of practical reason. In Practical philosophy, ed. M. Gregor. Cambridge: Cambridge University Press. First published 1788.

  2. Boyd, R. 1988. How to be a moral realist. In Essays on moral realism, ed. G. Sayre-McCord, 181–228. Ithaca: Cornell University Press.


  3. Copp, D. 2008. Darwinian skepticism about moral realism. Philosophical Issues 18(1): 186–206.


  4. Joyce, R. 2002. The myth of morality. Cambridge: Cambridge University Press.


  5. Joyce, R. 2006. The evolution of morality. Cambridge: MIT Press.


  6. Gazzaniga, M. 2005. The ethical brain. New York: Dana Press.


  7. Mikhail, J. 2007. Universal moral grammar: Theory, evidence and the future. Trends in Cognitive Sciences 11(4): 143–152.


  8. Hauser, M., L. Young, and F. Cushman. 2008. Reviving Rawls’s linguistic analogy: Operative principles and the causal structure of moral actions. In Moral psychology vol. 2: The cognitive science of morality, ed. W. Sinnott-Armstrong, 107–143. Cambridge: MIT Press.


  9. Dwyer, S. 2009. Moral dumbfounding and the linguistic analogy: Methodological implications for the study of moral judgment. Mind and Language 24(3): 274–296.


  10. Sterelny, K. 2012. The evolved apprentice: How evolution made humans unique. Cambridge: MIT Press.


  11. Gigerenzer, G. 2008. Moral intuition = fast and frugal heuristics? In Moral psychology vol. 2: The cognitive science of morality, ed. W. Sinnott-Armstrong, 1–26. Cambridge: MIT Press.


  12. Gigerenzer, G. 2010. Moral satisficing: Rethinking moral behavior as bounded rationality. Topics in Cognitive Science 2: 528–554.


  13. Simon, H. 1955. A behavioral model of rational choice. Quarterly Journal of Economics 69: 99–118.


  14. Simon, H. 1956. Rational choice and the structure of the environment. Psychological Review 63: 129–138.


  15. Gigerenzer, G., P.M. Todd, and ABC Research Group. 1999. Simple heuristics that make us smart. New York: Oxford University Press.


  16. Gigerenzer, G. 2000. Adaptive thinking. New York: Oxford University Press.


  17. Gigerenzer, G., and R. Selten (eds.). 2001. Bounded rationality. Cambridge: MIT Press.


  18. Johnson, E., and D. Goldstein. 2003. Do defaults save lives? Science 302: 1338–1339.


  19. Lipsey, R.G., and K. Lancaster. 1956. The general theory of the second best. Review of Economic Studies 24: 11–32.


  20. Todd, P.M., and G. Gigerenzer. 2007. Mechanisms of ecological rationality: Heuristics and environments that make us smart. In Oxford handbook of evolutionary psychology, ed. R. Dunbar and L. Barrett. Oxford: Oxford University Press.


  21. Sunstein, C.R., and R.H. Thaler. 2003. Libertarian paternalism is not an oxymoron. The University of Chicago Law Review 70(4): 1159–1202.


  22. Mill, J.S. 1998. Utilitarianism, ed. R. Crisp. Oxford: Oxford University Press. First published 1861.

  23. Kagan, S. 1998. Normative ethics. Boulder: Westview Press.


  24. Lenman, J. 2000. Consequentialism and cluelessness. Philosophy and Public Affairs 29(4): 342–370.


  25. Gigerenzer, G. 2008. Reply to comments. In Moral psychology vol. 2: The cognitive science of morality, ed. W. Sinnott-Armstrong, 41–46. Cambridge: MIT Press.


  26. Gigerenzer, G., and T. Sturm. 2012. How (far) can epistemology be naturalized? Synthese 187(1): 243–268.


  27. Sunstein, C.R. 2008. Fast, frugal and (sometimes) wrong. In Moral psychology vol. 2: The cognitive science of morality, ed. W. Sinnott-Armstrong, 27–31. Cambridge: MIT Press.


  28. Greene, J.D., et al. 2001. An fMRI investigation of emotional engagement in moral judgment. Science 293: 2105–2108.


  29. Greene, J.D., et al. 2004. The neural bases of cognitive conflict and control in moral judgment. Neuron 44: 389–400.


  30. Greene, J.D., et al. 2008. Cognitive load selectively interferes with utilitarian moral judgment. Cognition 107: 1144–1154.


  31. Greene, J.D., et al. 2009. Pushing moral buttons: The interaction between personal force and intention in moral judgment. Cognition 111: 364–371.


  32. Greene, J.D. 2008. The secret joke of Kant’s soul. In Moral psychology vol. 3: The neuroscience of morality, ed. W. Sinnott-Armstrong, 35–79. Cambridge: MIT Press.


  33. Greene, J.D. 2010. Notes on “The Normative Insignificance of Neuroscience” by Selim Berker. Available at http://www.wjh.harvard.edu/~jgreene/GreeneWJH/Greene-Notes-on-Berker-Nov10.pdf (accessed May 2nd, 2012).

  34. Greene, J.D. (unpublished). Beyond point-and-shoot morality: Why cognitive (neuro)science matters for ethics. Forthcoming in Ethics.

  35. Haidt, J. 2001. The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review 108(4): 814–834.


  36. Schwitzgebel, E., and F. Cushman. 2012. Expertise in moral reasoning? Order effects on moral judgment in professional philosophers and non-philosophers. Mind and Language 27(2): 135–153.


  37. Berker, S. 2009. The normative insignificance of neuroscience. Philosophy and Public Affairs 37(4): 293–329.


  38. Haidt, J., and S. Kesebir. 2010. Morality. In Handbook of social psychology, 5th ed, ed. S. Fiske, D. Gilbert, and G. Lindzey, 797–832. Hoboken: Wiley.


  39. Kahane, G. 2013. The armchair and the trolley: An argument for experimental ethics. Philosophical Studies 162(2): 421–445.


  40. Kahane, G., et al. 2012. The neural basis of intuitive and counterintuitive moral judgment. Social Cognitive and Affective Neuroscience 7(4): 393–402.


  41. Kahane, G., and N. Shackel. 2010. Methodological issues in the neuroscience of moral judgement. Mind and Language 25(5): 561–582.


  42. Klein, C. 2011. The dual track theory of moral decision-making: A critique of the neuroimaging evidence. Neuroethics 4(2): 143–162.


  43. McGuire, J., et al. 2009. A reanalysis of the personal/impersonal distinction in moral psychology research. Journal of Experimental Social Psychology 45(3): 577–580.


  44. Mikhail, J. 2008. Moral cognition and computational theory. In Moral psychology vol. 3: The neuroscience of morality, ed. W. Sinnott-Armstrong, 81–91. Cambridge: MIT Press.


  45. Moll, J., et al. 2005. The neural basis of human moral cognition. Nature Reviews Neuroscience 6: 799–809.


  46. Haidt, J. 2007. The new synthesis in moral psychology. Science 316(5827): 998–1002.


  47. Haidt, J. 2012. The righteous mind: Why good people are divided by politics and religion. New York: Pantheon Books.


  48. Kamm, F. 2007. Intricate ethics. New York: Oxford University Press.


  49. Singer, P. 2005. Ethics and intuitions. The Journal of Ethics 9: 331–352.


  50. Sidgwick, H. 1907. The methods of ethics, 7th ed. London: Macmillan.



Acknowledgments

The authors would like to thank Joshua Greene for sharing unpublished materials and two anonymous referees for their helpful comments. Research was supported by the European Institute of Oncology (TB), the Umberto Veronesi Foundation (TB), and the VolkswagenStiftung (RR).

Author information


Corresponding author

Correspondence to Matteo Mameli.



Cite this article

Bruni, T., Mameli, M. & Rini, R.A. The Science of Morality and its Normative Implications. Neuroethics 7, 159–172 (2014). https://doi.org/10.1007/s12152-013-9191-y

