Robot minds and human ethics: the need for a comprehensive model of moral decision making

  • Original paper
  • Published in: Ethics and Information Technology

Abstract

Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the “ought” of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, and yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms, e.g., reasoning, moral sentiments, heuristics, intuitions, or a moral grammar, in the making of moral decisions. However, assembling from the bottom up a system capable of accommodating moral considerations draws attention to the importance of a much wider array of mechanisms in honing moral intelligence. Moral machines need not emulate human cognitive faculties in order to function satisfactorily in responding to morally significant situations. But working through methods for building AMAs will have a profound effect in deepening an appreciation of the many mechanisms that contribute to moral acumen, and of the manner in which these mechanisms work together. Building AMAs highlights the need for a comprehensive model of how humans arrive at satisfactory moral judgments.


Notes

  1. Agents within computer systems are often referred to as bots; the term (ro)bots therefore seems a useful way to refer collectively to physical robots and to software agents within computers and networks.

  2. While some philosophers have tried to distinguish ethics from morals, in this paper I bow to the more common practice, which is to use the words interchangeably.

  3. Moral Machines (Wallach and Allen 2009), p. 8.

  4. p. 495.

References

  • Allen, C. (2002). Calculated morality: Ethical computing in the limit. In I. Smit & G. Lasker (Eds.), Cognitive, emotive and ethical aspects of decision making and human action (Vol. I). Baden-Baden, Germany/Windsor, ON: IIAS.

  • Allen, C., Varner, G., & Zinser, J. (2000). Prolegomena to any future artificial moral agent. Journal of Experimental and Theoretical Artificial Intelligence, 12, 251–261.

  • Anderson, M., & Anderson, S. (2006). Machine ethics. IEEE Intelligent Systems, 21(4), 10–11.

  • Anderson, M., Anderson, S., & Armen, C. (2006). An approach to computing ethics. IEEE Intelligent Systems, 21(4), 56–63.

  • Axelrod, R., & Hamilton, W. (1981). The evolution of cooperation. Science, 211, 1390–1396.

  • Bentham, J. ([1823] 2008). Introduction to the principles of morals and legislation. Whitefish, MT: Kessinger Publishing.

  • Danielson, P. (1992). Artificial morality: Virtuous robots for virtual games. New York: Routledge.

  • Darley, J., & Batson, D. (1973). From Jerusalem to Jericho: A study of situational and dispositional variables in helping behavior. Journal of Personality and Social Psychology, 27, 100–108.

  • de Waal, F. (1996). Good natured: The evolution of right & wrong in humans and other animals. Cambridge, MA: Harvard University Press.

  • Flack, J., & de Waal, F. (2000) ‘Any Animal Whatever’: Darwinian building blocks of morality in monkeys and apes. In L. Katz (Ed.), Evolutionary origins of morality (pp. 1–30). Imprint Academic.

  • Franklin, S. (2003). IDA: A conscious artifact? Journal of Consciousness Studies, 10, 47–66.

  • Franklin, S., & Patterson, F. G. (2006). The LIDA architecture: Adding new modes of learning to an intelligent, autonomous software agent. In IDPT-2006 Proceedings (Integrated Design and Process Technology). Society for Design and Process Science.

  • Gigerenzer, G. (2010). Moral satisficing: Rethinking morality as bounded rationality. TopiCS (forthcoming).

  • Greene, J., Sommerville, B., Nystrom, L., Darley, J., & Cohen, J. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293, 2105–2108.

  • Greenwald, A., & Banaji, M. (1995). Implicit social cognition: Attitudes, self-esteem, and stereotypes. Psychological Review, 102, 4–27.

  • Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814–834.

  • Haidt, J. (2003). The moral emotions. In R. J. Davidson, K. R. Scherer, & H. H. Goldsmith (Eds.), Handbook of affective sciences (pp. 852–870). Oxford: Oxford University Press.

  • Hamilton, W. (1964a). The genetical evolution of social behaviour. I. Journal of Theoretical Biology, 7, 1–16.

  • Hamilton, W. (1964b). The genetical evolution of social behaviour. II. Journal of Theoretical Biology, 7, 17–52.

  • Hauser, M. (2006). Moral minds: How nature designed our universal sense of right and wrong. New York: Ecco.

  • Hume, D. ([1739–1740] 2009). A treatise of human nature: Being an attempt to introduce the experimental method of reasoning into moral subjects. Ithaca: Cornell University Press.

  • Isen, A., & Levin, P. F. (1972). Effect of feeling good on helping: Cookies and kindness. Journal of Personality and Social Psychology, 21, 384–388.

  • Kohlberg, L. (1981). Essays on moral development, Vol. 1: The philosophy of moral development. San Francisco: Harper & Row.

  • Kohlberg, L. (1984). Essays on moral development, Vol. 2: The psychology of moral development. San Francisco: Harper & Row.

  • Lapsley, D., & Narvaez, D. (Eds.). (2004). Moral development, self, and identity. Mahwah, NJ: Lawrence Erlbaum Associates.

  • Mikhail, J. (2000). Rawls’ linguistic analogy: A study of the “generative grammar” model of moral theory described by John Rawls in A Theory of Justice. PhD Dissertation, Cornell University.

  • Miller, G. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81–97.

  • Moore, G. E. ([1903] 2008). Principia ethica. Cambridge: Cambridge University Press.

  • Nucci, L., & Narvaez, D. (2008). Handbook of moral and character education. New York: Routledge.

  • Piaget, J. (1972). Judgment and reasoning in the child. Totowa, NJ: Littlefield, Adams and Company.

  • Rawls, J. ([1971] 1999). A theory of justice. Cambridge, MA: Harvard University Press.

  • Sanfey, A., Rilling, J., Aronson, J., Nystrom, L., & Cohen, J. (2003) The neural basis of economic decision-making in the Ultimatum Game. Science, 300(5626), 1755–1758.

  • Searle, J. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–458.

  • Simon, H. (1957). A behavioral model of rational choice. In Models of man, social and rational: Mathematical essays on rational human behavior in a social setting. New York: Wiley.

  • Simon, H. (1982). Models of bounded rationality (Vols. 1 and 2). Cambridge, MA: MIT Press.

  • Singer, P. (1990). Animal liberation. New York: New York Review Books.

  • Smith, A. ([1759] 2004). The theory of moral sentiments. Whitefish, MT: Kessinger Publishing.

  • Torrance, S. (2008). Ethics and consciousness in artificial agents. AI and Society, 22(4), 495–521.

  • Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131.

  • Uleman, J., & Bargh, J. (Eds.). (1989). Unintended thought. New York: Guilford.

  • Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. New York: Oxford University Press.

  • Wallach, W., Allen, C., & Smit, I. (2008). Machine morality: Bottom-up and top-down approaches for modelling human moral faculties. AI and Society, 22(4), 565–582.

  • Wallach, W., Franklin, S., & Allen, C. (2010). A conceptual and computational model of moral decision making in human and artificial agents. TopiCS (forthcoming).

  • Wilson, E. (1975). Sociobiology: The new synthesis. Cambridge, MA: Harvard University Press.

  • Yudkowsky, E. (2001). What is Friendly AI? Available online at http://singinst.org/ourresearch/publications/what-is-friendly-ai.html.

Acknowledgments

Colin Allen and Iva Smit contributed to many of the ideas in this paper during our collaborative work. I am also most appreciative of the many helpful suggestions from four anonymous reviewers.

Author information

Correspondence to Wendell Wallach.

Additional information

Moral Machines Blog: http://moralmachines.blogspot.com/

Technology and Ethics at the Yale Interdisciplinary Center for Bioethics: http://www.yale.edu/bioethics/studygrps_techno.shtml

About this article

Cite this article

Wallach, W. Robot minds and human ethics: the need for a comprehensive model of moral decision making. Ethics Inf Technol 12, 243–250 (2010). https://doi.org/10.1007/s10676-010-9232-8
