Minds and Machines, Volume 25, Issue 1, pp. 57–71

A Prospective Framework for the Design of Ideal Artificial Moral Agents: Insights from the Science of Heroism in Humans

Abstract

The growing field of machine morality has become increasingly concerned with how to develop artificial moral agents. However, there is little consensus on what constitutes an ideal moral agent, let alone an artificial one. Leveraging a recent account of heroism in humans, the aim of this paper is to provide a prospective framework for conceptualizing, and in turn designing, ideal artificial moral agents, namely those that would be considered heroic robots. First, an overview is provided of what it means to be an artificial moral agent. Next, a recent account of heroism is reviewed that defines the construct as the dynamic and interactive integration of character strengths (e.g., bravery and integrity) with situational constraints that afford the opportunity for moral behavior (i.e., moral affordances). With this as a foundation, a discussion is offered of what it might mean for a robot to be an ideal moral agent, by proposing a dynamic and interactive connectionist model of robotic heroism. Given the limited accounts of robots engaging in moral behavior, the case for extending robotic moral capacities beyond mere moral agency to the level of heroism is supported by drawing on exemplar situations in which robots demonstrate heroism in popular film and fiction.
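To give one concrete (and deliberately simplified) reading of this proposal, the sketch below implements a toy interactive-activation network in Python. It is not the model described in the paper: the node labels, connection weights, and decision threshold are all hypothetical, chosen only to illustrate how character-strength activations and moral-affordance inputs might be dynamically and interactively integrated before an agent commits to heroic action.

```python
# Minimal illustrative sketch (not the authors' model): an interactive
# activation network in which character-strength nodes and moral-affordance
# inputs settle into a stable state that may cross a threshold for action.
# All node names, weights, and parameters below are hypothetical.

import numpy as np

STRENGTHS = ["bravery", "integrity"]                    # internal character-strength nodes
AFFORDANCES = ["person_in_danger", "rescue_path_open"]  # situational inputs

# Hypothetical weights from affordance inputs to strength nodes
# (rows = strength nodes, columns = affordance inputs).
W_afford = np.array([[0.9, 0.6],
                     [0.5, 0.3]])

# Mutual excitation between strength nodes: the "interactive" part of the
# dynamic, interactive integration.
W_recur = np.array([[0.0, 0.4],
                    [0.4, 0.0]])

def settle(afford_input, steps=50, rate=0.2):
    """Leaky integration: update strength-node activations until the network settles."""
    a = np.zeros(len(STRENGTHS))
    for _ in range(steps):
        net = W_afford @ afford_input + W_recur @ a
        a += rate * (np.tanh(net) - a)   # move activations toward squashed net input
    return a

if __name__ == "__main__":
    situation = np.array([1.0, 1.0])     # both affordances strongly present
    activation = settle(situation)
    act = activation.mean() > 0.5        # hypothetical decision threshold
    print(dict(zip(STRENGTHS, activation.round(2))), "-> act heroically:", bool(act))
```

On this reading, heroic action is not triggered by any single node: it emerges only when the situation's affordances and the agent's character strengths jointly settle above threshold, which is one way to cash out the claim that heroism is a person-by-situation phenomenon rather than a fixed trait.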

Keywords

Machine morality · Moral agency · Heroism · Connectionism · Affordances · Character strengths

Copyright information

© Springer Science+Business Media Dordrecht 2015

Authors and Affiliations

Cognitive Sciences Laboratory, Institute for Simulation and Training, University of Central Florida, Orlando, USA