Computationally rational agents can be moral agents

Abstract

This article advances a concise argument for computational rationality as a basis for artificial moral agency. Some ethicists have long argued that rational agents can become artificial moral agents, but most of their views come from purely philosophical perspectives, which makes it difficult to transfer their arguments to a scientific and analytical frame of reference. The result has been a fragmented approach to the conceptualisation and design of artificial moral agents. In this article, I argue for computational rationality as an integrative element that effectively combines the philosophical and computational aspects of artificial moral agency. This leads to a philosophically coherent and scientifically consistent model for building artificial moral agents. Besides offering a possible answer to the question of how to build artificial moral agents, the model also invites sound debate from multiple disciplines, which should help advance the field of machine ethics.

Notes

  1. Sometimes referred to as machine morality, computational morality or artificial morality.

  2. See, for example, the works of Wu and Lin (2018), Arnold et al. (2017), Conitzer et al. (2017) and Yu et al. (2018). Much of this work offers tremendous technical value for building AMAs, but very little by way of conceptualisation and formulation of an AMA.

  3. Chapter 13 of Russell and Norvig (2009) gives a great introduction to decision theory in computer science.

  4. Can be found in the Nicomachean Ethics, Book VI.

  5. “A moral agent is an agent whom one appropriately holds responsible for its actions and their consequences, and moral agency is the distinct type of agency that agent possesses”.

  6. According to Moor (2006), an explicit ethical agent is one that can hold an explicit ethical representation of a given situation, and use that representation to respond in a manner that is ethical (illustrated in the sketch following these notes).

  7. The use of the terms top-down and bottom-up in both the cited philosophical and scientific disciplines is conceptually the same: in both cases, top-down means starting from a pre-defined ethical framework or a computational model, and bottom-up means learning an ethical representation or a computational model from the available data.

References

  1. Abney, K. (2012). Robotics, ethical theory, and metaethics: A guide for the perplexed (Chap. 3). In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics. Cambridge: The MIT Press.

  2. Allen, C., & Wallach, W. (2012). Moral machines: Contradiction in terms, or abdication of human responsibility? (Chap. 4). In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics. Cambridge: The MIT Press.

  3. Allen, C., Smit, I., & Wallach, W. (2005). Artificial morality: Top-down, bottom-up, and hybrid approaches. Ethics and Information Technology, 7(3), 149–155. https://doi.org/10.1007/s10676-006-0004-4.

  4. Anderson, M., & Anderson, S. L. (2007). Machine ethics: Creating an ethical intelligent agent. AI Magazine, 28(4), 15. https://doi.org/10.1609/aimag.v28i4.2065, http://www.aaai.org/ojs/index.php/aimagazine/article/view/2065.

  5. Arnold, T., Kasenberg, D., & Scheutz, M. (2017). Value alignment or misalignment: What will keep systems accountable? In Workshops at the Thirty-First AAAI Conference on Artificial Intelligence.

  6. Churchland, P. S. (2014). The neurobiological platform for moral values. Behaviour, 151(2–3), 283–296. https://doi.org/10.1163/1568539X-00003144.

  7. Coeckelbergh, M. (2014). The moral standing of machines: Towards a relational and non-Cartesian moral hermeneutics. Philosophy and Technology, 27(1), 61–77.

  8. Conitzer, V., Sinnott-Armstrong, W., Borg, J. S., Deng, Y., & Kramer, M. (2017). Moral decision making frameworks for artificial intelligence. In Thirty-First AAAI Conference on Artificial Intelligence, https://pdfs.semanticscholar.org/a3bb/ffdcc1c7c4cae66d6af373651389d94b7090.pdf.

  9. Daily, M., Medasani, S., Behringer, R., & Trivedi, M. (2017). Self-driving cars. Computer, 50(12), 18–23. https://doi.org/10.1109/MC.2017.4451204

  10. Dameski, A. (2018). A comprehensive ethical framework for AI entities: Foundations. In M. Iklé, A. Franz, R. Rzepka, B. Goertzel, (Eds.), International Conference on Artificial General Intelligence, pp. 42–51. Berlin: Springer. https://doi.org/10.1007/978-3-319-97676-1.

  11. Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379. https://doi.org/10.2139/ssrn.1124296.

  12. Franklin, S. (2003). A conscious artifact? Journal of Consciousness Studies, 10(4–5), 47–66.

  13. Franklin, S., Madl, T., Mello, S. D., & Snaider, J. (2014). LIDA: A systems-level architecture for cognition, emotion, and learning. IEEE Transactions on Autonomous Mental Development, 6(1), 19–41.

  14. Genewein, T., Leibfried, F., Grau-Moya, J., & Braun, D. A. (2015). Bounded rationality, abstraction, and hierarchical decision-making: An information-theoretic optimality principle. Frontiers in Robotics and AI, 2(November), 1–24. https://doi.org/10.3389/frobt.2015.00027.

  15. Gershman, S. J., Horvitz, E. J., & Tenenbaum, J. B. (2015). Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science, 349(6245), 273–278. https://doi.org/10.1126/science.aac6076.

  16. Himma, K. E. (2009). Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? Ethics and Information Technology, 11(1), 19–29. https://doi.org/10.1007/s10676-008-9167-5.

  17. Horvitz, E. J. (1987). Reasoning about beliefs and actions under computational resource constraints. In Proceedings of the Third Workshop on Uncertainty in Artificial Intelligence, AAAI and Association for Uncertainty in Artificial Intelligence, pp. 429–444. http://erichorvitz.com/u87.htm.

  18. Horvitz, E. J. (1988). Reasoning under varying and uncertain resource constraints. In AAAI, pp. 111–116.

  19. Horvitz, E. J. (1989). Rational metareasoning and compilation for optimizing decisions under bounded resources. In Proceedings of Computational Intelligence ’89, Association of Computing Machinery, Milan, Italy, http://erichorvitz.com/rationality_89.htm.

  20. Horvitz, E. J., Cooper, G. F., & Heckerman, D. E. (1989). Reflection and action under scarce resources: Theoretical principles and empirical study. IJCAI, 2, 1121–1127.

  21. Jiang, F., Jiang, Y., Zhi, H., Dong, Y., Li, H., Ma, S., et al. (2017). Artificial intelligence in healthcare: Past, present and future. BMJ. https://doi.org/10.1136/svn-2017-000101.

  22. Johnson, D. G. (2006). Computer systems: Moral entities but not moral agents. In Machine ethics (pp. 168–183). Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511978036.012.

  23. Leviathan, Y., & Matias, Y. (2017). Google AI Blog: Google Duplex: An AI system for accomplishing real-world tasks over the phone. https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html.

  24. Lewis, R. L., Howes, A., & Singh, S. (2014). Computational rationality: Linking mechanism and behavior through bounded utility maximization. Topics in Cognitive Science. https://doi.org/10.1111/tops.12086.

  25. Liao, S. M. (2010). The basis of human moral status. Journal of Moral Philosophy, 7(2), 1–31. https://doi.org/10.1163/174552409X12567397529106.

  26. Lucentini, D. F., & Gudwin, R. R. (2015). A comparison among cognitive architectures: A theoretical analysis. Procedia Computer Science, 71, 56–61. https://doi.org/10.1016/j.procs.2015.12.198.

  27. Marwala, T. (2013). Semi-bounded rationality: A model for decision making. arXiv preprint arXiv:1305.6037.

  28. McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth summer research project on artificial intelligence. AI Magazine, 27(4), 12–14. https://doi.org/10.1609/aimag.v27i4.1904.

  29. Miller, F. D. (1984). Aristotle on rationality in action. The Review of Metaphysics, 37(3), 499–520, https://www.jstor.org/stable/20128047.

  30. Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4), 18–21. https://doi.org/10.1109/MIS.2006.80.

  31. Parthemore, J., & Whitby, B. (2013). What makes any agent a moral agent? Reflections on machine consciousness and moral agency. International Journal of Machine Consciousness, 5(2), 105–129. https://pdfs.semanticscholar.org/3ff2/49fe3c8b3a2c94ae762b76b2dd0203f1f789.pdf.

  32. Parthemore, J., & Whitby, B. (2014). Moral agency, moral responsibility, and artifacts: What existing artifacts fail to achieve (and why), and why they, nevertheless, can (and do!) make moral claims upon us. International Journal of Machine Consciousness, 6(2), 141–161. https://doi.org/10.1142/S1793843014400162.

  33. Rottschaefer, W. A. (2000). Naturalizing ethics: The biology and psychology of moral agency. Zygon, 35(5–6), 253–286. https://doi.org/10.1111/0591-2385.00276.

  34. Russell, S. J., & Norvig, P. (2009). Artificial intelligence: A modern approach (3rd ed.). Upper Saddle River: Prentice Hall.

  35. Russell, S. J., & Subramanian, D. (1995). Provably bounded-optimal agents. Journal of Artificial Intelligence Research, 2, 575–609.

  36. Sapaty, P. S. (2015). Military robotics: Latest trends and spatial grasp solutions. IJARAI International Journal of Advanced Research in Artificial Intelligence, 4(4), 9–18.

  37. Scheutz, M., & Malle, B. F. (2017). Moral robots. In L. S. M. Johnson & K. S. Rommelfanger (Eds.), The Routledge handbook of neuroethics. Abington: Routledge. https://doi.org/10.4324/9781315708652.ch24.

  38. Schlosser, M. (2015). Agency. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy, fall 2015 edition. Stanford: Metaphysics Research Lab, Stanford University.

  39. Selten, R. (1990). Bounded rationality. Journal of Institutional and Theoretical Economics (JITE), 146(4), 649–658.

  40. Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., et al. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354. https://doi.org/10.1038/nature24270.

  41. Simon, H. A. (1955). A behavioral model of rational choice. The Quarterly Journal of Economics, 69(1), 99–118.

  42. Simon, H. A. (1972). Theories of bounded rationality. Decision and Organization, 1(1), 161–176.

  43. Sullins, J. P. (2006). When is a robot a moral agent? IRIE: International Review of Information Ethics. http://sonoma-dspace.calstate.edu/handle/10211.1/427.

  44. Torrance, S. (2008). Ethics and consciousness in artificial agents. AI and Society, 22(4), 495–521. https://doi.org/10.1007/s00146-007-0091-8.

  45. Torrance, S. (2013). Artificial agents and the expanding ethical circle. AI and Society, 28(4), 399–414. https://doi.org/10.1007/s00146-012-0422-2.

  46. Turing, A. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.

  47. Wallach, W., Allen, C., & Franklin, S. (2011). Consciousness and ethics: Artificially conscious moral agents. International Journal of Machine Consciousness, 03(01), 177–192. https://doi.org/10.1142/S1793843011000674.

  48. Wu, Y. H., & Lin, S. D. (2018). A low-cost ethics shaping approach for designing reinforcement learning agents. The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18). arXiv:1712.04172.

  49. Yu, H., Shen, Z., Miao, C., Leung, C., Lesser, V. R., & Yang, Q. (2018). Building ethics into artificial intelligence. Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI), pp. 5527–5533. http://moralmachine.mit.edu/.

  50. Zilberstein, S. (2013). Metareasoning and bounded rationality. In M. T. Cox & A. Raja (Eds.), Metareasoning: Thinking about thinking (pp. 27–40). Cambridge: MIT Press. https://doi.org/10.7551/mitpress/9780262014809.003.0003.

Author information

Corresponding author

Correspondence to Bongani Andy Mabaso.

About this article

Cite this article

Mabaso, B.A. Computationally rational agents can be moral agents. Ethics Inf Technol (2020). https://doi.org/10.1007/s10676-020-09527-1

Keywords

  • Artificial moral agency
  • Computational rationality
  • Bounded rationality
  • Machine ethics