Ethics and Information Technology, Volume 16, Issue 3, pp 197–206

Artificial moral agents are infeasible with foreseeable technologies

Patrick Chisan Hew
Original Paper


For an artificial agent to be morally praiseworthy, its rules for behaviour, and the mechanisms for supplying those rules, must not be supplied entirely by external humans. Such systems would be a substantial departure from current technologies and theory, and are a remote prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behaviour, and humans will retain full responsibility.


Keywords: Artificial agent · Moral agent · Ethical agent · Moral responsibility · Automation · Robotics



The author thanks DELETED and especially the anonymous reviewers for their comments and feedback. This article is UNCLASSIFIED and approved for public release. Any opinions in this document are those of the author alone, and do not necessarily represent those of DELETED.



Copyright information

© Her Majesty the Queen in Right of Australia 2014

Authors and Affiliations

Defence Science and Technology Organisation, Rockingham DC, Australia
