Artificial moral agents are infeasible with foreseeable technologies

  • Original Paper
  • Published in: Ethics and Information Technology

Abstract

For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and there is little prospect of achieving them. With foreseeable technologies, an artificial agent will carry zero responsibility for its behaviour, and humans will retain full responsibility.

Acknowledgments

The authors thank DELETED and especially the anonymous reviewers for their comments and feedback. This article is UNCLASSIFIED and approved for public release. Any opinions in this document are those of the author alone, and do not necessarily represent those of DELETED.

Author information

Correspondence to Patrick Chisan Hew.

About this article

Cite this article

Hew, P.C. Artificial moral agents are infeasible with foreseeable technologies. Ethics Inf Technol 16, 197–206 (2014). https://doi.org/10.1007/s10676-014-9345-6
