Ethics and Information Technology

Volume 15, Issue 2, pp 99–107

On the moral responsibility of military robots

  • Thomas Hellström
Original Paper


Abstract

This article discusses mechanisms and principles for the assignment of moral responsibility to intelligent robots, with special focus on military robots. We introduce autonomous power as a new concept and use it to identify the types of robots that call for moral consideration. It is furthermore argued that autonomous power, and in particular the ability to learn, is decisive for the assignment of moral responsibility to robots. As technological development leads to robots with increasing autonomous power, we should be prepared for a future in which people blame robots for their actions. It is important to investigate, already today, the mechanisms that control human behavior in this respect. The results may be used when designing future military robots, to counteract unwanted tendencies to assign responsibility to the robots. Independent of the responsibility issue, the moral quality of a robot's behavior should be seen as one of many performance measures by which we evaluate robots. How to design ethics-based control systems should be carefully investigated now. From a consequentialist view, it would indeed be highly immoral to develop robots capable of performing acts involving life and death without including some kind of moral framework.
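One way an ethics-based control system has been conceived in the literature (for instance, Arkin's "ethical governor") is as a filter that vetoes candidate actions violating hard moral constraints before the robot's planner may execute them. The following is a minimal, purely illustrative sketch, not any fielded system: the action attributes, constraint names, and threshold are hypothetical stand-ins chosen to mirror the principles of distinction and proportionality.

```python
# Illustrative sketch of an "ethical governor" style filter: candidate
# actions are checked against hard constraints, and only the actions
# that satisfy every constraint survive for the planner to choose from.
# All names and numbers below are hypothetical.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Action:
    name: str
    target_is_combatant: bool
    expected_collateral: float  # hypothetical harm estimate in [0, 1]


# A constraint returns True if the action is permissible under it.
Constraint = Callable[[Action], bool]


def discrimination(action: Action) -> bool:
    # Principle of distinction: only combatants may be engaged.
    return action.target_is_combatant


def proportionality(action: Action, threshold: float = 0.1) -> bool:
    # Expected collateral harm must stay below a fixed threshold.
    return action.expected_collateral < threshold


def governed_choice(candidates: List[Action],
                    constraints: List[Constraint]) -> List[Action]:
    """Return only the candidates that satisfy every constraint."""
    return [a for a in candidates if all(c(a) for c in constraints)]


candidates = [
    Action("engage_a", target_is_combatant=True, expected_collateral=0.05),
    Action("engage_b", target_is_combatant=False, expected_collateral=0.0),
    Action("engage_c", target_is_combatant=True, expected_collateral=0.5),
]
permitted = governed_choice(candidates,
                            [discrimination, lambda a: proportionality(a)])
# Only "engage_a" passes both constraints.
```

The design point of such an architecture is that the moral framework sits between deliberation and actuation, so an impermissible action is blocked regardless of what the underlying (possibly learned) controller proposes.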


Keywords: Moral responsibility · Robots · Military robots · Autonomy · Robot ethics



Acknowledgments

The author would like to thank several anonymous reviewers for their highly valuable comments and suggestions on this and earlier versions of the paper.



Copyright information

© Springer Science+Business Media B.V. 2012

Authors and Affiliations

  1. Department of Computing Science, Umeå University, Umeå, Sweden
