Negotiating autonomy and responsibility in military robots

Abstract

Central to the ethical concerns raised by the prospect of increasingly autonomous military robots are issues of responsibility. In this paper we examine different conceptions of autonomy within the discourse on these robots to bring into focus what is at stake when it comes to the autonomous nature of military robots. We argue that due to the metaphorical use of the concept of autonomy, the autonomy of robots is often treated as a black box in discussions about autonomous military robots. When the black box is opened up and we see how autonomy is understood and ‘made’ by those involved in the design and development of robots, the responsibility questions change significantly.

Notes

  1. Hellstrom does not argue that autonomous robots will themselves be responsible, but that we will be inclined to consider them responsible when they are responsive to praise and blame. Asaro (2007) entertains the possibility of robots being legally liable and subject to punishment by comparing legal liability for robots to the legal liability of corporations. Wallach (2013) suggests that: “If and when robots become ethical actors that can be held responsible for their actions, we can then begin debating whether they are no longer machines and are deserving of some form of personhood.”

  2. There are exceptions, such as Matthias (2004), who specifies several different kinds of programming that are considered autonomous.

  3. In its Report on Technological Horizons, the Office of the Chief Scientist of the U.S. Air Force concludes that the single greatest theme to emerge from the report “is the need, opportunity, and potential to dramatically advance technologies that can allow the Air Force to gain the capability increases, manpower efficiencies, and cost reductions available through far greater use of autonomous systems in essentially all aspects of Air Force operations” (2010, p. ix).

  4. A Task Force of the U.S. Defense Science Board defined autonomy as “a capability (or a set of capabilities) that enables a particular action of a system to be automatic or, within programmed boundaries, ‘self-governing’” (U.S. DoD 2012).

  5. See http://www.raytheon.com/capabilities/products/phalanx/.

References

  • Adams, T. (2001). Future warfare and the decline of human decision-making. Parameters, 31, 55–71.

  • Asaro, P. (2007). Robots and responsibility from a legal perspective. Proceedings of the IEEE Conference on Robotics and Automation, Workshop on Roboethics, April 14, 2007, Rome.

  • Asaro, P. (2008). How just could a robot war be? In P. Brey, A. Briggle, & K. Waelbers (Eds.), Current Issues in computing and philosophy (pp. 50–64). Amsterdam, The Netherlands: IOS Press.

  • Bekey, G. (2005). Autonomous robots: From biological inspiration to implementation and control. Cambridge, MA: MIT Press.

  • Bijker, W. E., Hughes, T. P., & Pinch, T. (1987). The social construction of technological systems: New directions in the sociology and history of technology. London, UK: The MIT Press.

  • Boyd, J. (1987). A discourse on winning and losing. Maxwell Air Force Base, AL: Air University Library Document No. M-U 43947.

  • Chopra, S., & White, L. W. (2011). A legal theory for autonomous artificial agents. Ann Arbor: The University of Michigan Press.

  • Clough, B. T. (2002). Metrics, schmetrics: How the heck do you determine a UAV’s autonomy anyway. Technical report. Wright-Patterson AFB, OH: Air Force Research Lab.

  • Crnkovic, G. D., & Çürüklü, B. (2012). Robots—Ethical by design. Ethics and Information Technology, 14(1), 61–71.

  • Crnkovic, G. D., & Persson, D. (2008). Sharing moral responsibility with robots: A pragmatic approach. In P. K. Holst & P. Funk (Eds.), Frontiers in Artificial Intelligence and Applications. Amsterdam: IOS Press Books.

  • Elio, R., & Petrinjak, A. (2005). Normative communication models for agents. Autonomous Agents and Multi-Agent Systems, 11(3), 273–305.

  • Elliott, L., & Stewart, B. (2011). Automation and autonomy in unmanned aircraft systems. Introduction to Unmanned Aircraft Systems (pp. 99–122). Boca Raton: CRC Press.

  • Falcone, R., & Castelfranchi, C. (2001). The human in the loop of a delegated agent: The theory of adjustable social autonomy. IEEE Transactions on Systems, Man and Cybernetics, 31(5), 406–418.

  • Grodzinsky, F. S., Miller, K. W., & Wolf, M. J. (2008). The ethics of designing artificial agents. Ethics and Information Technology, 10, 115–121.

  • Hellstrom, T. (2012). On the moral responsibility of military robots. Ethics and Information Technology (forthcoming).

  • Huang, H. (2008). Autonomy levels for unmanned systems (ALFUS) framework, volume I: Terminology, version 2.0. NIST SP 1011-I-2.0. Gaithersburg, MD: National Institute of Standards and Technology.

  • Huang, H., Messina, E., & Albus, J. (2003). Autonomy level specification for intelligent autonomous vehicles: Interim progress report. In Proceedings of the performance metrics for intelligent systems (PerMIS) workshop, September 16–18, 2003, Gaithersburg, MD.

  • Johnson, D. G. (2006). Computer systems: Moral entities but not moral agents. Ethics and Information Technology, 8(4), 195–204.

  • Johnson, D. G., & Powers, T. M. (2005). Computer systems and responsibility: A normative look at technological complexity. Ethics and Information Technology, 7(2), 99–107.

  • Khaleghi, B., Khamis, A., Fakhreddine, O. K., & Razavi, S. N. (2013). Multisensor data fusion: A review of the state-of-the-art. Information Fusion, 14(1), 28–44.

  • Lee, N., & Brown, S. (1994). Otherness and the actor network. American Behavioral Scientist, 37(6), 772–790.

  • Lin, P., Bekey, G., & Abney, K. (2008). Autonomous military robots: Risk, ethics, and design. http://ethics.calpoly.edu/ONR_report.pdf. Accessed October 14, 2011.

  • Luck, M., McBurney, P., Shehory, O., & Willmot, S. (2005). Agent technology: A roadmap for agent based computing. AgentLink. http://www.agentlink.org/roadmap/. Accessed February 12, 2014.

  • Luck, M., Munroe, S., & d’Inverno, M. (2003). Autonomy: Variable and generative. In H. Hexmoor, C. Castelfranchi, & R. Falcone (Eds.), Agent Autonomy (pp. 9–22). Dordrecht: Kluwer.

  • Marino, D., & Tamburrini, G. (2006). Learning robots and human responsibility. International Review of Information Ethics, 6, 46–51.

  • Marra, W. C., & McNeil, S. K. (2013). Understanding ‘The Loop’: Regulating the next generation of war machines. Harvard Journal of Law and Public Policy, 36(3). http://ssrn.com/abstract=2043131.

  • Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175–183.

  • Murphy, R. R., & Woods, D. D. (2009). Beyond Asimov: The three laws of responsible robotics. IEEE Intelligent Systems, 24(4), 14–20.

  • Nagenborg, M., Capurro, R., Weber, J., & Pingel, C. (2008). Ethical regulations on robotics in Europe. AI & SOCIETY, 22, 349–366.

  • Noorman, M. (2009). Mind the gap: A critique of human/technology analogies in artificial agent discourse. Maastricht, The Netherlands: Universitaire Pers Maastricht.

  • Noorman, M. (2013). Responsibility practices and unmanned military technologies. Science and Engineering Ethics. doi:10.1007/s11948-013-9484-x.

  • Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors: The Journal of the Human Factors Society, 39(2), 230–253.

  • Perrow, C. B. (1999). Normal accidents: Living with high-risk technologies (2nd ed.). Princeton, NJ: Princeton University Press.

  • Schulzke, M. (2012). Autonomous weapons and distributed responsibility. Philosophy and Technology. http://link.springer.com/article/10.1007%2Fs13347-012-0089-0. Accessed December 14, 2012.

  • Sheridan, T. B. (1992). Telerobotics, automation, and human supervisory control. Cambridge, MA: MIT Press.

  • Sheridan, T. B., & Verplank, W. (1978). Human and computer control of undersea teleoperators. Cambridge, MA: Man–Machine Systems Laboratory, Department of Mechanical Engineering, MIT.

  • Singer, P. (2009). Wired for war: The robotics revolution and conflict in the 21st century. New York, NY: Penguin.

  • Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.

  • U.S. Air Force Chief Scientist. (2010). Report on technological horizons: A vision for air force science & technology during 2010–2030. Vol 1. AF/ST-TR-10-01-PR, May 15, 2010.

  • U.S. Department of Defense. (2009). FY2009-2034 Unmanned systems integrated roadmap. http://www.acq.osd.mil/psa/docs/UMSIntegratedRoadmap-2009.pdf. Accessed September 20, 2011.

  • U.S. Department of Defense. (2011). FY2011-2036 Unmanned systems integrated roadmap. http://www.acq.osd.mil/sts/docs/UnmannedSystemsIntegrated-RoadmapFY2011-2036.pdf. Accessed January 3, 2012.

  • U.S. Department of Defense. (2012). Task force report: The role of autonomy in DoD systems. http://www.fas.org/irp/agency/dod/dsb/autonomy.pdf. Accessed November 5, 2012.

  • Wallach, W. (2013). Terminating the terminator: What to do about autonomous weapons. http://ieet.org/index.php/IEET/more/wallach20130129. Posted January 28, 2013. Accessed February 2, 2013.

  • Wallach, W. & Allen, C. (2013). Framing robot arms control. Ethics and Information Technology, 15(2), 125–135.

Acknowledgments

This paper was written with support from the National Science Foundation under Grant No. 1058457. Any opinions, findings, and conclusions or recommendations expressed are those of the authors and do not necessarily reflect the views of the National Science Foundation. The authors would like to thank the anonymous reviewers for their helpful feedback on earlier versions of this paper.

Author information

Corresponding author

Correspondence to Merel Noorman.

About this article

Cite this article

Noorman, M., Johnson, D.G. Negotiating autonomy and responsibility in military robots. Ethics Inf Technol 16, 51–62 (2014). https://doi.org/10.1007/s10676-013-9335-0

Keywords

  • Autonomy
  • Responsibility
  • Military robots