The future of war: could lethal autonomous weapons make conflict more ethical?


Lethal Autonomous Weapons (LAWs) are robotic weapon systems, primarily of military value, that can engage in offensive or defensive actions without human intervention. This paper assesses and engages the current arguments for and against the use of LAWs through the lens of achieving more ethical warfare. Particular attention is given to ethical LAWs: artificially intelligent weapon systems that make decisions within the bounds of their ethics-based code. To provide a wide, though not exhaustive, survey of the implications of employing such ethical devices to replace humans in warfare, this paper engages with current scholarship on the rejection or acceptance of LAWs—including the present technological shortcomings of LAWs in differentiating between targets, and the behavioral and psychological volatility of humans—as well as current and proposed regulatory infrastructures for developing and using such devices. After careful consideration of these factors, the paper concludes that only ethical LAWs should be used to replace human involvement in war and that, given the consistency of their abilities, they should remove humans from war until a better means of conducting ethical warfare is discovered.



  1.

    ‘Autonomous’ in this regard refers to a system being pre-programmed to function independently of human control or supervision; it does not presuppose autonomy as a construct of consciousness of the kind attributed to human agents.

  2.

    Although some issues within the command and control infrastructure can arise from such an abdication of strategic targeting to LAWs, the philosophical issues at play in this paper remain unaffected given the approach taken. Technical and legislative measures to address these issues must obviously take precedence when aiming to resolve them. For a more in-depth discussion, see Roff (2014).

  3.

    Michal Klincewicz (2015) provides a uniquely thorough account of the psychological differentiation between autonomous weapons systems and humans.

  4.

    ‘Ethical’ in this context, and throughout the paper, is used pragmatically: an ethical LAW is one that functions in accordance with the LoW and RoE. As the paper argues, abiding by these guidelines provides an initial step that can ameliorate unnecessary violence.

  5.

    Value-laden programming here refers to the explicit programming of values into a system. This does not discount the fact that the design of technology always implicates some values, usually those of the designers and engineers who make certain decisions rather than others during the design process.


  1. Arkin R (2015) The case for banning killer robots. Commun ACM 58(12):46–47

  2. Arkin RC (2008) Governing lethal behavior: embedding ethics in a hybrid deliberative/reactive robot architecture part I: motivation and philosophy. In: Proceedings of the 3rd International Conference on Human Robot Interaction (HRI ’08). ACM Press, New York, p 121

  3. Asaro PM (2008) How just could a robot war be? In: Proceedings of the 2008 Conference on Current Issues in Computing and Philosophy. IOS Press, Amsterdam, pp 50–64

  4. Barrat J (2013) Our final invention. Thomas Dunne Books, New York


  5. Baum SD (2015) Winter-safe deterrence: the risk of nuclear winter and its challenge to deterrence. Contemp Secur Policy 36(1):123–148

  6. Boisboissel G (2015) Uses of lethal autonomous weapon systems. In: International Conference on Military Technologies (ICMT) 2015. IEEE, pp 1–6

  7. Bourget D, Chalmers D (2013) What do philosophers believe?. Philos Stud 170(3):465–500


  8. Burke KA, Oron-Gilad T, Conway G, Hancock PA (2007) Friend/foe identification and shooting performance: effects of prior task loading and time pressure. Proc Hum Factors Ergon Soc Annu Meet 51(4):156–160

  9. Chase C (2015) Surviving AI: the promise and peril of artificial intelligence. Three Cs, London


  10. Danielson P (1999) Evolutionary models of cooperative mechanisms: artificial morality and genetic programming. In: Danielson P (ed) Modeling rationality, morality, and evolution. Oxford University Press, pp 423–462

  11. Davidson D (1982) Paradoxes of irrationality. In: Problems of rationality. Oxford University Press, pp 189–198

  12. DeBaets AM (2014) Can a robot pursue the good? Exploring artificial moral agency. J Evol Technol 24(3):76–86

  13. Egeland K (2016) Lethal autonomous weapon systems under international humanitarian law. Nord J Int Law 85(2):89–118.


  14. Ekelhof M, Struyk M (2014) Deadly decisions: 8 objections to killer robots

  15. Geibel A (1997) Learning from their mistakes: Russia’s Arena active protection system

  16. Goertzel B (2016) Infusing advanced AGIs with human-like value systems: two theses. J Evol Technol 26(1):50–72


  17. Guetlein MA (2005) Lethal autonomous weapons—ethical and doctrinal implications. Naval War College, Newport

  18. Guizzo E (2016) Autonomous weapons could be developed for use within years, says arms-control group. IEEE Spectrum

  19. Heyns C (2013) Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions. Human Rights Council, United Nations General Assembly

  20. Higgins E (2017) Obama’s drone war escalating in his last year in office. Huffington Post

  21. ICRAC (2009) Original mission statement. International Committee for Robot Arms Control

  22. ICRAC (2014) 2014 mission statement. International Committee for Robot Arms Control

  23. ICRC (2014) 1980 Convention on certain conventional weapons—factsheet. International Committee of the Red Cross

  24. Jacoby GA, Chang JD (2008) Towards command and control networking of cooperative autonomous robotics for military applications (CARMA). In: 2008 Canadian Conference on Electrical and Computer Engineering. IEEE, pp 815–820

  25. Jenks C (2010) Law from above: unmanned aerial systems, use of force, and the law of armed conflict. N D Law Rev 85:649

  26. Johnson AM, Axinn S (2013) The morality of autonomous robots. J Mil Ethics 12(2):129–141


  27. Altmann J (2008) Military uses of nanotechnology—too much complexity for international security? Complexity 2(1):62–70

  28. Kantrowitz A (1992) The weapon of openness. In: Crandal BC, Lewis J (eds) Nanotechnology research and perspectives. MIT Press, Cambridge, pp 303–311

  29. Katz Y, Lappin Y (2012) Iron Dome ups its interception rate to over 90%. Jerusalem Post

  30. Kirk A (2015) What are the biggest defence budgets in the world? The Telegraph

  31. Klincewicz M (2015b) Autonomous weapons systems, the frame problem and computer security. J Mil Ethics 14(2):162–176

  32. Krishnan A (2009) Killer robots: legality and ethicality of autonomous weapons. Ashgate

  33. Lewis J (2015) The case for regulating fully autonomous weapons. Yale Law J 124:1309

  34. Lin P, Bekey G, Abney K (2008) Autonomous military robotics: risk, ethics, and design

  35. Marauhn T (2014) An analysis of the potential impact of lethal autonomous weapons systems on responsibility and accountability for violations of international law. In: CCW Expert Meeting on Lethal Autonomous Systems, Geneva

  36. Marchant GE, Allenby B, Arkin R, Borenstein J, Gaudet LM, Kittrie O, Lin P, Lucas GR, O’Meara R, Silberman J (2015) International governance of autonomous military robots. In: Valavanis KP, Vachtsevanos GJ (eds) Handbook of unmanned aerial vehicles. Springer, Dordrecht, pp 2879–2910

  37. McLean W (2014) Drones are cheap, soldiers are not: a cost-benefit analysis of war. The Conversation

  38. Mills MJ, Toon OB, Lee-Taylor J, Robock A (2014) Multidecadal global cooling and unprecedented ozone loss following a regional nuclear conflict. Earth’s Future 2(4):161–176

  39. Nadeau JE (2006) Only androids can be ethical. In: KM Ford, CN Glymour, PJ Hayes (eds) Thinking about android epistemology, MIT Press, Boston


  40. Nibbeling N, Oudejans RRD, Ubink EM, Daanen HAM (2014) The effects of anxiety and exercise-induced fatigue on shooting accuracy and cognitive performance in infantry soldiers. Ergonomics 57(9):1366–1379

  41. O’Meara RM (2011) Contemporary governance architecture regarding robotics technologies: an assessment. In: Lin P, Abney K, Bekey GA (eds) Robot ethics: the ethical and social implications of robotics. MIT Press, Cambridge, pp 159–168

  42. Pinch T, Bijker WE (1987) The social construction of facts and artifacts. In: Bijker WE, Hughes TP, Pinch T (eds) The social construction of technological systems: new directions in the sociology and history of technology. MIT Press, Cambridge, p 405

  43. Roff HM (2014) The strategic robot problem: lethal autonomous weapons in war. J Mil Ethics 13(3):211–227

  44. Sauer F (2016) Stopping ‘killer robots’: why now is the time to ban autonomous weapons systems. Arms Control Association

  45. Shachtman N (2007) Robo-snipers, ‘auto kill zones’ to protect Israeli border. Wired

  46. Sharkey NE (2010) Saying ‘No!’ to lethal autonomous targeting. J Mil Ethics 9(4):369–383

  47. Sharkey NE (2012) The evitability of autonomous robot warfare. Int Rev Red Cross 94(886):787–799

  48. Shulman C, Jonsson H, Tarleton N (2009) Machine ethics and superintelligence. In: AP-CAP 2009: The Fifth Asia-Pacific Computing and Philosophy Conference, Tokyo, pp 95–97

  49. Singer PW (2009a) Military robots and the laws of war. The New Atlantis

  50. Singer PW (2009b) Wired for war: the robotics revolution and conflict in the 21st century. Ethics Int Aff 23:312–313

  51. Soares N (2016) The value learning problem. In: Ethics for Artificial Intelligence Workshop at the 25th International Joint Conference on Artificial Intelligence. Machine Intelligence Research Institute, pp 1–8

  52. Tarleton N (2010) Coherent extrapolated volition: a meta-level approach to machine ethics

  53. Thurnher JS (2012) Legal implications of fully autonomous targeting. Jt Force Q 67:77–84

  54. Thurnher JS (2013) The law that applies to autonomous weapon systems. ASIL Insights 17(4)

  55. Thurnher JS (2016) Means and methods of the future: autonomous systems. In: Ducheine PAL, Schmitt MN, Osinga FPB (eds) Targeting: the challenges of modern warfare. TMC Asser Press, The Hague, pp 177–199

  56. United Nations (1979) Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (AP I), vol 1125

  57. United States Navy (2017) MK 15—Phalanx Close-In Weapons System (CIWS). The US Navy—Fact File

  58. Varden H (2010) Kant and lying to the murderer at the door… one more time: Kant’s legal philosophy and lies to murderers and Nazis. J Soc Philos 41(4):403–421

  59. Wallach W, Allen C, Smit I (2008) Machine morality: bottom-up and top-down approaches for modelling human moral faculties. AI Soc 22(4):565–582.


  60. Walzer M (1991) Just and unjust wars: a moral argument with historical illustrations

  61. Whitman J (2011) The arms control challenges of nanotechnology. Contemp Secur Policy 32(1):99–115

  62. Wilson W (2012) The Pennyfarthing H-Bomb. The World Today

  63. Wilson G (2013) Minimizing global catastrophic and existential risks from emerging technologies through international law. Va Environ Law J 31(1):307–364

  64. Work B (2015) The third US offset strategy and its implications for partners and allies. US Department of Defense

  65. Xia L, Robock A, Mills M, Stenke A, Helfand I (2015) Decadal reduction of Chinese agriculture after a regional nuclear war. Earth’s Future 3(2):37–48


  1. Davis J, Nathan LP (2015) Value sensitive design: applications, adaptations, and critiques. In: van den Hoven J, Vermaas PE, van de Poel I (eds) Handbook of ethics, values, and technological design: sources, theory, values and application domains. Springer, Berlin, pp 12–40

  2. Klincewicz M (2015a) Autonomous weapons systems, the frame problem and computer security. J Mil Ethics 14(2):162–176

  3. Manders-Huits N (2011) What values in design? The challenge of incorporating moral values into design. Sci Eng Ethics 17(2):271–287

  4. McClelland J (2003) The review of weapons in accordance with Article 36 of Additional Protocol I

  5. Pereira LM, Saptawijaya A (2007) Modelling morality with prospective logic. In: Neves J, Santos MF, Machado JM (eds) Progress in Artificial Intelligence: 13th Portuguese Conference on Artificial Intelligence, EPIA 2007. Springer, Berlin, Heidelberg, pp 99–111

  6. Winner L (2003) Do artifacts have politics? Technol Future 109(1):148–164


Author information



Corresponding author

Correspondence to Steven Umbrello.



About this article


Cite this article

Umbrello, S., Torres, P. & De Bellis, A.F. The future of war: could lethal autonomous weapons make conflict more ethical? AI & Soc 35, 273–282 (2020).



  • Lethal autonomous weapons
  • Artificial intelligence
  • Military robots
  • Ethics
  • Laws of war