
Defining semi-autonomous, automated and autonomous weapon systems in order to understand their ethical challenges

  • Original Article, published in Digital War

Abstract

There are many misunderstandings surrounding what have been labelled “autonomous killing robots”. Indeed, the robotization of weapons is a complex issue that requires a clear conceptualization of the various types of weapons currently used by militaries. This article offers a typology of these weapon systems by distinguishing between semi-autonomous, automated and autonomous weapons. This distinction allows for a better understanding of the ethical challenges associated with these systems.


Notes

  1. The Predator was retired from active service by the US military in early 2018.

  2. The autonomous nature of the SGR-A1 defence system has been hotly debated in recent years. Although a spokesperson for Samsung Techwin, the company that developed the weapon, said in 2010 that the weapon “cannot automatically fire at detected foreign objects or figures”, it had been revealed three years earlier that the technology could engage targets on its own, without human intervention.

  3. Noel Sharkey is one of the few who adequately engages with the problem of labelling weapons such as drones as autonomous (Sharkey 2010, 376).

  4. This is why “someone who does not know the difference between right and wrong is not a moral agent and not appropriately censured for her behaviours. This is, of course, why we do not punish people with severe cognitive disabilities like a psychotic condition that interferes with the ability to understand the moral character of her behaviour” (Einar Himma 2009, 23).

  5. It was estimated that global spending on military robotics would reach around USD 7.5 billion a year in 2018.

  6. It took almost 40 years before a computer was able to beat a chess Grand Master, while it was originally predicted that it would take 10 years (Krishnan 2009, 47).

  7. As Leveringhaus puts it correctly, “the act of programming negates any autonomy in a philosophical sense” (Leveringhaus 2016, 48).

  8. As noted by Vincent Conitzer, a Professor of Computer Science at Duke University who is working on allowing AI to make moral judgments, “Recently, there have been a number of steps towards such a system, and I think there have been a lot of surprising advances (…) but I think having something like a "true AI", one that’s really as flexible, able to abstract, and do all these things that humans do so easily, I think we’re still quite far away from that” (Creighton 2016).

  9. This short film depicts a future in which autonomous drones go berserk and turn against US Senators and university students.

  10. During Operation Pillar of Defense, the system successfully intercepted 84% of the rockets and mortars fired against Israel, a success rate that reached 91% during the first part of the operation.

References

  • Caron, Jean-François. 2018. A Theory of the Super Soldier: The Morality of Capacity-Increasing Technologies in the Military. Manchester: Manchester University Press.


  • Caron, Jean-François. 2019. The War of the Machines: Contemporary Technologies and the Morality of Warfare. London: Routledge.


  • Creighton, Jolene. 2016. The Evolution of AI: Can Morality be Programmed?. Futurism, 1 July. https://futurism.com/the-evolution-of-ai-can-morality-be-programmed/.

  • Einar Himma, Kenneth. 2009. Artificial Agency, Consciousness, and the Criteria for Moral Agency: What Properties Must an Artificial Agent have to be a Moral Agent? Ethics and Information Technology 11 (1): 19–29.


  • Human Rights Council. 2015. Report of the detailed findings of the independent commission of inquiry established pursuant to Human Rights Council resolution S-21/1. 29th session.

  • Krishnan, Armin. 2009. Killer Robots: Legality and Ethicality of Autonomous Weapons. London: Ashgate.


  • Leveringhaus, Alex. 2016. Ethics and Autonomous Weapons. Oxford: Palgrave Macmillan.


  • Minsky, Marvin. 1968. Semantic Information Processing. Cambridge, MA: The MIT Press.


  • Scharre, Paul. 2018. Army of None: Autonomous Weapons and the Future of War. New York, London: W.W. Norton & Company.


  • Sharkey, Noel. 2010. Saying No! To Lethal Autonomous Targeting. Journal of Military Ethics 9 (4): 369–383.


  • Sparrow, Robert. 2007. Killer Robots. Journal of Applied Philosophy 24 (1): 62–77.


  • Turner, Jacob. 2019. Robot Rules: Regulating Artificial Intelligence. London: Palgrave Macmillan.


  • Turner, Julian. 2018. Sea Hunter: Inside the US Navy’s Autonomous Submarine Tracking Vessel. Naval Technology, 3 May. https://www.naval-technology.com/features/sea-hunter-inside-us-navys-autonomous-submarine-tracking-vessel/.


Author information

Correspondence to Jean-François Caron.

Additional information

This text is an extract from Caron (2019).


Cite this article

Caron, JF. Defining semi-autonomous, automated and autonomous weapon systems in order to understand their ethical challenges. Digi War 1, 173–177 (2020). https://doi.org/10.1057/s42984-020-00028-5

