Intending to err: the ethical challenge of lethal, autonomous systems

Original Paper, published in Ethics and Information Technology.

Abstract

Current precursors in the development of lethal, autonomous systems (LAS) point to the use of biometric devices for assessing, identifying, and verifying targets. The inclusion of biometric devices entails the use of a probabilistic matching program that requires the deliberate targeting of noncombatants as a statistically necessary function of the system. While the tactical employment of the LAS may be justified on the grounds that the deliberate killing of a smaller number of noncombatants is better than the accidental killing of a larger number, it may nonetheless contravene a reemerging conception of right intention. Originally framed by Augustine of Hippo, this lesser-known formulation has served as the foundation for chivalric code, canon law, jus in bello criteria, and the law of armed conflict. Thus it serves as a viable measure to determine whether the use of lethal autonomy would accord with these other laws and principles. Specifically, examinations of the LAS through the lenses of collateral damage, the principle of double effect, and the principle of proportionality reveal the need for more attention to be paid to the moral issues now, so that the promise of this emerging technology—that it will perform better than human beings—might actually come to pass.

Notes

  1. For example, Plato’s guardians were to act as “correctors, not enemies” (Bloom 1991:151), but that was only with regard to other Greeks, and not necessarily extended to barbarians.

  2. Merely providing the vocabulary doesn’t ensure the soldiers’ compliance with right intention, but it does constitute a necessary step in the process of having the soldiers internalize or buy into the concept.

  3. Thus, the “strategic corporal,” a term coined in January 1999 by Gen. Charles C. Krulak in an article for Marines Magazine titled “The Strategic Corporal: Leadership in the Three Block War.” Incidents such as the abuse of prisoners at Abu Ghraib and the massacre of 17 Afghans outside the city of Kandahar exemplify the damage relatively low-ranking individuals can do to the strategic goals of conventional forces.

  4. Others have given this topic the fuller attention it deserves. See, for example, Col (USA) Tony Pfaff’s “Risk, Military Ethics and Irregular Warfare” on the Foreign Policy Research Institute’s website: http://www.fpri.org/enotes/2011/201112.pfaff.irregularwarfare.html.

  5. We should note that risk to combatants wasn’t a concern for Augustine, as combatants had no right to self-defense. Under the early Christian doctrine of that time, warriors fought on behalf of others, not themselves. Their path to salvation was tenuous, as their calling exposed them to grave evil, but so long as they remained properly oriented in battle, they retained their admissibility into a peaceful afterlife. Today, the secular perspective on right intention is more pragmatic.

  6. The dynamic was graphically demonstrated in a televised report by CNN in April of 2003 (prior to the development of COIN operations) when US combat soldiers from the 101st Airborne Division avoided a violent confrontation with a mob of followers of the Grand Ayatollah Sistani in Najaf, Iraq by taking a knee and pointing their weapons down.

  7. In setting the case study that follows, I’ve chosen to focus on the relatively near-term potential for technological development, approximately 5–10 years. There are two reasons for this approach: first, taking a longer view (beyond 10 years) would require the consideration of hypotheticals, such as the development of artificial intelligence, which would require more speculation than concrete analysis. Second, analyzing the ethical issues of employing LAS that may appear on the battlefield in the near future might better serve the larger discussion about whether this technology should be pursued for military applications, and if it is, about how to resolve the ethical issues.

  8. The information on biometrics in sections II and III was derived from the author’s work with the Army Science Board from 2010 to 2012 as a report writer for the Tactical, Non-Cooperative Biometrics Study (Heinz et al. 2012).

  9. Combining biometric data, either across modalities or with that of other systems.

  10. Patrick Lin provides a comprehensive look at several of these issues in his article, “Military 2.0: Ethical Blowback from Emerging Technologies,” published in the December 2010 issue of the Journal of Military Ethics.

  11. The statistics for any match algorithm are derived by developing match score distributions for two populations: a genuine population (in which the subject sample resides in the database) and an imposter population (in which the subject is not in the database). These two distributions overlap (if they did not, the algorithm would be perfect in providing correct matches), which means the score for some imposters exceeds the score for some genuine subjects. A threshold can be set to determine the score used to declare a match. If the threshold is lowered, the area under the imposter distribution above the threshold increases, raising the probability of a false match, but the area under the genuine distribution above the threshold also increases, raising the overall probability of declaring a match.
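
To make the trade-off concrete, here is a minimal sketch in Python, assuming normally distributed genuine and imposter score populations; the means, spreads, and threshold values below are illustrative choices by the editor, not measurements from any fielded system.

```python
from scipy.stats import norm

# Hypothetical, illustrative score distributions: genuine subjects score
# higher on average than imposters, but the two distributions overlap,
# so no threshold separates them perfectly.
genuine = norm(loc=70, scale=10)   # subject IS enrolled in the database
imposter = norm(loc=40, scale=10)  # subject is NOT in the database

for threshold in (65, 55, 45):
    false_match = imposter.sf(threshold)  # P(imposter score > threshold)
    true_match = genuine.sf(threshold)    # P(genuine score > threshold)
    print(f"threshold={threshold}: "
          f"false-match rate={false_match:.3f}, true-match rate={true_match:.3f}")
```

As the threshold drops, both probabilities climb together: more declared matches come at the price of more imposters declared as matches.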

  12. In the case of fusion algorithms, one common approach statistically combines match scores (probability of a valid match) from each biometric device. The same problems with transparency exist, so while we know fusion works with the high scores obtained through controlled, cooperative techniques, it’s not clear that the same approach will work with low scores from non-cooperative technology. For example, it may be possible for several low match scores to combine and raise the fused score above the pre-determined threshold, spuriously indicating a valid match.
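
As a sketch of that failure mode, the snippet below uses one textbook fusion rule, a "noisy-OR" over per-modality match probabilities, chosen here purely for illustration; the scores, the threshold, and the function name are hypothetical, not drawn from any deployed system.

```python
from math import prod

MATCH_THRESHOLD = 0.60  # hypothetical fused-score threshold for declaring a match

def fuse_noisy_or(match_probs):
    """Combine per-modality match probabilities, assuming the modalities
    err independently of one another (the noisy-OR rule)."""
    return 1 - prod(1 - p for p in match_probs)

# Low-quality, non-cooperative captures: each modality alone falls well
# short of the threshold...
face, iris, gait = 0.45, 0.50, 0.40

fused = fuse_noisy_or([face, iris, gait])
print(f"fused score = {fused:.3f}")                 # 0.835
print("match declared:", fused >= MATCH_THRESHOLD)  # True

# ...yet the fused score clears the threshold comfortably. The boost is
# licensed by the independence assumption, which low-quality scores from
# non-cooperative sensors may not satisfy.
```

Whether that independence assumption holds for non-cooperative captures is exactly the transparency question the note raises.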

  13. We’ve already established one positive outcome, the reduction, or more accurately, removal of risk to the combatant, but that fails to satisfy our present need, because it merely recirculates the question of our intention. That is, if right intention is demonstrated through increased acceptance of combatant risk, and the LAS removes combatant risk, then we need some account of our intention in fielding the LAS.

References

  • Augustine. (1986). Concerning the City of God against the Pagans. New York: Penguin Classics.

  • Bellamy, A. J. (2006). Just wars, from Cicero to Iraq. Malden, MA: Polity Press.

  • Bloom, A. D. (Trans.) (1991). The republic of Plato, 2nd edn. New York: Basic Books.

  • Department of the Army/US Marine Corps. Headquarters Department of the Army/Marine Corps Combat Development Command (2006). FM 3-24/MCWP 3-33.5, Counterinsurgency. Washington, DC.

  • Finn, P. (2011, September 15). A future for drones: Automated killing. Washington Post. Retrieved from http://www.washingtonpost.com/national/national-security/a-future-for-drones-automated-killing/2011/09/15/gIQAVy9mgK_story.html.

  • Heinz, M. H., Jette, B., Smith, T. B., & Department of the Army, Assistant Secretary of the Army (Acquisition, Logistics and Technology). (2012). Tactical non-cooperative biometric systems. Fort Belvoir: Defense Technical Information Center.

  • Johnson, J. T. (1975). Ideology, Reason, and the Limitation of War: Religious and Secular Concepts, 1200–1740. Princeton, NJ: Princeton University Press.

  • McChrystal, S. (2009). International Security Assistance Force (ISAF) Tactical Directive, 6 July 2009.

  • Orend, B. (2006). The morality of war. Peterborough, ON: Broadview Press.

  • Lin, P. (2010). Ethical blowback from emerging technologies. Journal of Military Ethics, 9(4), 313–331.

  • Reichberg, G. M., Syse, H., & Begby, E. (2008). The Ethics of War: Classic and Contemporary Readings. Malden, MA: Wiley-Blackwell.

  • Schaff, P. (1886). Letter CXXXIII, to Marcellinus. In T. Perrine (Ed.), Nicene and Post-Nicene Fathers Series I, Vol. 1. New York: Christian Literature Publishing Co. Retrieved from http://www.ccel.org/ccel/schaff/npnf101.vii.1.CXXXIII.html.

  • Schaff, P. (1887). Book XXII, 76, Reply to Faustus the Manichæan. In T. Perrine (Ed.), Nicene and Post-Nicene Fathers Series I, Vol. 4. New York: Christian Literature Publishing Co. Retrieved from http://www.ccel.org/ccel/schaff/npnf104.iv.ix.xxiv.html.

  • Shaw, P. M. (1997). Collateral Damage and the US Air Force. Informally published manuscript, School of Advanced Airpower Studies, Air University, Montgomery, Alabama, USA.

  • Sullins, J. P. (2006). When is a robot a moral agent? International Review of Information Ethics, 6(12/2006), 24–29. Retrieved from http://www.i-r-i-e.net/inhalt/006/006_Sullins.pdf.

  • Turing, A. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.

  • Walker, R. J., & Minister of National Defence, Directorate of Land Concepts and Design. (2009). Duty with discernment: CLS guidance on ethics in operations. Ottawa, Canada: Directorate Army Public Affairs.

  • Walzer, M. (1977). Just and unjust wars (3rd ed.). New York: Basic Books.


Acknowledgments

The author is indebted to Ms. Vivian Baylor, Study Manager at the Army Science Board, and Mr. Michael Heinz, Co-Chair of the Tactical, Non-Cooperative Biometric Systems Study (2010–2011), for their gracious support of this paper, as well as to Dr. Stephanie Schuckers, for her discerning comments and the invitation to present these issues at the Biometrics Consortium Conference (2011). Comments and criticism were also gratefully received from attendees of the International Society of Military Ethics Conference (2011), and from colleagues in the Department of Philosophy at the United States Air Force Academy. The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of the United States Air Force Academy, the Air Force, the US Army, the Department of Defense or the US Government.

Author information

Correspondence to Mark S. Swiatek.

About this article

Swiatek, M.S. Intending to err: the ethical challenge of lethal, autonomous systems. Ethics Inf Technol 14, 241–254 (2012). https://doi.org/10.1007/s10676-012-9302-1
