Ethics and Information Technology, Volume 14, Issue 4, pp 241–254

Intending to err: the ethical challenge of lethal, autonomous systems

Original Paper


Current precursors in the development of lethal, autonomous systems (LAS) point to the use of biometric devices for assessing, identifying, and verifying targets. The inclusion of biometric devices entails the use of a probabilistic matching program that requires the deliberate targeting of noncombatants as a statistically necessary function of the system. While the tactical employment of the LAS may be justified on the grounds that the deliberate killing of a smaller number of noncombatants is better than the accidental killing of a larger number, it may nonetheless contravene a reemerging conception of right intention. Originally framed by Augustine of Hippo, this lesser-known formulation has served as the foundation for chivalric code, canon law, jus in bello criteria, and the law of armed conflict. It thus serves as a viable measure of whether the use of lethal autonomy would accord with these other laws and principles. Specifically, examinations of the LAS through the lenses of collateral damage, the principle of double effect, and the principle of proportionality reveal the need for more attention to be paid to the moral issues now, so that the promise of this emerging technology—that it will perform better than human beings—might actually come to pass.


Keywords: Biometrics · Lethal autonomy · Tolerance for error · Right intention

Copyright information

© Springer Science+Business Media Dordrecht (outside the USA) 2012

Authors and Affiliations

  1. Department of Philosophy, United States Air Force Academy, Colorado Springs, USA
