
The Law of Armed Conflict Issues Created by Programming Automatic Target Recognition Systems Using Deep Learning Methods

Yearbook of International Humanitarian Law, Volume 21 (2018)

Part of the book series: Yearbook of International Humanitarian Law (YIHL, volume 21)

Abstract

Deep learning is a method of machine learning which has advanced several headline-grabbing technologies, from self-driving cars to systems recognising mental health issues in medical data. Due to these successes, its capabilities in image and target recognition are currently being researched for use in armed conflicts. However, this programming method contains inherent limitations, including the inability of the resultant algorithms to comprehend context and the near impossibility for humans of understanding the algorithms’ decision-making processes. This can create the appearance that the algorithms are functioning as intended even when they are not. This chapter examines these problems, amongst others, with regard to the potential use of deep learning to programme automatic target recognition systems, which may be used in an autonomous weapon system during an armed conflict. It evaluates how the limitations of deep learning affect the ability of these systems to perform target recognition in compliance with the law of armed conflict. Ultimately, this chapter concludes that whilst there are some very narrow circumstances in which these algorithms could be used in compliance with targeting rules, there are significant risks of unlawful targets being selected. Further, these algorithms impair the exercise of legal duties by autonomous weapon system operators, commanders, and weapons reviewers. As such, deep learning-generated algorithms should not be used for target recognition by fully-autonomous weapon systems in armed conflicts unless they can be made to understand the context of targeting decisions and to be explainable.

Joshua G. Hughes is a Ph.D. Candidate at Lancaster University Law School. The research for this chapter was carried out through a studentship grant from the North-West Consortium Doctoral Training Partnership, funded by the UK Arts and Humanities Research Council. He would like to thank Professor James Sweeney for his assistance and advice in writing this chapter. He would also like to thank the two anonymous reviewers for their very helpful comments. All errors remain the author’s own.


Notes

  1. 1.

    Steinberg R (2017) 6 areas where artificial neural networks outperform humans. https://venturebeat.com/2017/12/08/6-areas-where-artificial-neural-networks-outperform-humans/. Accessed 1 February 2019.

  2. 2.

    See Geirhos et al. 2018, p. 1; Sharma N, Blumenstein M (2018) SharkSpotter combines AI and drone technology to spot sharks and aid swimmers on Australian beaches. https://theconversation.com/sharkspotter-combines-ai-and-drone-technology-to-spot-sharks-and-aid-swimmers-on-australian-beaches-92667. Accessed 1 February 2019.

  3. 3.

    Marcus 2018, p. 2.

  4. 4.

    Hawkins A (2018) Inside the Lab where Waymo is Building the Brains for its Driverless Cars. https://www.theverge.com/2018/5/9/17307156/google-waymo-driverless-cars-deep-learning-neural-net-interview. Accessed 1 February 2019.

  5. 5.

    Silver et al. 2018. Also note Silver et al. 2017.

  6. 6.

    Kasparov 2018, p. 265.

  7. 7.

    Boulanin and Verbruggen 2017, p. 24. See, e.g., Handy 2007, p. 87.

  8. 8.

    Boulanin and Verbruggen 2017, p. 26.

  9. 9.

    SBIR 2018. Note that the term “uninhabited” is used rather than “unmanned” in order to avoid any potential bias associated with using a gendered term, and also to avoid any inference that “unmanned” could mean fully-autonomous. See Leveringhaus 2016, p. 3.

  10. 10.

    SBIR 2018.

  11. 11.

    TASS (2017) Kalashnikov gunmaker develops combat module based on artificial intelligence. http://tass.com/defense/954894. Accessed 1 February 2019.

  12. 12.

    See, e.g., Rogers et al. 1995.

  13. 13.

    See, e.g., Furukawa 2018.

  14. 14.

    SBIR 2018.

  15. 15.

    TASS (2017) Kalashnikov gunmaker develops combat module based on artificial intelligence. http://tass.com/defense/954894. Accessed 1 February 2019.

  16. 16.

    US Department of Defense 2012, p. 13.

  17. 17.

    Ibid., p. 14.

  18. 18.

    For an overview of the major issues, see Bhuta et al. 2016.

  19. 19.

    Scharre 2018, p. 91.

  20. 20.

    Boden 2016, p. 1.

  21. 21.

    Jajal 2018.

  22. 22.

    Ibid.

  23. 23.

    Ibid.

  24. 24.

    Boden 2016, pp. 6 f.

  25. 25.

    Fry 2018, pp. 10 f.

  26. 26.

    Ibid., pp. 10 f.

  27. 27.

    Ibid.

  28. 28.

    Ibid.

  29. 29.

    Descriptions of various data types in a multitude of algorithms are discussed in Fry 2018.

  30. 30.

    Boden 2016, p. 47.

  31. 31.

    Ibid., p. 80.

  32. 32.

    Ibid., p. 49; Goodfellow et al. 2016, p. 87.

  33. 33.

    Boden 2016, p. 49.

  34. 34.

    Marcus 2018, pp. 2–3. To see what an example deep learning algorithm looks like, and to use one yourself, see Deep Learning Playground 2019.
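For readers who cannot access the Playground, the following minimal sketch, offered here purely as an illustration and not drawn from any of the cited sources, shows the basic shape of such an algorithm: a small neural network whose weights are repeatedly adjusted by gradient descent until it reproduces the XOR function. All values and names are illustrative assumptions.

```python
# Illustrative sketch only: a tiny neural network (one hidden layer) trained by
# gradient descent to reproduce the XOR function. Requires only numpy.
import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR truth table (inputs and target outputs).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for a hidden layer of four units and a single output unit.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(20000):
    # Forward pass: inputs -> hidden layer -> predicted output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the prediction error through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates of every weight and bias.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 3).ravel())  # approaches [0, 1, 1, 0] as training proceeds
```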

  35. 35.

    Fry 2018, p. 86.

  36. 36.

    Scharre 2018, pp. 325 f.; Etzioni and Etzioni 2017; Press 2017, p. 1361.

  37. 37.

    Scharre 2018, pp. 327–330.

  38. 38.

    Alston 2011, p. 43.

  39. 39.

    Scharre 2018, pp. 328–330.

  40. 40.

    See Gill 2018.

  41. 41.

    See UNOG 2018 and links. These discussions took place under the auspices of the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, opened for signature 10 April 1981, 1342 UNTS 137 (entered into force 2 December 1983).

  42. 42.

    For more on the movement towards prohibiting AWS, see Campaign to Stop Killer Robots 2019a.

  43. 43.

    “Consensus” meaning the absence of formal disagreement. Definition taken from United Nations Convention on the Law of the Sea, opened for signature on 10 December 1982, 1833 UNTS 3 (entered into force 16 November 1994), Article 161(8)(e). For the disagreements, see Campaign to Stop Killer Robots 2018a, 2018b.

  44. 44.

    Boothby 2019, pp. 145–150.

  45. 45.

    See, e.g., Schmitt 2013; Heyns 2017; Chengeta 2016; Anderson et al. 2014; Wagner 2014; Crootof 2015.

  46. 46.

    See, e.g., Asaro 2012; Sparrow 2007; Robillard 2017; Leveringhaus 2016.

  47. 47.

    See, e.g., Sharkey 2017; Arkin 2009.

  48. 48.

    For an overview of state and some NGO views on characterising AWS, see Gill 2018.

  49. 49.

    Boulanin and Verbruggen 2017.

  50. 50.

    See, e.g., Human Rights Watch 2012; Campaign to Stop Killer Robots 2019b.

  51. 51.

    Whilst this chapter is focussed upon algorithms, it does not use the “war algorithm” concept. Although this is an important concept, considering “war algorithms” as “any algorithm […] capable of operating in relation to armed conflict” does not enable greater understanding of the issues emanating from deep learning. See Lewis et al. 2016.

  52. 52.

    See, e.g., Cummings 2017; Boothby 2016, pp. 251 f.; Boothby 2019, pp. 150 f.; Heyns 2017.

  53. 53.

    See, e.g., Sejnowski 2018, pp. 7–8.

  54. 54.

    Scharre 2018, pp. 124–130.

  55. 55.

    Boulanin and Verbruggen 2017, pp. 17, 25–26, 114, 120.

  56. 56.

    iPRAW 2017, p. 12.

  57. 57.

    Farrant and Ford 2017, pp. 399–404.

  58. 58.

    Ibid., p. 404.

  59. 59.

    See iPRAW 2017, p. 11 on off-line and on-line learning.

  60. 60.

    Brandom R (2018) Self-Driving Cars are Headed toward an AI Roadblock. https://www.theverge.com/2018/7/3/17530232/self-driving-ai-winter-full-autonomy-waymo-tesla-uber. Accessed 1 February 2019; Brown J (2018) IBM Watson Reportedly Recommended Cancer Treatments that were ‘Unsafe and Incorrect’. https://gizmodo.com/ibm-watson-reportedly-recommended-cancer-treatments-tha-1827868882. Accessed 1 February 2019.

  61. 61.

    See, e.g., Marcus 2018.

  62. 62.

    Marcus 2018, pp. 7 f.

  63. 63.

    Boulanin and Verbruggen 2017, p. 17.

  64. 64.

    Nguyen et al. 2015; Szegedy et al. 2014.

  65. 65.

    See, e.g., Jackson 2018.

  66. 66.

    Sharkey 2018.

  67. 67.

    See, e.g., Common Article 3(1) to the Geneva Conventions, for example Geneva Convention (I) for the Amelioration of the Condition of the Wounded and Sick in Armed Forces in the Field, opened for signature 12 August 1949, 75 UNTS 31 (entered into force 21 October 1950); Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), opened for signature 8 June 1977, 1125 UNTS 3 (entered into force 7 December 1978) (AP I), Article 85(4)(c). Also note AP I, Articles 9(1), 75(1).

  68. 68.

    iPRAW 2017, pp. 11 f., 17; Barocas and Selbst 2016; Ford 2017, p. 460; Boulanin and Verbruggen 2017, p. 17; Marcus 2018, pp. 6 f.

  69. 69.

    Under-fitting is where a model is too simple to capture the patterns in its training data, such that nothing can be adequately recognised. Over-fitting is where an algorithm is fitted too closely to its training data and can only recognise that data. Both lead to sub-optimal performance; a short illustrative sketch follows this note. See Brownlee 2016.
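A minimal sketch of the under- and over-fitting described above, assuming only Python and numpy; the sine curve, noise level, and polynomial degrees are arbitrary choices made for illustration.

```python
# Illustrative sketch only: polynomials of different degrees fitted to a small
# noisy sample of a sine curve, with errors measured on unseen test points.
import numpy as np

rng = np.random.default_rng(1)

def noisy_sine(n):
    x = np.sort(rng.uniform(0, 2 * np.pi, n))
    return x, np.sin(x) + rng.normal(scale=0.2, size=n)

x_train, y_train = noisy_sine(15)   # a small training set
x_test, y_test = noisy_sine(200)    # held-out data the model never sees

for degree in (1, 4, 14):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train error {train_err:.3f}, test error {test_err:.3f}")

# Typical outcome: degree 1 under-fits (large error on both sets), degree 14
# over-fits (near-zero training error, much larger test error), and a moderate
# degree generalises best.
```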

  70. 70.

    Marcus 2018, pp. 6, 13. Techniques such as transfer learning and domain adaptation can allow algorithms to be used in environments different from those they were trained for, but they only work with simple data sets and so are inappropriate for use with data from constantly changing armed conflicts; a hedged sketch of transfer learning follows this note. See Goodfellow et al. 2016, pp. 526 ff.; Domingos 2015, p. 115.
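A hedged sketch of the transfer learning technique mentioned above, assuming PyTorch and torchvision (version 0.13 or later) are available; the ImageNet-pretrained resnet18, the three-class new task, and the random tensors standing in for new images are illustrative assumptions rather than anything used in the cited sources.

```python
# Illustrative sketch only: transfer learning by freezing a pretrained image
# model and training a new classification head for a different task.
import torch
import torch.nn as nn
from torchvision import models

num_new_classes = 3  # hypothetical number of classes in the new domain

# Load a network pretrained on ImageNet (torchvision >= 0.13 API).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained backbone so its weights are not updated.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully-connected layer with one sized for the new task.
model.fc = nn.Linear(model.fc.in_features, num_new_classes)

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on random tensors standing in for real images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_new_classes, (8,))

optimiser.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimiser.step()
print(loss.item())
```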

  71. 71.

    Scharre 2018, pp. 182–184; Szegedy et al. 2014; Nguyen et al. 2015; Su et al. 2018.

  72. 72.

    Scharre 2018, pp. 181 f.

  73. 73.

    This is the comparison between the concrete and direct military advantage to be expected from an operation and the level of incidental harm to civilians, the civilian population, and civilian objects, or a combination thereof. An excessive level of civilian harm in relation to the military advantage would be unlawful. See AP I, above n 67, Articles 51(5)(b), 57(2)(b); Henckaerts and Doswald-Beck 2005, Rule 14.

  74. 74.

    Marcus 2018, pp. 7 f.

  75. 75.

    Handy 2007, p. 87.

  76. 76.

    Srinivasan 2016; Abate T (2013) Stanford algorithm analyzes sentence sentiment, advances machine learning. https://engineering.stanford.edu/magazine/article/stanford-algorithm-analyzes-sentence-sentiment-advances-machine-learning. Accessed 1 February 2019.

  77. 77.

    Characteristics recognised by deep learning systems would not easily apply to concepts of “justice” or “democracy”, or even to notions of military advantage, for example. See Marcus 2018, p. 7.

  78. 78.

    Srinivasan 2016.

  79. 79.

    Anderson et al. 2014, pp. 388–395.

  80. 80.

    Fry 2018, pp. 10 f.

  81. 81.

    Domingos 2015, p. 117.

  82. 82.

    Marcus 2018, p. 11.

  83. 83.

    This would be different from “codespace”, which relates to how physical spaces are altered by technology. See Kitchin and Dodge 2011.

  84. 84.

    Nguyen et al. 2016.

  85. 85.

    Marcus 2018, p. 11.

  86. 86.

    Hussain A (2016) AI On The Battlefield: A Framework For Ethical Autonomy. https://www.forbes.com/sites/forbestechcouncil/2016/11/28/ai-on-the-battlefield-a-framework-for-ethical-autonomy/#767535675cf2. Accessed 1 February 2019.

  87. 87.

    Ananny and Crawford 2016, p. 9; Boulanin and Verbruggen 2017, p. 17; Marcus 2018, pp. 10–11.

  88. 88.

    Ananny and Crawford 2016, p. 7.

  89. 89.

    See Kimball W (2019) Why Is It Called A Black Box If It’s Actually Orange? http://www.hopesandfears.com/hopes/now/question/168795-why-is-it-called-a-black-box-if-it-s-actually-orange. Accessed 1 February 2019.

  90. 90.

    Ananny and Crawford 2016, p. 9.

  91. 91.

    Knight W (2017) The Dark Secret at the Heart of AI. https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/. Accessed 1 February 2019.

  92. 92.

    Yudkowsky 2018, pp. 15 f.; Kaufman 2015.

  93. 93.

    For another example, also see Vanhemert K (2015) Simple Pictures that State-Of-The-Art AI Still Can’t Recognize. https://www.wired.com/2015/01/simple-pictures-state-art-ai-still-cant-recognize/. Accessed 1 February 2019. Further, algorithms that only give a semblance of compliance will still suggest highly-accurate image recognition, which would create additional problems in relation to automation bias (on which, see Sect. 4.6 below). See Nguyen et al. 2015; Szegedy et al. 2014.

  94. 94.

    Gunning 2016, 2017.

  95. 95.

    Gunning 2016.

  96. 96.

    Gunning 2017, slide 5.

  97. 97.

    Ibid., slides 12–17.

  98. 98.

    Ibid., slide 21.

  99. 99.

    Henckaerts and Doswald-Beck 2005.

  100. 100.

    Bellinger and Haynes 2007; UK Legal Adviser to the Foreign and Commonwealth Office, in Kaikobad et al. 2005, p. 695.

  101. 101.

    See, e.g., Wood 2018.

  102. 102.

    See, e.g., Handy 2007, p. 87.

  103. 103.

    AP I, above n 67, Articles 50(1), 52(3); Henckaerts and Doswald-Beck 2005, Rules 6 and 10.

  104. 104.

    AP I, above n 67, Article 57(2)(a)(i); Henckaerts and Doswald-Beck 2005, Rule 16.

  105. 105.

    AP I, above n 67, Article 51(4)(a); Henckaerts and Doswald-Beck 2005, Rules 11, 12.

  106. 106.

    AP I, above n 67, Articles 35(3), 55; Henckaerts and Doswald-Beck 2005, Rules 43–45.

  107. 107.

    AP I, above n 67, Article 50(1). Also see “Situations of doubt as to the character of a person” in Henckaerts and Doswald-Beck 2005, Rule 6.

  108. 108.

    See Schmitt and Vihul 2017, Rule 95, para 1; Henckaerts and Doswald-Beck 2005, Rule 6.

  109. 109.

    See Schmitt and Vihul 2017, Rule 95, para 1.

  110. 110.

    Henckaerts and Doswald-Beck 2005, Rule 6.

  111. 111.

    AP I, above n 67, Article 52(3).

  112. 112.

    Henckaerts and Doswald-Beck 2005, Rule 10.

  113. 113.

    UK Ministry of Defence 2004, para 5.3.4.

  114. 114.

    ICTY, Prosecutor v Stanislav Galić, Judgment and Opinion, 5 December 2003, Case No. IT-98-29-T, para 55.

  115. 115.

    Schmitt and Thurnher 2013, pp. 262–265; Ford 2017, p. 442.

  116. 116.

    AP I, above n 67, Article 57(2)(a)(i); Henckaerts and Doswald-Beck 2005, Rule 16.

  117. 117.

    AP I, above n 67, Article 57(2)(a)(i).

  118. 118.

    Boothby 2016, p. 256.

  119. 119.

    AP I, above n 67, Article 51(4)(a).

  120. 120.

    Henckaerts and Doswald-Beck 2005, Rules 11, 12.

  121. 121.

    There is no expansion in the AP I commentary or case law on how much confidence is required. However, the ICTY Trial Chamber alludes to part (a) of the indiscriminate attack prohibition in ICTY, Prosecutor v Dragomir Milošević, Judgement, 12 December 2007, Case No. IT-98-29/1-T, para 431, although the Chamber focuses upon parts (b) and (c) in its later deliberations and offers no expansion on part (a).

  122. 122.

    Convention on the prohibition of military or any hostile use of environmental modification techniques, opened for signature 18 May 1977, 1108 UNTS 151 (entered into force 5 October 1978) (ENMOD Treaty), Article 1; AP I, above n 67, Articles 35(3), 55.

  123. 123.

    Henckaerts and Doswald-Beck 2005, Rules 43, 44, 45.

  124. 124.

    Humanitarian Policy and Conflict Research 2010, p. 204, Section M, para 4.

  125. 125.

    The use of camouflage is a lawful ruse: it misleads the enemy but does not invite the enemy’s confidence and is therefore not perfidious. See AP I, above n 67, Article 37(2).

  126. 126.

    United Kingdom 2002, para h; France 2001, para 9.

  127. 127.

    ICJ, Legality of the Threat or Use of Nuclear Weapons, Advisory Opinion, 8 July 1996, [1996] ICJ Rep 226, para 78.

  128. 128.

    This customary rule is applicable in both IAC and NIAC. See Henckaerts and Doswald-Beck 2005, Rules 1, 7.

  129. 129.

    AP I, above n 67, Article 48.

  130. 130.

    Murray 2016, para 5.35.

  131. 131.

    Algorithms capable of distinguishing combatants from civilians have been publicised. See Rosenberg M, Markoff J (2016) The Pentagon’s ‘Terminator Conundrum’: Robots That Could Kill on Their Own. https://www.nytimes.com/2016/10/26/us/pentagon-artificial-intelligence-terminator.html. Accessed 1 February 2019. For other algorithms able to distinguish police uniforms from civilians, see Guersenzvaig 2018.

  132. 132.

    AP I, above n 67, Article 43(1).

  133. 133.

    See Gaggioli 2018, pp. 912–915.

  134. 134.

    For more on targeting members of an OAG based upon their membership, see Gaggioli 2018.

  135. 135.

    AP I, above n 67, Article 52(2).

  136. 136.

    Ibid., Article 52(2).

  137. 137.

    “Neutralisation” is used here for brevity, but expected military advantage can also come from “total or partial destruction, [or] capture”. See ibid., Article 52(2).

  138. 138.

    Jachec-Neale 2015, p. 116, see also pp. 117–119.

  139. 139.

    UK Ministry of Defence 2004, para 5.4.4(j).

  140. 140.

    Schmitt and Thurnher 2013, pp. 256 f.

  141. 141.

    Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of Non-International Armed Conflicts (Protocol II), opened for signature 8 June 1977, 1125 UNTS 609 (entered into force 7 December 1978) (AP II), Article 13(3).

  142. 142.

    Humanitarian Policy and Conflict Research 2010, p. 87, Rule 12(a), para 3.

  143. 143.

    Heller, p. 95 f.

  144. 144.

    See Ford 2017, p. 438.

  145. 145.

    AP I, above n 67, Articles 8, 13(2)(a). See Ipsen 2013, para 316.

  146. 146.

    AP I, above n 67, Article 41; Henckaerts and Doswald-Beck 2005, Rule 47.

  147. 147.

    Pike 2011.

  148. 148.

    AP I, above n 67, Article 41(a); Henckaerts and Doswald-Beck 2005, Rule 47(a).

  149. 149.

    AP I, above n 67, Article 48; AP II, above n 141, Article 13; Henckaerts and Doswald-Beck 2005, Rules 1–10, 25–45.

  150. 150.

    AP I, above n 67, Articles 51(4), 51(4)(b), 51(5); Henckaerts and Doswald-Beck 2005, Rules 11, 12.

  151. 151.

    On catastrophic errors, see UK Ministry of Defence 2018, para 4.2.

  152. 152.

    See, e.g., Scharre 2018, pp. 59–76.

  153. 153.

    AP I, above n 67, Article 57(1). According to the ICRC, this is a customary rule applicable in both IAC and NIAC. See Henckaerts and Doswald-Beck 2005, Rule 15.

  154. 154.

    AP I, above n 67, Article 57(2)(a)(ii). According to the ICRC, this is a customary rule applicable in both IAC and NIAC. See Henckaerts and Doswald-Beck 2005, Rule 17.

  155. 155.

    AP I, above n 67, Article 57(2)(c). According to the ICRC, this is a customary rule applicable in both IAC and NIAC. See Henckaerts and Doswald-Beck 2005, Rule 20.

  156. 156.

    AP I, above n 67, Articles 57(2)(a)(iii), 57(2)(b). According to the ICRC, this is a customary rule applicable in both IAC and NIAC. See Henckaerts and Doswald-Beck 2005, Rules 18, 19.

  157. 157.

    AP I, above n 67, Article 57(3). According to the ICRC, this is a customary rule applicable in IAC, and arguably also in NIAC. See Henckaerts and Doswald-Beck 2005, Rule 21.

  158. 158.

    Note that the UK manual extends precautionary duties to all those with discretion over attacks. See UK Ministry of Defence 2004, para 5.32.9. Also note Boothby’s suggestion that, in the case of autonomous attacks, these precautionary duties would extend to those who evaluate intelligence, those who set the areas to be searched and the targets to be attacked, and those who input this information into the system. See Boothby 2016, p. 254.

  159. 159.

    Schmitt and Thurnher 2013, pp. 254–257.

  160. 160.

    ENMOD Treaty, above n 122, Article 1; AP I, above n 67, Articles 35(3), 55. According to the ICRC, this is a customary rule applicable in IAC, and arguably also in NIAC. See Henckaerts and Doswald-Beck 2005, Rules 43, 44, 45.

  161. 161.

    Such an attack would also be prohibited due to the protection of works and installations containing dangerous forces: see AP I, above n 67, Article 56. Boothby also provides an example of an attack against an adverse party’s super-tanker which causes an oil spill. However, as such an object is not of a military nature, this example does not fit perfectly here. See Boothby 2016, p. 84.

  162. 162.

    AP I, above n 67, Articles 35(3), 55.

  163. 163.

    See, e.g., UK Ministry of Defence 2018; Scharre 2018, pp. 321–325; United Kingdom 2016.

  164. 164.

    UK Ministry of Defence 2018, para 4.5.

  165. 165.

    Ibid., para 4.6.

  166. 166.

    For an often-cited example of human-machine teams, see the discussion on “Advanced Chess” in Kasparov 2018, pp. 245–248. For its applicability to human control of weapon systems, see UK Ministry of Defence 2018, para 4.1.

  167. 167.

    See, e.g., BAE Systems (2018) Taranis. https://www.baesystems.com/en/product/taranis. Accessed 1 February 2019.

  168. 168.

    See Skitka et al. 1999, 2000b; Parasuraman and Manzey 2010. On trusting autonomous weapon systems, also see Roff and Danks 2018, pp. 8 f.

  169. 169.

    Skitka et al. 2000a.

  170. 170.

    Algorithms recognising and using the wrong characteristics are not unusual. See, e.g., Vanhemert K (2015) Simple Pictures that State-Of-The-Art AI Still Can’t Recognize. https://www.wired.com/2015/01/simple-pictures-state-art-ai-still-cant-recognize/. Accessed 1 February 2019.

  171. 171.

    See AP I, above n 67, Articles 15, 21, 59(1), 70(4), 71(2), 85(3); Henckaerts and Doswald-Beck 2005, Rules 25, 27, 29–38, 40, 42.

  172. 172.

    AP I, above n 67, Article 48; AP II, above n 141, Article 13; Henckaerts and Doswald-Beck 2005, Rules 1–10, 25–45.

  173. 173.

    AP I, above n 67, Articles 51(4), 51(4)(b), 51(5); Henckaerts and Doswald-Beck 2005, Rules 11, 12.

  174. 174.

    AP I, above n 67, Articles 50(1), 52(3); Henckaerts and Doswald-Beck 2005, Rules 6 and 10.

  175. 175.

    AP I, above n 67, Article 57; Henckaerts and Doswald-Beck 2005, Rules 15–24.

  176. 176.

    AP I, above n 67, Articles 51(5)(b), 57(2)(b); Henckaerts and Doswald-Beck 2005, Rule 14.

  177. 177.

    AP I, above n 67, Articles 15, 21, 59(1), 70(4), 71(2), 85(3); Henckaerts and Doswald-Beck 2005, Rules 25, 27, 29–38, 40, 42.

  178. 178.

    ENMOD Treaty, above n 122, Article 1; AP I, above n 67, Articles 35(3), 55; Henckaerts and Doswald-Beck 2005, Rules 43–45.

  179. 179.

    On catastrophic errors, see UK Ministry of Defence 2018, para 4.2.

  180. 180.

    Gunning 2016, 2017.

  181. 181.

    Sharkey 2016, pp. 34–37.

  182. 182.

    See Sauer 2018; Horowitz and Scharre 2015; Article 36 2016; Crootof 2016.

  183. 183.

    Sassòli 2014, p. 324; Schmitt and Thurnher 2013, p. 267; US Department of Defense 2012, para (4)(a)(3)(a).

  184. 184.

    Ford 2017, p. 456.

  185. 185.

    Sassòli 2014, p. 324.

  186. 186.

    ICTY, Prosecutor v Tihomir Blaškić, Judgement, 29 July 2004, Case No IT-95-14-A, para 417; ICTR, Prosecutor v Clément Kayishema and Obed Ruzindana, Judgement, 1 June 2001, Case No. ICTR-95-1-A, para 302; ICTY, Prosecutor v Momčilo Krajišnik, Judgement, 17 March 2009, Case No. IT-00-39-A, para 193 f.

  187. 187.

    ICC, The Prosecutor v Jean-Pierre Bemba Gombo, Decision Pursuant to Article 61(7)(a) and (b) of the Rome Statute on the Charges of the Prosecutor Against Jean-Pierre Bemba Gombo, 15 June 2009, Case No. ICC-01/05-01/08-424, para 438.

  188. 188.

    Schmitt and Vihul 2017, pp. 399 f.

  189. 189.

    Ibid., pp. 399 f.

  190. 190.

    UK Ministry of Defence 2009, para 125; US Joint Chiefs of Staff 2013, pp. III-1, III-3, III-13–III-20.

  191. 191.

    Ford 2017, p. 474. Ford likens a commander who could not control an autonomous system to a commander who incurred command responsibility for allowing drunk or unstable subordinates to operate. See ICTY, Prosecutor v Zdravko Mucić, Judgement, 20 February 2001, Case No. IT-96-21-A, para 238.

  192. 192.

    Jevglevskaja 2018.

  193. 193.

    Henckaerts and Doswald-Beck 2005, Rule 71, in particular p. 250.

  194. 194.

    UK Ministry of Defence 2016, p. 2.

  195. 195.

    US Department of Defense 2016, paras 6.2–6.2.4; UK Ministry of Defence 2016; Australia 2018.

  196. 196.

    United Kingdom 2016; US Department of Defense 2016, para 6.2.2; Australia 2018, p. 5.

  197. 197.

    United Kingdom 2016; Australia 2018, p. 5.

  198. 198.

    Australia 2018, p. 5.

  199. 199.

    Sandoz et al. 1987, para 1410.

  200. 200.

    See Boothby 2016, pp. 76–91, 347–348.

  201. 201.

    AP I, above n 67, Article 1(2). It states: “[i]n cases not covered by this Protocol or by other international agreements, civilians and combatants remain under the protection and authority of the principles of international law derived from established custom, from the principles of humanity and from the dictates of public conscience.” Note that this provision has appeared in different forms in the 1907 Hague Convention (IV) respecting the Laws and Customs of War on Land and its annex: Regulations concerning the Laws and Customs of War on Land, opened for signature 18 October 1907, International Peace Conference, The Hague, Official Record 631 (entered into force 26 January 1910), all four Geneva Conventions, and AP II.

  202. 202.

    See, e.g., Meron 2000; Cassese 2000; Human Rights Watch 2018.

  203. 203.

    Australia 2018, p. 5.

  204. 204.

    Ibid., p. 5; Sandoz et al. 1987, para 55.

  205. 205.

    Boothby 2016, pp. 348 f.

  206. 206.

    AP I, above n 67, Article 48; AP II, above n 141, Article 13; Henckaerts and Doswald-Beck 2005, Rules 1–10, 25–45.

  207. 207.

    AP I, above n 67, Articles 50(1) and 52(3); Henckaerts and Doswald-Beck 2005, Rule 6.

  208. 208.

    AP I, above n 67, Article 57; Henckaerts and Doswald-Beck 2005, Rules 15–24.

  209. 209.

    Hussain A (2016) AI On The Battlefield: A Framework For Ethical Autonomy. https://www.forbes.com/sites/forbestechcouncil/2016/11/28/ai-on-the-battlefield-a-framework-for-ethical-autonomy/#767535675cf2. Accessed 1 February 2019.

  210. 210.

    Simulations are an accepted method of testing complex technologies. See Gillespie 2015, p. 52.

  211. 211.

    Marcus 2018, pp. 6, 13.

  212. 212.

    Levy S (2015) Inside Deep Dreams: How Google Made its Computers Go Crazy. https://www.wired.com/2015/12/inside-deep-dreams-how-google-made-its-computers-go-crazy/. Accessed 1 February 2019.

  213. 213.

    To see examples of Deep Dream images, or to create your own, see Deep Dream Generator 2018.

  214. 214.

    AP I, above n 67, Article 85.

  215. 215.

    Rome Statute of the International Criminal Court, opened for signature 17 July 1998, 2187 UNTS 3 (entered into force 1 July 2002), Articles 6–8.

  216. 216.

    States are under an obligation to investigate such acts. See AP I, above n 67, Article 85. See also UN General Assembly (2005) Basic Principles and Guidelines on the Right to a Remedy and Reparation for Victims of Gross Violations of International Human Rights Law and Serious Violations of International Humanitarian Law, UN Doc. A/Res/60/147, Article 4.

  217. 217.

    It has been suggested that soldiers experienced with using autonomous systems could create trust between other soldiers and machines. This chapter sees no reason why weapons reviewers could not play the same role. See Roff and Danks 2018, pp. 12 f.

  218. 218.

    Sassòli 2014, p. 324; Schmitt and Thurnher 2013, p. 267; US Department of Defense 2012, para (4)(a)(3)(a).

  219. 219.

    US Department of Defense 2016, para 6.2.2; Australia 2018, p. 5.

  220. 220.

    Boden 2016, pp. 6 f., 108–112.

  221. 221.

    Wilson et al. 2018.

  222. 222.

    MIT Technology Review 2018.

  223. 223.

    In the work by Selbst and Barocas 2017, “high stakes” domains refer to criminal justice, healthcare, welfare, and education. However, due to a similar, if not greater, level of impact on people’s lives, this author also includes armed conflict. See also Roff and Danks 2018, p. 12.

  224. 224.

    Pande V (2018) Artificial Intelligence’s ‘Black Box’ Is Nothing To Fear. https://www.nytimes.com/2018/01/25/opinion/artificial-intelligence-black-box.html. Accessed 1 February 2019.

References

Articles, Books and Other Documents

Case Law

  • ICC, The Prosecutor v Jean-Pierre Bemba Gombo, Decision Pursuant to Article 61(7)(a) and (b) of the Rome Statute on the Charges of the Prosecutor Against Jean-Pierre Bemba Gombo, 15 June 2009, Case No. ICC-01/05-01/08-424


  • ICJ, Legality of the Threat or Use of Nuclear Weapons, Advisory Opinion, 8 July 1996, [1996] ICJ Rep 226


  • ICTR, Prosecutor v Clément Kayishema and Obed Ruzindana, Judgement, 1 June 2001, Case No. ICTR-95-1-A


  • ICTY, Prosecutor v Zdravko Mucić, Judgement, 20 February 2001, Case No. IT-96-21-A


  • ICTY, Prosecutor v Tihomir Blaškić, Judgement, 29 July 2004, Case No IT-95-14-A


  • ICTY, Prosecutor v Stanislav Galić, Judgment and Opinion, 5 December 2003, Case No. IT-98-29-T


  • ICTY, Prosecutor v Momčilo Krajišnik, Judgement, 17 March 2009, Case No. IT-00-39-A


  • ICTY, Prosecutor v Dragomir Milošević, Judgement, 12 December 2007, Case No. IT-98-29/1-T


Treaties

  • Convention (IV) respecting the Laws and Customs of War on Land and its annex: Regulations concerning the Laws and Customs of War on Land, opened for signature 18 October 1907, International Peace Conference, The Hague, Official Record 631 (entered into force 26 January 1910)


  • Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, opened for signature 10 April 1981, 1342 UNTS 137 (entered into force 2 December 1983)


  • Convention on the prohibition of military or any hostile use of environmental modification techniques, opened for signature 18 May 1977, 1108 UNTS 151 (entered into force 5 October 1978)


  • Geneva Convention (I) for the Amelioration of the Condition of the Wounded and Sick in Armed Forces in the Field, opened for signature 12 August 1949, 75 UNTS 31 (entered into force 21 October 1950)


  • Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), opened for signature 8 June 1977, 1125 UNTS 3 (entered into force 7 December 1978)


  • Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of Non-International Armed Conflicts (Protocol II), opened for signature 8 June 1977, 1125 UNTS 609 (entered into force 7 December 1978)


  • Rome Statute of the International Criminal Court, opened for signature 17 July 1998, 2187 UNTS 3 (entered into force 1 July 2002)


  • United Nations Convention on the Law of the Sea, opened for signature on 10 December 1982, 1833 UNTS 3 (entered into force 16 November 1994)



Author information


Correspondence to Joshua G. Hughes.



Copyright information

© 2020 T.M.C. Asser Press and the authors

About this chapter


Cite this chapter

Hughes, J.G. (2020). The Law of Armed Conflict Issues Created by Programming Automatic Target Recognition Systems Using Deep Learning Methods. In: Gill, T., Geiß, R., Krieger, H., Paulussen, C. (eds) Yearbook of International Humanitarian Law, Volume 21 (2018). Yearbook of International Humanitarian Law, vol 21. T.M.C. Asser Press, The Hague. https://doi.org/10.1007/978-94-6265-343-6_4


  • DOI: https://doi.org/10.1007/978-94-6265-343-6_4


  • Publisher Name: T.M.C. Asser Press, The Hague

  • Print ISBN: 978-94-6265-342-9

  • Online ISBN: 978-94-6265-343-6

  • eBook Packages: Law and Criminology (R0)
