Abstract
Deep learning is a method of machine learning that has advanced several headline-grabbing technologies, from self-driving cars to systems recognising mental health issues in medical data. Due to these successes, its capabilities in image and target recognition are currently being researched for use in armed conflicts. However, this programming method contains inherent limitations, including the resultant algorithms’ inability to comprehend context and the near impossibility for humans to understand their decision-making processes. This can create the appearance that the algorithms are functioning as intended even when they are not. This chapter examines these problems, amongst others, with regard to the potential use of deep learning to programme automatic target recognition systems, which may be used in an autonomous weapon system during an armed conflict. It evaluates how the limitations of deep learning affect the ability of these systems to perform target recognition in compliance with the law of armed conflict. Ultimately, it concludes that whilst there are some very narrow circumstances in which these algorithms could be used in compliance with targeting rules, there are significant risks of unlawful targets being selected. Further, these algorithms impair the exercise of legal duties by autonomous weapon system operators, commanders, and weapons reviewers. As such, this chapter concludes that deep learning-generated algorithms should not be used for target recognition by fully-autonomous weapon systems in armed conflicts, unless they can be made in such a way as to understand the context of targeting decisions and be explainable.
Joshua G. Hughes is a Ph.D. Candidate at Lancaster University Law School. The research for this chapter was carried out through a studentship grant from the North-West Consortium Doctoral Training Partnership, funded by the UK Arts and Humanities Research Council. He would like to thank Professor James Sweeney for his assistance and advice in writing this chapter. He would also like to thank the two anonymous reviewers for their very helpful comments. All errors remain the author’s own.
Notes
- 1.
Steinberg R (2017) 6 areas where artificial neural networks outperform humans. https://venturebeat.com/2017/12/08/6-areas-where-artificial-neural-networks-outperform-humans/. Accessed 1 February 2019.
- 2.
See Geirhos et al. 2018, p. 1; Sharma N, Blumenstein M (2018) SharkSpotter combines AI and drone technology to spot sharks and aid swimmers on Australian beaches. https://theconversation.com/sharkspotter-combines-ai-and-drone-technology-to-spot-sharks-and-aid-swimmers-on-australian-beaches-92667. Accessed 1 February 2019.
- 3.
Marcus 2018, p. 2.
- 4.
Hawkins A (2018) Inside the Lab where Waymo is Building the Brains for its Driverless Cars. https://www.theverge.com/2018/5/9/17307156/google-waymo-driverless-cars-deep-learning-neural-net-interview. Accessed 1 February 2019.
- 5.
- 6.
Kasparov 2018, p. 265.
- 7.
- 8.
Boulanin and Verbruggen 2017, p. 26.
- 9.
- 10.
SBIR 2018.
- 11.
TASS (2017) Kalashnikov gunmaker develops combat module based on artificial intelligence. http://tass.com/defense/954894. Accessed 1 February 2019.
- 12.
See, e.g., Rogers et al. 1995.
- 13.
See, e.g., Furukawa 2018.
- 14.
SBIR 2018.
- 15.
TASS (2017) Kalashnikov gunmaker develops combat module based on artificial intelligence. http://tass.com/defense/954894. Accessed 1 February 2019.
- 16.
US Department of Defense 2012, p. 13.
- 17.
Ibid., p. 14.
- 18.
For an overview of the major issues, see Bhuta et al. 2016.
- 19.
Scharre 2018, p. 91.
- 20.
Boden 2016, p. 1.
- 21.
Jajal 2018.
- 22.
Ibid.
- 23.
Ibid.
- 24.
Boden 2016, pp. 6 f.
- 25.
Fry 2018, pp. 10 f.
- 26.
Ibid., pp. 10 f.
- 27.
Ibid.
- 28.
Ibid.
- 29.
The various data types used in a multitude of algorithms are discussed in Fry 2018.
- 30.
Boden 2016, p. 47.
- 31.
Ibid., p. 80.
- 32.
Ibid., p. 49; Goodfellow et al. 2016, p. 87.
- 33.
Boden 2016, p. 49.
- 34.
- 35.
Fry 2018, p. 86.
- 36.
- 37.
Scharre 2018, pp. 327–330.
- 38.
Alston 2011, p. 43.
- 39.
Scharre 2018, pp. 328–330.
- 40.
See Gill 2018.
- 41.
See UNOG 2018 and links. These discussions took place under the auspices of the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, opened for signature 10 April 1981, 1342 UNTS 137 (entered into force 2 December 1983).
- 42.
For more on the movement towards prohibiting AWS, see Campaign to Stop Killer Robots 2019a.
- 43.
“Consensus” meaning the absence of formal disagreement. Definition taken from United Nations Convention on the Law of the Sea, opened for signature on 10 December 1982, 1833 UNTS 3 (entered into force 16 November 1994), Article 161(8)(e). In terms of disagreements, see Campaign to Stop Killer Robots 2018a, 2018b.
- 44.
Boothby 2019, pp. 145–150.
- 45.
- 46.
- 47.
- 48.
For an overview of state and some NGO views on characterising AWS, see Gill 2018.
- 49.
Boulanin and Verbruggen 2017.
- 50.
- 51.
Whilst this chapter is focussed upon algorithms, it does not use the “war algorithm” concept. Although this is an important concept, considering “war algorithms” as “any algorithm […] capable of operating in relation to armed conflict” does not enable greater understanding of the issues emanating from deep learning. See Lewis et al. 2016.
- 52.
- 53.
See, e.g., Sejnowski 2018, pp. 7, 8.
- 54.
Scharre 2018, pp. 124–130.
- 55.
Boulanin and Verbruggen 2017, pp. 17, 25–26, 114, 120.
- 56.
iPRAW 2017, p. 12.
- 57.
Farrant and Ford 2017, pp. 399–404.
- 58.
Ibid., p. 404.
- 59.
See iPRAW 2017, p. 11 on off-line and on-line learning.
- 60.
Brandom R (2018) Self-Driving Cars are Headed toward an AI Roadblock. https://www.theverge.com/2018/7/3/17530232/self-driving-ai-winter-full-autonomy-waymo-tesla-uber. Accessed 1 February 2019; Brown J (2018) IBM Watson Reportedly Recommended Cancer Treatments that were ‘Unsafe and Incorrect’. https://gizmodo.com/ibm-watson-reportedly-recommended-cancer-treatments-tha-1827868882. Accessed 1 February 2019.
- 61.
See, e.g., Marcus 2018.
- 62.
Marcus 2018, pp. 7 f.
- 63.
Boulanin and Verbruggen 2017, p. 17.
- 64.
- 65.
See, e.g., Jackson 2018.
- 66.
Sharkey 2018.
- 67.
See, e.g., Common Article 3(1) to the Geneva Conventions, e.g. Geneva Convention (I) for the Amelioration of the Condition of the Wounded and Sick in Armed Forces in the Field, opened for signature 12 August 1949, 75 UNTS 31 (entered into force 21 October 1950); Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), opened for signature 8 June 1977, 1125 UNTS 3 (entered into force 7 December 1978) (AP I), Article 84(4)(c). Also note AP I, Articles 9(1), 75(1).
- 68.
- 69.
Under-fitting is where an algorithm is too simple to capture the underlying structure of its training data, such that nothing can be adequately recognised. Over-fitting is where an algorithm is too specific and can only recognise its training data. Both lead to sub-optimal performance. See Brownlee 2016.
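To make these two failure modes concrete, the following is a minimal editorial sketch (not from the chapter or from Brownlee 2016) in Python, using scikit-learn and entirely synthetic data: the same noisy curve is fitted with polynomials that are too simple (under-fitting), roughly right, and too flexible (over-fitting).

```python
# Illustration only: under- and over-fitting on synthetic data.
# Requires numpy and scikit-learn; all parameters are arbitrary choices.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 1, 30)).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.1, 30)  # noisy signal

X_test = np.linspace(0, 1, 100).reshape(-1, 1)
y_test = np.sin(2 * np.pi * X_test).ravel()                  # clean signal

for degree in (1, 4, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X, y)
    train_err = mean_squared_error(y, model.predict(X))
    test_err = mean_squared_error(y_test, model.predict(X_test))
    print(f"degree {degree:2d}: train MSE {train_err:.4f}, test MSE {test_err:.4f}")
```

The degree-1 model shows high error on both training and test data (under-fitting), whilst the degree-15 model drives its training error towards zero but performs poorly on unseen data — the “recognises only its training data” problem described above.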
- 70.
Marcus 2018, pp. 6, 13. Techniques such as transfer learning and domain adaptation can allow algorithms to be used in different environments than those they were trained for, but they only work with simple data sets and so are inappropriate for use with data from constantly changing armed conflicts. See Goodfellow et al. 2016, pp. 526 ff.; Domingos 2015, p. 115.
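For readers unfamiliar with the technique: transfer learning, in its simplest form, reuses the features a network learned on one large dataset and retrains only a small final portion for a new task. Below is a minimal editorial sketch in Python/PyTorch (not the chapter’s method; the two-class task, batch size, and learning rate are arbitrary assumptions, the dummy tensors merely stand in for real images, and a recent torchvision installation is assumed):

```python
# Illustration only: reuse an ImageNet-pretrained ResNet-18 as a fixed
# feature extractor and retrain only its final layer for a new task.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False          # freeze the pretrained features

num_classes = 2                          # hypothetical new task
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on dummy data standing in for the new domain.
x = torch.randn(8, 3, 224, 224)          # batch of "images"
y = torch.randint(0, num_classes, (8,))  # labels
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```

The frozen features are only useful insofar as the new domain resembles the old one, which is precisely why the note above doubts the technique’s suitability for data from constantly changing armed conflicts.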
- 71.
- 72.
Scharre 2018, pp. 181 f.
- 73.
This is the comparison between the concrete and direct military advantage to be expected from an operation and the level of incidental harm to civilians, the civilian population, and civilian objects, or a combination thereof. An excessive level of civilian harm in relation to the military advantage would be unlawful. See AP I, above n 67, Articles 51(5)(b), 57(2)(b); Henckaerts and Doswald-Beck 2005, Rule 14.
- 74.
Marcus 2018, pp. 7 f.
- 75.
Handy 2007, p. 87.
- 76.
Srinivasan 2016; Abate T (2013) Stanford algorithm analyzes sentence sentiment, advances machine learning. https://engineering.stanford.edu/magazine/article/stanford-algorithm-analyzes-sentence-sentiment-advances-machine-learning. Accessed 1 February 2019.
- 77.
Characteristics recognised by deep learning systems would not easily apply to concepts of “justice” or “democracy”, or even to notions of military advantage, for example. See Marcus 2018, p. 7.
- 78.
Srinivasan 2016.
- 79.
Anderson et al. 2014, pp. 388–395.
- 80.
Fry 2018, pp. 10 f.
- 81.
Domingos 2015, p. 117.
- 82.
Marcus 2018, p. 11.
- 83.
This would be different from “codespace”, which relates to how physical spaces are altered by technology. See Kitchin and Dodge 2011.
- 84.
Nguyen et al. 2016.
- 85.
Marcus 2018, p. 11.
- 86.
Hussain A (2016) AI On The Battlefield: A Framework For Ethical Autonomy. https://www.forbes.com/sites/forbestechcouncil/2016/11/28/ai-on-the-battlefield-a-framework-for-ethical-autonomy/#767535675cf2. Accessed 1 February 2019.
- 87.
- 88.
Ananny and Crawford 2016, p. 7.
- 89.
See Kimball W (2019) Why Is It Called A Black Box If It’s Actually Orange? http://www.hopesandfears.com/hopes/now/question/168795-why-is-it-called-a-black-box-if-it-s-actually-orange. Accessed 1 February 2019.
- 90.
Ananny and Crawford 2016, p. 9.
- 91.
Knight W (2017) The Dark Secret at the Heart of AI. https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/. Accessed 1 February 2019.
- 92.
- 93.
For another example, also see Vanhemert K (2015) Simple Pictures that State-Of-The-Art AI Still Can’t Recognize. https://www.wired.com/2015/01/simple-pictures-state-art-ai-still-cant-recognize/. Accessed 1 February 2019. Further, algorithms that only give a semblance of compliance will still suggest highly-accurate image recognition, which would create additional problems in relation to automation bias (see for this below, Sect. 4.6). See Nguyen et al. 2015, Szegedy et al. 2014.
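The fragility these papers document can be reproduced with very little code. Below is a minimal editorial sketch of a gradient-sign perturbation, one standard construction from the adversarial-examples literature (related to, but not identical with, the methods in Nguyen et al. 2015 and Szegedy et al. 2014); the random input tensor merely stands in for a real image, and a recent PyTorch/torchvision installation is assumed:

```python
# Illustration only: nudge each pixel slightly in the direction that
# increases the model's loss, then check whether the prediction flips.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for an image
label = model(x).argmax(dim=1)                      # model's current prediction

loss = loss_fn(model(x), label)
loss.backward()                                     # gradient of loss w.r.t. pixels

epsilon = 0.03                                      # tiny per-pixel budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("before:", label.item(), "after:", model(x_adv).argmax(dim=1).item())
```

In reported experiments, a perturbation of this size can change the predicted class while leaving the image visually unchanged to a human — the kind of high-confidence error, and mere semblance of compliance, that the note above warns about.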
- 94.
- 95.
Gunning 2016.
- 96.
Gunning 2017, slide 5.
- 97.
Ibid., slides 12–17.
- 98.
Ibid., slide 21.
- 99.
Henckaerts and Doswald-Beck 2005.
- 100.
- 101.
See, e.g., Wood 2018.
- 102.
See, e.g., Handy 2007, p. 87.
- 103.
AP I, above n 67, Articles 50(1), 52(3); Henckaerts and Doswald-Beck 2005, Rules 6 and 10.
- 104.
AP I, above n 67, Article 57(2)(a)(i); Henckaerts and Doswald-Beck 2005, Rule 16.
- 105.
AP I, above n 67, Article 51(4)(a); Henckaerts and Doswald-Beck 2005, Rules 11, 12.
- 106.
AP I, above n 67, Articles 35(3), 55; Henckaerts and Doswald-Beck 2005, Rules 43–45.
- 107.
AP I, above n 67, Article 50(1). Also see “Situations of doubt as to the character of a person” in Henckaerts and Doswald-Beck 2005, Rule 6.
- 108.
- 109.
See Schmitt and Vihul 2017, Rule 95, para 1.
- 110.
Henckaerts and Doswald-Beck 2005, Rule 6.
- 111.
AP I, above n 67, Article 52(3).
- 112.
Henckaerts and Doswald-Beck 2005, Rule 10.
- 113.
UK Ministry of Defence 2004, para 5.3.4.
- 114.
ICTY, Prosecutor v Stanislav Galić, Judgment and Opinion, 5 December 2003, Case No. IT-98-29-T, para 55.
- 115.
- 116.
AP I, above n 67, Article 57(2)(a)(i); Henckaerts and Doswald-Beck 2005, Rule 16.
- 117.
AP I, above n 67, Article 57(2)(a)(i).
- 118.
Boothby 2016, p. 256.
- 119.
AP I, above n 67, Article 51(4)(a).
- 120.
Henckaerts and Doswald-Beck 2005, Rules 11, 12.
- 121.
There is no expansion in the AP I commentary or case law on how much confidence is required. However, the ICTY Trial Chamber alludes to part (a) of the indiscriminate attack prohibition in ICTY, Prosecutor v Dragomir Milošević, Judgement, 12 December 2007, Case No. IT-98-29/1-T, para 431, although the Chamber focuses upon parts (b) and (c) in its later deliberations and offers no expansion on part (a).
- 122.
Convention on the prohibition of military or any hostile use of environmental modification techniques, opened for signature 18 May 1977, 1108 UNTS 151 (entered into force 5 October 1978) (ENMOD Treaty), Article 1; AP I, above n 67, Articles 35(3), 55.
- 123.
Henckaerts and Doswald-Beck 2005, Rules 43, 44, 45.
- 124.
Humanitarian Policy and Conflict Research 2010, p. 204, Section M, para 4.
- 125.
The use of camouflage is a lawful ruse: it misleads the enemy but does not invite the enemy’s confidence, and is therefore not perfidious. See AP I, above n 67, Article 37(2).
- 126.
- 127.
ICJ, Legality of the Threat or Use of Nuclear Weapons, Advisory Opinion, 8 July 1996, [1996] ICJ Rep 226, para 78.
- 128.
This customary rule is applicable in both IAC and NIAC. See Henckaerts and Doswald-Beck 2005, Rules 1, 7.
- 129.
AP I, above n 67, Article 48.
- 130.
Murray 2016, para 5.35.
- 131.
Algorithms capable of distinguishing combatants from civilians have been publicised. See Rosenberg M, Markoff J (2016) The Pentagon’s ‘Terminator Conundrum’: Robots That Could Kill on Their Own. https://www.nytimes.com/2016/10/26/us/pentagon-artificial-intelligence-terminator.html. Accessed 1 February 2019. For other algorithms able to distinguish police uniforms from civilians, see Guersenzvaig 2018.
- 132.
AP I, above n 67, Article 43(1).
- 133.
See Gaggioli 2018, pp. 912–915.
- 134.
For more on targeting members of an OAG based upon their membership, see Gaggioli 2018.
- 135.
AP I, above n 67, Article 52(2).
- 136.
Ibid., Article 52(2).
- 137.
“Neutralisation” is used here for brevity, but expected military advantage can also come from “total or partial destruction, [or] capture”. See ibid., Article 52(2).
- 138.
Jachec-Neale 2015, p. 116; see also pp. 117–119.
- 139.
UK Ministry of Defence 2004, para 5.4.4(j).
- 140.
Schmitt and Thurnher 2013, pp. 256 f.
- 141.
Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of Non-International Armed Conflicts (Protocol II), opened for signature 8 June 1977, 1125 UNTS 609 (entered into force 7 December 1978) (AP II), Article 13(3).
- 142.
Humanitarian Policy and Conflict Research 2010, p. 87, Rule 12(a), para 3.
- 143.
Heller 2013, pp. 95 f.
- 144.
See Ford 2017, p. 438.
- 145.
AP I, above n 67, Articles 8, 13(2)(a). See Ipsen 2013, para 316.
- 146.
AP I, above n 67, Article 41; Henckaerts and Doswald-Beck 2005, Rule 47.
- 147.
Pike 2011.
- 148.
AP I, above n 67, Article 41(a); Henckaerts and Doswald-Beck 2005, Rule 47(a).
- 149.
AP I, above n 67, Article 48; AP II, above n 141, Article 13; Henckaerts and Doswald-Beck 2005, Rules 1–10, 25–45.
- 150.
AP I, above n 67, Articles 51(4), 51(4)(b), 51(5); Henckaerts and Doswald-Beck 2005, Rules 11, 12.
- 151.
On catastrophic errors, see UK Ministry of Defence 2018, para 4.2.
- 152.
See, e.g., Scharre 2018, pp. 59–76.
- 153.
AP I, above n 67, Article 57(1). According to the ICRC, this is a customary rule applicable in both IAC and NIAC. See Henckaerts and Doswald-Beck 2005, Rule 15.
- 154.
AP I, above n 67, Article 57(2)(a)(ii). According to the ICRC, this is a customary rule applicable in both IAC and NIAC. See Henckaerts and Doswald-Beck 2005, Rule 17.
- 155.
AP I, above n 67, Article 57(2)(c). According to the ICRC, this is a customary rule applicable in both IAC and NIAC. See Henckaerts and Doswald-Beck 2005, Rule 20.
- 156.
AP I, above n 67, Articles 57(2)(a)(iii), 57(2)(b). According to the ICRC, this is a customary rule applicable in both IAC and NIAC. See Henckaerts and Doswald-Beck 2005, Rules 18, 19.
- 157.
AP I, above n 67, Article 57(3). According to the ICRC, this is a customary rule applicable in IAC, and arguably also in NIAC. See Henckaerts and Doswald-Beck 2005, Rule 21.
- 158.
Note that the UK manual extends precautionary duties to all those with discretion over attacks. See UK Ministry of Defence 2004, para 5.32.9. Also note Boothby’s suggestion that in the case of autonomous attacks, these precautionary duties would extend to those who evaluate intelligence, to those who set areas to be searched and targets to be attacked, and those who input this information into the system. See Boothby 2016, p. 254.
- 159.
Schmitt and Thurnher 2013, pp. 254–257.
- 160.
ENMOD Treaty, above n 122, Article 1; AP I, above n 67, Articles 35(3), 55. According to the ICRC, this is a customary rule applicable in IAC, and arguably also in NIAC. See Henckaerts and Doswald-Beck 2005, Rules 43, 44, 45.
- 161.
Such an attack would also be prohibited due to the protection of works and installations containing dangerous forces, see AP I, above n 67, Article 56. Boothby also provides an example of an attack against an adverse party’s super-tanker which causes an oil spill. However, as such an object is not of a military nature, this example does not fit perfectly here. See Boothby 2016, p. 84.
- 162.
AP I, above n 67, Articles 35(3), 55.
- 163.
- 164.
UK Ministry of Defence 2018, para 4.5.
- 165.
Ibid., para 4.6.
- 166.
- 167.
See, e.g., BAE Systems (2018) Taranis. https://www.baesystems.com/en/product/taranis. Accessed 1 February 2019.
- 168.
- 169.
Skitka et al. 2000a.
- 170.
Algorithms recognising and using the wrong characteristics are not unusual. See, e.g., Vanhemert K (2015) Simple Pictures that State-Of-The-Art AI Still Can’t Recognize. https://www.wired.com/2015/01/simple-pictures-state-art-ai-still-cant-recognize/. Accessed 1 February 2019.
- 171.
See AP I, above n 67, Articles 15, 21, 59(1), 70(4), 71(2), 85(3); Henckaerts and Doswald-Beck 2005, Rules 25, 27, 29–38, 40, 42.
- 172.
AP I, above n 67, Article 48; AP II, above n 141, Article 13; Henckaerts and Doswald-Beck 2005, Rules 1–10, 25–45.
- 173.
AP I, above n 67, Article 51(4), 51(4)(b), 51(5); Henckaerts and Doswald-Beck 2005, Rules 11, 12.
- 174.
AP I, above n 67, Articles 50(1), 52(3); Henckaerts and Doswald-Beck 2005, Rules 6 and 10.
- 175.
AP I, above n 67, Article 57; Henckaerts and Doswald-Beck 2005, Rules 15–24.
- 176.
AP I, above n 67, Articles 51(5)(b), 57(2)(b); Henckaerts and Doswald-Beck 2005, Rule 14.
- 177.
AP I, above n 67, Articles 15, 21, 59(1), 70(4), 71(2), 85(3); Henckaerts and Doswald-Beck 2005, Rules 25, 27, 29–38, 40, 42.
- 178.
ENMOD Treaty, above n 122, Article 1; AP I, above n 67, Articles 35(3), 55; Henckaerts and Doswald-Beck 2005, Rules 43–45.
- 179.
On catastrophic errors, see UK Ministry of Defence 2018, para 4.2.
- 180.
- 181.
Sharkey 2016, pp. 34–37.
- 182.
- 183.
- 184.
Ford 2017, p. 456.
- 185.
Sassòli 2014, p. 324.
- 186.
ICTY, Prosecutor v Tihomir Blaškić, Judgement, 29 July 2004, Case No IT-95-14-A, para 417; ICTR, Prosecutor v Clément Kayishema and Obed Ruzindana, Judgement, 1 June 2001, Case No. ICTR-95-1-A, para 302; ICTY, Prosecutor v Momčilo Krajišnik, Judgement, 17 March 2009, Case No. IT-00-39-A, para 193 f.
- 187.
ICC, The Prosecutor v Jean-Pierre Bemba Gombo, Decision Pursuant to Article 61(7)(a) and (b) of the Rome Statute on the Charges of the Prosecutor Against Jean-Pierre Bemba Gombo, 15 June 2009, Case No. ICC-01/05-01/08-424, para 438.
- 188.
Schmitt and Vihul 2017, pp. 399 f.
- 189.
Ibid., pp. 399 f.
- 190.
- 191.
Ford 2017, p. 474. For the purposes of command responsibility, Ford likens a commander who could not control an autonomous system to a commander who allowed drunk or unstable subordinates to operate. See ICTY, Prosecutor v Zdravko Mucić, Judgement, 20 February 2001, Case No. IT-96-21-A, para 238.
- 192.
Jevglevskaja 2018.
- 193.
Henckaerts and Doswald-Beck 2005, Rule 71, in particular p. 250.
- 194.
UK Ministry of Defence 2016, p. 2.
- 195.
- 196.
- 197.
- 198.
Australia 2018, p. 5.
- 199.
Sandoz et al. 1987, para 1410.
- 200.
See Boothby 2016, pp. 76–91, 347–348.
- 201.
AP I, above n 67, Article 1(2). It states: “[i]n cases not covered by this Protocol or by other international agreements, civilians and combatants remain under the protection and authority of the principles of international law derived from established custom, from the principles of humanity and from the dictates of public conscience.” Note that this provision has appeared in different forms in the 1907 Hague Convention (IV) respecting the Laws and Customs of War on Land and its annex: Regulations concerning the Laws and Customs of War on Land, opened for signature 18 October 1907, International Peace Conference, The Hague, Official Record 631 (entered into force 26 January 1910), all four Geneva Conventions, and AP II.
- 202.
- 203.
Australia 2018, p. 5.
- 204.
Ibid., p. 5; Sandoz et al. 1987, para 55.
- 205.
Boothby 2016, pp. 348 f.
- 206.
AP I, above n 67, Article 48; AP II, above n 141, Article 13; Henckaerts and Doswald-Beck 2005, Rules 1–10, 25–45.
- 207.
AP I, above n 67, Articles 50(1) and 52(3); Henckaerts and Doswald-Beck 2005, Rule 6.
- 208.
AP I, above n 67, Article 57; Henckaerts and Doswald-Beck 2005, Rules 15–24.
- 209.
Hussain A (2016) AI On The Battlefield: A Framework For Ethical Autonomy. https://www.forbes.com/sites/forbestechcouncil/2016/11/28/ai-on-the-battlefield-a-framework-for-ethical-autonomy/#767535675cf2. Accessed 1 February 2019.
- 210.
Simulations are an accepted method of testing complex technologies. See Gillespie 2015, p. 52.
- 211.
Marcus 2018, pp. 6, 13.
- 212.
Levy S (2015) Inside Deep Dreams: How Google Made its Computers Go Crazy. https://www.wired.com/2015/12/inside-deep-dreams-how-google-made-its-computers-go-crazy/. Accessed 1 February 2019.
- 213.
To see examples of Deep Dream images, or to create your own, see Deep Dream Generator 2018.
- 214.
AP I, above n 67, Article 85.
- 215.
Rome Statute of the International Criminal Court, opened for signature 17 July 1998, 2187 UNTS 3 (entered into force 1 July 2002), Articles 6–8.
- 216.
States are under an obligation to investigate such acts. See AP I, above n 67, Article 85. See also UN General Assembly (2005) Basic Principles and Guidelines on the Right to a Remedy and Reparation for Victims of Gross Violations of International Human Rights Law and Serious Violations of International Humanitarian Law, UN Doc. A/Res/60/147, Article 4.
- 217.
It has been suggested that soldiers experienced with using autonomous systems could create trust between other soldiers and machines. This chapter sees no reason why weapons reviewers could not play the same role. See Roff and Danks 2018, pp. 12 f.
- 218.
- 219.
- 220.
Boden 2016, pp. 6 f., 108–112.
- 221.
Wilson et al. 2018.
- 222.
MIT Technology Review 2018.
- 223.
- 224.
Pande V (2018) Artificial Intelligence’s ‘Black Box’ Is Nothing To Fear. https://www.nytimes.com/2018/01/25/opinion/artificial-intelligence-black-box.html. Accessed 1 February 2019.
References
Articles, Books and Other Documents
Alston P (2011) Lethal Robotic Technologies: The Implications for Human Rights and International Humanitarian Law. Journal of Law, Information and Science 21(2):35–60
Ananny M, Crawford K (2016) Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society 20:973–989
Anderson K, Reisner D, Waxman M (2014) Adapting the Law of Armed Conflict to Autonomous Weapon Systems. International Law Studies 90:386–411
Arkin R (2009) Governing Lethal Behaviour in Autonomous Robots. CRC Press, Boca Raton
Article 36 (2016) Key elements of meaningful human control. Background Paper. http://www.article36.org/wp-content/uploads/2016/04/MHC-2016-FINAL.pdf. Accessed 1 February 2019
Asaro P (2012) On banning autonomous weapon systems: human rights, automation, and the dehumanization of lethal decision-making. International Review of the Red Cross 94(886):687–709
Australia (2018) The Australian Article 36 Process, UN Doc. CCW/GGE.2/2018/WP.6
Barocas S, Selbst AD (2016) Big Data’s Disparate Impact. California Law Review 104:671–732
Bellinger JB, Haynes WJ (2007) A US government response to the International Committee of the Red Cross study Customary International Humanitarian Law. International Review of the Red Cross 89(866):443–471
Bhuta N, Beck S, Geiss R, Liu H-Y, Kress C (2016) Autonomous Weapons Systems. Cambridge University Press, Cambridge
Boden MA (2016) AI: Its Nature and Future. Oxford University Press, Oxford
Boothby WH (2016) Weapons and the Law of Armed Conflict, 2nd edn. Oxford University Press, Oxford
Boothby WH (2019) Highly Automated and Autonomous technologies. In: Boothby WH (ed) New Technologies and the Law in War and Peace. Cambridge University Press, Cambridge, pp. 137–181
Boulanin V, Verbruggen M (2017) Mapping The Development of Autonomy in Weapon Systems. SIPRI, Stockholm
Brownlee J (2016) Overfitting and Underfitting With Machine Learning Algorithms. https://machinelearningmastery.com/overfitting-and-underfitting-with-machine-learning-algorithms/. Accessed 1 February 2019
Campaign to Stop Killer Robots (2018a) Country Views on Killer Robots. https://www.stopkillerrobots.org/wp-content/uploads/2018/04/KRC_CountryViews_13Apr2018.pdf. Accessed 1 February 2019
Campaign to Stop Killer Robots (2018b) Convergence on retaining human control of weapons systems. https://www.stopkillerrobots.org/2018/04/convergence/. Accessed 1 February 2019
Campaign to Stop Killer Robots (2019a) Home. https://www.stopkillerrobots.org/. Accessed 1 February 2019
Campaign to Stop Killer Robots (2019b) Learn. https://www.stopkillerrobots.org/learn/. Accessed 1 February 2019
Cassese A (2000) The Martens Clause: Half a Loaf or Simply Pie in the Sky? European Journal of International Law 11(1):187–216
Chengeta T (2016) Accountability Gap: Autonomous Weapon Systems And Modes Of Responsibility In International Law. Denver Journal of International Law 45(1):1–50
Crootof R (2015) The Killer Robots Are Here: Legality and Policy Implications. Cardozo Law Review 36:1837–1915
Crootof R (2016) A Meaningful Floor for “Meaningful Human Control”. Temple International & Comparative Law Journal 30(1):53–62
Cummings ML (2017) Artificial Intelligence and the Future of Warfare. Chatham House. https://www.chathamhouse.org/sites/default/files/publications/research/2017-01-26-artificial-intelligence-future-warfare-cummings-final.pdf. Accessed 24 April 2019
Deep Dream Generator (2018) Human AI Collaboration. https://deepdreamgenerator.com/. Accessed 1 February 2019
Deep Learning Playground (2019) Tinker With a Neural Network Right Here in Your Browser. Don’t Worry, You Can’t Break It. We Promise. http://playground.tensorflow.org. Accessed 1 February 2019
Domingos P (2015) The Master Algorithm. Perseus Books, New York
Etzioni A, Etzioni O (2017) Pros and Cons of Autonomous Weapons Systems. Military Review May-June:71–82
Farrant J, Ford CM (2017) Autonomous Weapons and Weapon Reviews: The UK Second International Weapon Review Forum. International Law Studies 93:389–422
Ford CM (2017) Autonomous Weapons and International Law. South Carolina Law Review 69:413–478
France (2001) Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), 8 June 1977. Reservation/Declaration upon ratification. ICRC IHL Database. https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Notification.xsp?action=openDocument&documentId=D8041036B40EBC44C1256A34004897B2. Accessed 24 April 2019
Fry H (2018) Hello World: How to be Human in the Age of the Machine. Penguin, London
Furukawa H (2018) Deep Learning for End-to-End Automatic Target Recognition from Synthetic Aperture Radar Imagery. The Institute of Electronics, Information and Communication Engineers. https://arxiv.org/pdf/1801.08558.pdf. Accessed 1 February 2019
Gaggioli G (2018) Targeting Individuals Belonging to an Armed Group. Vanderbilt Journal of Transnational Law 51:901–917
Geirhos R, Janssen DHJ, Schütt HH, Rauber J, Bethge M, Wichmann FA (2018) Comparing deep neural networks against humans: object recognition when the signal gets weaker. https://arxiv.org/pdf/1706.06969.pdf. Accessed 1 February 2019
Gill A (2018) Characterization of the systems under consideration in order to promote a common understanding on concepts and characteristics relevant to the objectives and purposes of the Convention. https://www.unog.ch/80256EDD006B8954/(httpAssets)/C43B731506CE4D35C1258272003399DB/$file/Chart.1+Updated.pdf. Accessed 1 February 2019
Gillespie T (2015) New Technologies and Design for the Laws of Armed Conflict. The RUSI Journal 160(6):50–56
Goodfellow I, Bengio Y, Courville A (2016) Deep Learning. MIT Press, Cambridge
Guersenzvaig A (2018) Autonomous Weapon Systems: Failing the Principle of Discrimination. IEEE Technology and Society Magazine 37(1):55–61
Gunning D (2016) Explainable Artificial Intelligence (XAI). Defense Advanced Research Projects Agency. https://www.darpa.mil/program/explainable-artificial-intelligence. Accessed 1 February 2019
Gunning D (2017) Explainable Artificial Intelligence (XAI): Program Update November 2017. Defense Advanced Research Projects Agency. https://www.darpa.mil/attachments/XAIProgramUpdate.pdf. Accessed 1 February 2019
Handy B (2007) Royal Air Force: Aircraft And Weapons. Royal Air Force, London
Heller KJ (2013) ‘One Hell Of A Killing Machine’: Signature Strikes And International Law. Journal of International Criminal Justice 11:89–119
Henckaerts JM, Doswald-Beck L (2005) Customary International Humanitarian Law, Volume I: Rules. Cambridge University Press, Cambridge
Heyns C (2017) Autonomous weapons in armed conflict and the right to a dignified life: an African perspective. South African Journal on Human Rights 33(1):46–71
Horowitz M, Scharre P (2015) Meaningful Human Control in Weapon Systems: A Primer. Center for a New American Security. https://s3.amazonaws.com/files.cnas.org/documents/Ethical_Autonomy_Working_Paper_031315.pdf?mtime=20160906082316. Accessed 1 February 2019
Human Rights Watch (2012) Losing Humanity: The Case against Killer Robots. https://www.hrw.org/sites/default/files/reports/arms1112_ForUpload.pdf. Accessed 11 April 2019
Human Rights Watch (2018) Heed the Call: A Moral and Legal Imperative to Ban Killer Robots. https://www.hrw.org/sites/default/files/report_pdf/arms0818_web.pdf. Accessed 11 April 2019
Humanitarian Policy and Conflict Research (2010) Commentary to the HPCR Manual on International Law Applicable to Air and Missile Warfare. Harvard University Press, Cambridge
iPRAW (2017) Focus on Computational Methods in the context of LAWS. https://www.ipraw.org/wp-content/uploads/2017/11/2017-11-10_iPRAW_Focus-On-Report-2.pdf. Accessed 24 April 2019
Ipsen K (2013) Combatants and Non-combatants. In: Fleck D (ed) The Handbook of International Humanitarian Law, 3rd edn. Oxford University Press, Oxford, pp. 79–113
Jachec-Neale A (2015) The Concept Of Military Objectives In International Law And Targeting Practice. Routledge, London/New York
Jajal TD (2018) Distinguishing between Narrow AI, General AI and Super AI. Medium. https://medium.com/@tjajal/distinguishing-between-narrow-ai-general-ai-and-super-ai-a4bc44172e22. Accessed 1 February 2019
Jackson JR (2018) Algorithmic Bias. Journal of Leadership, Accountability and Ethics 15(4):55–65
Jevglevskaja N (2018) Weapons Review Obligation under Customary International Law. International Law Studies 94:185–221
Kaikobad K, Hartmann J, Shah S, Warbrick C (2005) United Kingdom Materials on International Law. British Yearbook of International Law 76(1):684–970
Kasparov G (2018) Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins. John Murray Publishers, Croydon
Kaufman J (2015) Detecting Tanks. https://www.jefftk.com/p/detecting-tanks. Accessed 1 February 2019
Kitchin R, Dodge M (2011) Code/Space: Software and Everyday Life. MIT Press, Cambridge
Leveringhaus A (2016) Ethics and Autonomous Weapons. Palgrave Macmillan, London
Lewis DA, Blum G, Modirzadeh NK (2016) War-Algorithm Accountability. Harvard Law School Program on International Law and Armed Conflict. https://blogs.harvard.edu/pilac/files/2016/09/War-Algorithm-Accountability-Appendices-Only-Searchable-August-2016.pdf. Accessed 24 April 2019
Marcus G (2018) Deep Learning: A Critical Appraisal. https://arxiv.org/ftp/arxiv/papers/1801/1801.00631.pdf. Accessed 1 February 2019
Meron T (2000) The Martens Clause, Principles of Humanity, and Dictates of Public Conscience. American Journal of International Law 94(1):78–89
MIT Technology Review (2018) Evolutionary algorithm outperforms deep-learning machines at video games. Emerging Technology from the arXiv. https://www.technologyreview.com/s/611568/evolutionary-algorithm-outperforms-deep-learning-machines-at-video-games/. Accessed 28 September 2018
Murray D (2016) Practitioners’ Guide to Human Rights Law in Armed Conflict. Oxford University Press, Oxford
Nguyen A, Clune J, Bengio Y, Dosovitskiy A, Yosinski J (2016) Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space. https://arxiv.org/pdf/1612.00005.pdf. Accessed 1 February 2019
Nguyen A, Yosinkski J, Clune J (2015) Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 427–436
Parasuraman R, Manzey DH (2010) Complacency and bias in human use of automation: an attentional integration. The Journal of the Human Factors and Ergonomics Society 52(3):381–410
Pike J (2011) Samsung Techwin SGR-A1 Sentry Guard Robot. Global Security. http://www.globalsecurity.org/military/world/rok/sgr-a1.htm. Accessed 1 February 2019
Press M (2017) Of Robots and Rules: Autonomous Weapon Systems in the Law of Armed Conflict. Georgetown Journal of International Law 48:1337–1366
Robillard M (2017) No Such Thing as Killer Robots. Journal of Applied Philosophy 35(4):705–717
Roff HM, Danks D (2018) “Trust but Verify”: The difficulty of trusting autonomous weapons systems. Journal of Military Ethics 17(1):2–20
Rogers SK, Colombi JM, Martin CE, Gainey JC, Fielding KH, Burns TJ, Ruck DW, Kabrisky M, Oxley M (1995) Neural networks for automatic target recognition. Neural Networks 8(7–8):1153–1184
Sandoz Y, Swinarski C, Zimmermann B (1987) Commentary on the Additional Protocols of 8 June 1977 to the Geneva Conventions of 12 August 1949. Martinus Nijhoff/ICRC, Geneva
Sassòli M (2014) Autonomous Weapons and International Humanitarian Law: Advantages, Open Technical Questions and Legal Issues to be Clarified. International Law Studies 90:308–340
Sauer F (2018) ICRAC statement on the human control of weapons systems at the August 2018 CCW GGE. https://www.icrac.net/icrac-statement-on-the-human-control-of-weapons-systems-at-the-august-2018-ccw-gge/. Accessed 1 February 2019
SBIR (2018) Automatic Target Recognition of Personnel and Vehicles from an Unmanned Aerial System Using Learning Algorithms. https://www.sbir.gov/sbirsearch/detail/1413823. Accessed 1 February 2019
Scharre P (2018) Army of None: Autonomous Weapons and the Future of War. W. W. Norton & Company, New York/London
Schmitt MN (2013) Autonomous Weapon Systems and International Humanitarian Law: A Reply to the Critics. Harvard National Security Journal Features. https://harvardnsj.org/wp-content/uploads/sites/13/2013/02/Schmitt-Autonomous-Weapon-Systems-and-IHL-Final.pdf. Accessed 1 February 2019
Schmitt MN, Thurnher JS (2013) “Out of the Loop”: Autonomous Weapon Systems and the Law of Armed Conflict. Harvard National Security Journal 4(2):231–281
Schmitt MN, Vihul L (2017) Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations, 2nd edn. Cambridge University Press, Cambridge
Sejnowski TJ (2018) The Deep Learning Revolution. MIT Press, Cambridge
Selbst A, Barocas S (2017) AI Now 2017 Report. AI Now Institute. https://ainowinstitute.org/AI_Now_2017_Report.pdf. Accessed 1 February 2019
Sharkey N (2016) Staying in the loop: human supervisory control of weapons. In: Bhuta N, Beck S, Geiss R, Liu H-Y, Kress C (eds) Autonomous Weapons Systems. Cambridge University Press, Cambridge, pp. 23–38
Sharkey N (2017) Why robots should not be delegated with the decision to kill. Connection Science 29(2):177–186
Sharkey N (2018) The impact of gender and race bias in AI. ICRC Humanitarian Law & Policy Blog. https://blogs.icrc.org/law-and-policy/2018/08/28/impact-gender-race-bias-ai/. Accessed 1 February 2019
Silver D, Hubert T, Schrittwieser J, Antonoglou I, Lai M, Guez A, Lanctot M, Sifre L, Kumaran D, Graepel T, Lillicrap T, Simonyan K, Hassabis D (2018) A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 362(6419):1140–1144
Silver D, Schrittwieser J, Simonyan K, Antonoglou I, Huang A, Guez A, Hubert T, Baker L, Lai M, Bolton A, Chen Y, Lillicrap T, Hui F, Sifre L, van den Driessche G, Graepel T, Hassabis D (2017) Mastering the game of Go without human knowledge. Nature 550:354–359
Skitka L, Mosier K, Burdick M (1999) Does automation bias decision-making? International Journal of Human-Computer Studies 51(5):991–1006
Skitka LJ, Mosier KL, Burdick M (2000a) Accountability and automation bias. International Journal of Human-Computer Studies 52(4):707–717
Skitka LJ, Mosier KL, Burdick M, Rosenblatt B (2000b) Automation bias and errors: are crews better than individuals? The International Journal of Aviation Psychology 10(1):85–97
Sparrow R (2007) Killer Robots. Journal of Applied Philosophy 24(1):62–77
Srinivasan V (2016) Context, Language, and Reasoning in AI: Three Key Challenges. https://www.technologyreview.com/s/602658/context-language-and-reasoning-in-ai-three-key-challenges/. Accessed 1 February 2019
Su J, Vargas DV, Sakurai K (2018) One Pixel Attack for Fooling Deep Neural Networks. https://arxiv.org/pdf/1710.08864.pdf. Accessed 1 February 2019
Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I, Fergus R (2014) Intriguing properties of neural networks. https://arxiv.org/pdf/1312.6199.pdf. Accessed 1 February 2019
United Kingdom (2002) Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), 8 June 1977. Reservation/Declaration upon ratification. ICRC IHL Database. https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Notification.xsp?action=openDocument&documentId=0A9E03F0F2EE757CC1256402003FB6D2. Accessed 24 April 2019
United Kingdom (2016) Statement To The Informal Meeting Of Experts On Lethal Autonomous Weapons Systems, 11–15 April 2016 (Challenges To IHL). http://www.unog.ch/80256EDD006B8954/(httpAssets)/37B0481990BC31DAC1257F940053D2AE/$file/2016_LAWS+MX_ChallengestoIHL_Statements_United+Kingdom.pdf. Accessed 1 February 2019
UK Ministry of Defence (2004) Manual of the Law of Armed Conflict. Oxford University Press, Oxford
UK Ministry of Defence (2009) Campaign Execution: Joint Doctrine Publication 3-00, 3rd edn. Development, Concepts and Doctrine Centre, Shrivenham
UK Ministry of Defence (2016) UK Weapons Reviews. Development, Concepts and Doctrine Centre, Shrivenham
UK Ministry of Defence (2018) Human-Machine Teaming: Joint Concept Note 1/18. Development, Concepts and Doctrine Centre, Shrivenham
UN General Assembly (2005) Basic Principles and Guidelines on the Right to a Remedy and Reparation for Victims of Gross Violations of International Human Rights Law and Serious Violations of International Humanitarian Law, UN Doc. A/Res/60/147
UNOG (2018) Background on Lethal Autonomous Weapons Systems in the CCW. https://www.unog.ch/80256EE600585943/(httpPages)/8FA3C2562A60FF81C1257CE600393DF6?OpenDocument. Accessed 1 February 2019
US Department of Defense (2012) Directive 3000.09: Autonomy in Weapon Systems. https://www.hsdl.org/?view&did=726163. Accessed 24 April 2019
US Department of Defense (2016) Law of War Manual 2015, Updated December 2016. https://dod.defense.gov/Portals/1/Documents/pubs/DoD%20Law%20of%20War%20Manual%20-%20June%202015%20Updated%20Dec%202016.pdf?ver=2016-12-13-172036-190. Accessed 24 April 2019
US Joint Chiefs of Staff (2013) Joint Targeting: Joint Publication 3-60. https://www.justsecurity.org/wp-content/uploads/2015/06/Joint_Chiefs-Joint_Targeting_20130131.pdf. Accessed 24 April 2019
Wagner M (2014) The Dehumanization of International Humanitarian Law: Legal, Ethical, and Political Implications of Autonomous Weapon Systems. Vanderbilt Journal of Transnational Law 47:1371–1424
Wilson DG, Cussat-Blanc S, Luga H, Miller JF (2018) Evolving simple programs for playing Atari games. https://arxiv.org/pdf/1806.05695.pdf. Accessed 1 February 2019
Wood M (2018) The Evolution and Identification of the Customary International Law of Armed Conflict. Vanderbilt Journal of Transnational Law 51:727–736
Yudkowsky E (2018) Artificial Intelligence as a Positive and Negative Factor in Global Risk. Machine Intelligence Research Institute. http://intelligence.org/files/AIPosNegFactor.pdf. Accessed 1 February 2019
Case Law
ICC, The Prosecutor v Jean-Pierre Bemba Gombo, Decision Pursuant to Article 61(7)(a) and (b) of the Rome Statute on the Charges of the Prosecutor Against Jean-Pierre Bemba Gombo, 15 June 2009, Case No. ICC-01/05-01/08-424
ICJ, Legality of the Threat or Use of Nuclear Weapons, Advisory Opinion, 8 July 1996, [1996] ICJ Rep 226
ICTR, Prosecutor v Clément Kayishema and Obed Ruzindana, Judgement, 1 June 2001, Case No. ICTR-95-1-A
ICTY, Prosecutor v Zdravko Mucić, Judgement, 20 February 2001, Case No. IT-96-21-A
ICTY, Prosecutor v Tihomir Blaškić, Judgement, 29 July 2004, Case No IT-95-14-A
ICTY, Prosecutor v Stanislav Galić, Judgment and Opinion, 5 December 2003, Case No. IT-98-29-T
ICTY, Prosecutor v Momčilo Krajišnik, Judgement, 17 March 2009, Case No. IT-00-39-A
ICTY, Prosecutor v Dragomir Milošević, Judgement, 12 December 2007, Case No. IT-98-29/1-T
Treaties
Convention (IV) respecting the Laws and Customs of War on Land and its annex: Regulations concerning the Laws and Customs of War on Land, opened for signature 18 October 1907, International Peace Conference, The Hague, Official Record 631 (entered into force 26 January 1910)
Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, opened for signature 10 April 1981, 1342 UNTS 137 (entered into force 2 December 1983)
Convention on the prohibition of military or any hostile use of environmental modification techniques, opened for signature 18 May 1977, 1108 UNTS 151 (entered into force 5 October 1978)
Geneva Convention (I) for the Amelioration of the Condition of the Wounded and Sick in Armed Forces in the Field, opened for signature 12 August 1949, 75 UNTS 31 (entered into force 21 October 1950)
Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), opened for signature 8 June 1977, 1125 UNTS 3 (entered into force 7 December 1978)
Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of Non-International Armed Conflicts (Protocol II), opened for signature 8 June 1977, 1125 UNTS 609 (entered into force 7 December 1978)
Rome Statute of the International Criminal Court, opened for signature 17 July 1998, 2187 UNTS 3 (entered into force 1 July 2002)
United Nations Convention on the Law of the Sea, opened for signature on 10 December 1982, 1833 UNTS 3 (entered into force 16 November 1994)
About this chapter
Cite this chapter
Hughes, J.G. (2020). The Law of Armed Conflict Issues Created by Programming Automatic Target Recognition Systems Using Deep Learning Methods. In: Gill, T., Geiß, R., Krieger, H., Paulussen, C. (eds) Yearbook of International Humanitarian Law, Volume 21 (2018). Yearbook of International Humanitarian Law, vol 21. T.M.C. Asser Press, The Hague. https://doi.org/10.1007/978-94-6265-343-6_4
DOI: https://doi.org/10.1007/978-94-6265-343-6_4
Print ISBN: 978-94-6265-342-9
Online ISBN: 978-94-6265-343-6