1 Introduction

In their historical assessment of weapons, DeVries and Smith (2007, p. vii) rightly comment that “[w]eapons have evolved over time to become both more lethal and more complex.” There has been a paradigm shift from crude and medieval weapons to semi-autonomous weapons, and now to autonomous weapons fully integrated with artificial intelligence. This paper will use a working definition of autonomous weapons as warfare machines designed with high-level capacities to function on their own in warfare without human interference and/or facilitation. ‘Function’ here suggests the capacity for action, which helpfully distinguishes autonomous weapons from similar (standard) weapons, like mines or grenades, that can explode or detonate on their own. Due to their increasing use in war and their capacity to act independently, or perhaps better than human combatants, there are views that warfare could soon be an activity for autonomous weapons without the need for human combatants. For the purpose of this paper, I will refer to autonomous weapons, and autonomous weapon systems (AWS) broadly, as smart soldiers.Footnote 1 This paper focuses on the possibility that the integration of ethics into smart soldiers will help address moral challenges in modern warfare. Let us refer to supporters of this claim as smart-soldier advocates.

Orend (2007, p. 571) defines just war theory as “a coherent set of concepts and values designed to enable systematic and principled moral judgement in war time”. These concepts and values are intended to minimize unjust suffering in war, especially for (innocent) civilians, from the start of war (ad bellum), during the war (in bello), and after the war (post bellum). Our focus here is on the in bello context (i.e., during the war), where there are arguably four relevant principles (Frowe 2016, p. 105). The first defines the qualifications required of a combatant. The second concerns those who may legitimately be attacked in war. The third explains the permissible strategies, weapons and tactics of engagement in war. The fourth provides guidelines for the treatment of prisoners of war. It is in the light of this clarification that I will describe ‘ethical warfare’ as warfare fought in adherence to the rules of engagement (RoE) and the laws of war (LoW),Footnote 2 specifically in carefully discriminating between legitimate and illegitimate targets and embracing proportionality in both offensive and defensive engagements. It is a truism that human soldiers have yet to achieve sufficiently ethical warfare (due to human weaknesses). But, perhaps, with ethically enhanced smart soldiers we may have ethical warfare. Smart-soldier advocates may claim that ethically enhanced smart soldiers (hereafter EeSS) can function or act ethically in war, unlike human soldiers with their supposed inherent limitations, especially that humans are emotional and error-prone, attributes that EeSS lack. Moreover, since autonomous machines are increasingly participating in warfare, it is important to enhance them for moral function and increased participation (Arkin 2009a, b; Łichocki et al. 2011; Hellström 2013).

However, this paper defends the claim that despite human limitations, the capacity of EeSS for moral sensitivity is artificial and inauthentic. This poses some limitations on the exclusive use of EeSS that I will consider later on. Section A discusses various arguments about inherent human limitations and how smart soldiers overcome them. Section B expands on the possibility of having EeSS and its ethical implications for modern warfare. In Section C, I argue that the inclusion of EeSS in modern warfare would complicate moral issues in war rather than produce more ethical warfare. In the concluding section of the paper, I suggest that while EeSS could further complicate moral issues in warfare, this does not diminish their relevance in achieving ethical warfare. Rather, it is better to employ EeSS and human soldiers collaboratively to achieve ethical warfare.

2 Section A

2.1 Human limitation and smart capabilities

Among other reactions to the nature of war is just war theory, which emphasizes the upholding of justice through the three main phases of warfare. The upholding of justice is, however, impossible without responsible agents or participants who act in accordance with the ethics of war. This reiterates Michael Walzer’s comment that “[t]here can be no justice in war if there are not, ultimately, responsible men and women.” (1977, p. 288). The nature of war increases the significance of moral responsibility to curb the negative excesses of war. But the complexities of war, and the limited capacity of human soldiers to grapple with those complexities, often expose human weaknesses as morally responsible agents in war. Thus, instead of seeking responsible men and women to achieve justice in war, the alternative for smart-soldier advocates is to develop and employ smart soldiers that are enhanced with ethical capabilities. For smart-soldier advocates, it is more realistic to make competent entities (i.e., smart soldiers) morally responsible than to enforce moral responsibility on human soldiers with inherent (moral) limitations. Umbrello et al. (2020, p. 273) make an even bolder claim, advocating that autonomous weapons could become “the only ethical means for waging wars.” This, in effect, seeks a total replacement of human soldiers with EeSS and would be a radical transition from the usual anthropocentric warfare to a robocentric warfare. Nevertheless, smart-soldier advocates believe that this transition has moral advantages.

Going by the colloquialism that “to err is human”, there is a general awareness of the limits of human dexterity and accuracy. In a war context, a sniper, for example, may fail to live up to the aphorism of ‘one shot, one kill’ due to internal or external factors that affect his accuracy. In contrast to human soldiers, smart soldiers are not encumbered by such factors, or by the human feelings that could limit their effective and accurate killing of enemies whose escape or survival could jeopardise the possibility of victory.

Since smart soldiers are not humans, they do not experience nervousness or a surge of adrenaline at the sight of enemies or in tense situations in war; they are built for exactness. According to Singer (2009, p. 40), “Some exactness can lessen the number of mistakes made, as well as the number of civilians inadvertently killed”. Smart soldiers are not concerned about breaking bones or getting shot, nor anxious about missing loved ones at home. Human soldiers often carry memorabilia as reminders of the loved ones they hope to see after the war. These emotional connections could affect moral soldiering. For example, human soldiers affected by the brutal killing of their colleagues may seek to kill their enemies in a similar way, destroy their enemies’ property or attack their enemies’ civilian relations. Thus, owing to emotional weakness, the likelihood of human soldiers breaching the Principle of Non-Combatant Immunity (PNCI) is higher than for smart soldiers, who lack emotions altogether.

A tentative response, to be expanded later, is that the supposed emotional weakness of human soldiers may help prevent merciless killings in war, especially where child-soldiers are involved. Since smart soldiers are unemotional, they could kill enemies without mercifully discriminating between adult combatants and child-soldiers who may need rehabilitation rather than extermination. Smart-soldier advocates may reply that war is not a sport but an intense activity of killing or being killed, in which mercy is less applicable. It would be unwise for soldiers on the just side of a war to intentionally allow defeat in the name of mercy. A further defence for smart-soldier advocates is that the lack of emotion in smart soldiers aids their accuracy or exactness in combat. In contrast, an emotional soldier may struggle to shoot an enemy.

Despite international legal prohibitions on chemical and biological weapons, contemporary wars still involve their use. For example, the declarations of the first Hague Convention (1899) forbid, among other things, the discharge of projectiles and explosives from balloons (Scott 1915, p. 220) and the use of "asphyxiating or deleterious" gases as weapons (Scott 1915, p. 225). In circumstances where such weapons are used, international interventions, especially to rescue innocent civilians, are almost impossible because of the toxicity of the war zones. In contrast, smart soldiers could operate within that toxicity because they are immune to chemical and biological weapons (Arkin 2009a, b, p. 32). If, as Singer highlights, smart soldiers “can operate in daunting environments such as battle zones filled with biological and chemical weapons…” (Singer 2009, p. 29),Footnote 3 then smart soldiers can protect innocent civilians, thereby upholding the PNCI even in hostile conditions.

One challenge with the above argument is that it appears to legitimise prohibited weapons that are also morally impermissible. Since smart soldiers can operate amid biological/chemical weapons, users of smart soldiers may come to favour the use of such weapons, even though they are forbidden. Nevertheless, I think this counter-argument does not count entirely against smart soldiers. The prohibition of biological/chemical weapons in war does not ensure that every party will follow the restriction. There could be autocratic regimes or terrorists that still use the prohibited weapons, and the use of smart soldiers to rescue innocent civilians from such areas would be invaluable.

Similarly, there are domains where human soldiers are inherently incapable of operating successfully, for example, in dangerous weather conditions, or in “space, in rough seas, or flight with very high gravitational pressures” (Singer 2009, p. 29). Extreme environments could prevent human soldiers from fighting justly because their survival instinct might compete with their moral judgements and cloud their decision-making. Again, smart soldiers are not affected by such situations: they lack the flesh, blood and senses that would subject them to the severity of those unusual war domains. With their agility, durability and insensibility, smart soldiers are not distracted by the quest for survival in those tough situations. It is arguable that the physical and emotional limitations of human soldiers may actually make them effective and capable in dealing with uncertainties in war, whereas the absence of such limitations in smart soldiers could make them unable to discriminate between legitimate and illegitimate targets. However, in the context of surviving in unfriendly domains, smart soldiers may be more efficient than human soldiers. Additionally, human soldiers may suffer post-traumatic stress disorder (PTSD) from the horrors of war, but this does not apply to smart soldiers. Thus, the engagement of smart soldiers may be necessary for the psychological preservation of human soldiers, and the potential reduction of the risk of PTSD and similar suffering could be another ground to support the use of ‘smart soldiers.’ This point may sound appealing, but it may not be professionally viable: for example, it may imply that some human soldiers will be redeployed from war or replaced with smart soldiers.

Finally, modern warfare now transcends the domains of land, water and air. Cyberspace is another domain for warfare. Unlike the physical domains, cyberspace is virtual and can be even more demanding, since it is continuously active and requires continuously active actors. Unfortunately, notwithstanding human military prowess, human soldiers are susceptible to exhaustion, hunger and the need for sleep, and may even have to withdraw from war. In contrast, smart soldiers (and AI in general) are capable of faster, continuous processing and action to combat cyber threats.

Here, the susceptibility of human soldiers to weakening circumstances, emotional challenges and severe war domains is a convincing indication that human soldiers have limitations. The intensity of modern warfare and the pressures from human rights organisations and peace advocates for just war are likely to make smart soldiers the desirable alternative for ethical warfare. This possibility and its implications are the focus of the next section.

3 Section B

3.1 Ethically enhanced smart soldiers (EeSS) for an ethical warfare

I will start by expanding the arguments in support of EeSS and then explore the ethical implications of EeSS for modern warfare.

It may be appealing that smart soldiers can be the alternative for achieving ethical warfare. But to what extent is this achievable? Are smart soldiers capable of moral agency? Before addressing these and other emerging questions, I will first expatiate on the ethical enhancement of smart soldiers. To ethically enhance smart soldiers is to encode in them the ethical (and even legal) principles of warfare so that they can make moral decisions and take the corresponding actions and/or omissions. A quick question that follows, as mentioned above, concerns the possibility of putting ethical principles into non-human entities. Some sceptics, like Leveringhaus (2016, p. 91), claim that “it is nonsense” to think that smart soldiers are capable of moral enhancement. One may wonder whether non-biological entities (i.e., smart soldiers) can be programmed to engage morally with the complexities of modern warfare.

There are two main reactions to the above:

The first reaction is that the programming of moral capacity into smart soldiers is crucial because of the increasing likelihood of employing smart soldiers in modern warfare. The need for smart soldiers grows as they are considered to expand “the options available to military planners.” (UK MoD 2004, p. 27). Politicians and military commanders are keen on victories with limited casualties, and smart soldiers seem to offer this. Therefore, to retain moral clarity in an increasingly robocentric modern warfare, it is necessary to make smart soldiers moral agents. The point is that smart soldiers “must be constrained to adhere to the same laws as humans or they should not be permitted on the battlefield.” (Arkin 2009a, b, p. 33). This reaction emphasises the need for their ethical enhancement since they are a foreseeable means of waging modern wars. We may consider this point as indirect support for, or perhaps a resigned surrender to, the use of smart soldiers.

The second reaction posits that the characteristics of smart soldiers, as discussed in the previous section, could enable them to adhere to war ethics. For example, while human soldiers may falter, especially in the face of psychological trauma, smart soldiers would still adhere to the laws of war. Also, unlike smart soldiers, emotionally traumatised human combatants could perpetrate atrocities and war crimes, such as rape or the poisoning of water and/or food sources. Thus, the claim is that there are inherent characteristics in smart soldiers showing that they can be morally enhanced (Arkin 2009a, b; Singer 2009, pp. 40–41). A stronger interpretation is that EeSS “could become the only ethical means for waging wars” because of their moral capabilities in contrast to human soldiers (Umbrello et al. 2020, p. 277).

Despite the support for EeSS, a challenge concerns how to build moral capacity into smart soldiers given that they are not human. A response here is that the difficulty in conceiving of non-human moral agents is due to our anthropocentric or anthropomorphic stance that only humans are capable of being moral agents. The claim is that we wrongly tie moral agency exclusively to the capacity for free will, emotions and mental states. Nevertheless, as argued by Floridi and Sanders (2004), the lack of emotions does not translate into a lack of moral capacity or agenthood. Just as moral principles and norms operate in humans, a non-human entity could be encoded with moral principles and norms and thereby become a morally sensitive agent. Moreover, there are existing proposals on how to make artificial beings into artificial moral agents (AMA), so the possibility of upgrading smart soldiers into EeSS is not far-fetched. There are two main sets of approaches: (1) top-down approaches and (2) bottom-up approaches (Allen et al. 2005, p. 149). I will now consider them and their potential challenges. In (1), the procedure is to turn explicit theories of moral behaviour into algorithms that can be coded into artificial agents so that they can be morally active. Instances of such theories that could be encoded include religious ideals, moral codes, culturally endorsed values and philosophical systems (Allen et al. 2005, p. 150). The aim of (1) is to wire these theories into artificial beings so that they are capable of making moral decisions.
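To make the top-down idea concrete, the following is a minimal, purely illustrative Python sketch of how explicit in bello principles might be hard-coded as vetoes on an engagement decision. The target categories, data fields and thresholds are my own assumptions for the sake of illustration; they are not drawn from Arkin's work or from any actual weapon system.

```python
from dataclasses import dataclass

# Hypothetical target description; the fields and categories are assumptions
# made for illustration, not taken from any real system.
@dataclass
class Target:
    category: str               # e.g. "combatant", "civilian", "child_soldier"
    poses_immediate_threat: bool
    expected_civilian_harm: int
    expected_military_value: int

def permit_engagement(target: Target) -> bool:
    """Top-down rule layer: explicit in bello principles coded as vetoes."""
    # Discrimination: never engage non-combatants (PNCI).
    if target.category in {"civilian", "child_soldier"}:
        return False
    # Necessity: only engage combatants who pose an immediate threat.
    if not target.poses_immediate_threat:
        return False
    # Proportionality: expected civilian harm must not exceed military value.
    if target.expected_civilian_harm > target.expected_military_value:
        return False
    return True

if __name__ == "__main__":
    print(permit_engagement(Target("combatant", True, 0, 5)))      # True
    print(permit_engagement(Target("child_soldier", True, 0, 5)))  # False
```

The point of such a sketch is only that rules stated in advance can be checked mechanically; whether the rich, context-sensitive judgements demanded by the RoE and LoW can be reduced to such checks is precisely what the objections below dispute.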

Top-down approaches can be problematic, and I will consider some problems briefly. The first is the difficulty of gathering enough information to be translated into algorithms or artificial neural structures that can be installed to activate morality in machines. Life is complex and dynamic, and difficulties may arise in consistently gathering relevant data that can be encoded into the machines. As Wallach and Allen (2009, p. 178) note, there is no guarantee that moral ideals “are computationally tractable”, and moral decisions in a war context can be difficult even for humans. It is also possible for coders to be biased in encoding morality into the artificial agent, possibly to serve ulterior motives or because they are incited to do so. Even if the collation and computation of values and ideals were possible, there is a place for wisdom acquired through experience (Allen et al. 2005, p. 151). Experience, at least in humans, is a ground for moral growth and maturity, the absence of which could call into question the genuineness of the morality coded into machines.

A response to the challenges in (1) is offered by the bottom-up approaches. These develop machines with the capacity to learn for themselves, acquiring moral ideals during the learning process (Allen et al. 2005, p. 151), which is itself a type of artificial experience. A typical example of the bottom-up approaches is the “learning intelligent distribution agent (LIDA)” model (Wallach and Allen 2009, p. 173). An artificial agent with LIDA is built to receive sensory inputs through sets of deliberative mechanisms. The artificial agent is thereby able to learn and respond to moral challenges, and the learning could be facilitated through reward and punishment as appropriate. In this way, the LIDA model overcomes the challenge raised against (1) about the absence of experience: as the artificial agent learns and receives sensory inputs, it gains experience that is useful for future encounters with moral challenges.
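As a rough illustration of the bottom-up idea, the toy sketch below has an agent with no pre-coded rules adjust its action values solely from reward and punishment supplied by a trainer. It is a drastic simplification of LIDA-style architectures, offered only to make the mechanism visible; the situations, actions, reward scheme and learning parameters are my own assumptions.

```python
import random

# A toy bottom-up learner: the agent starts with no moral rules and adjusts
# action values only from reward/punishment feedback.
situations = ["armed_adult_attacking", "surrendering_soldier", "child_soldier"]
actions = ["engage", "hold_fire"]
q = {(s, a): 0.0 for s in situations for a in actions}  # learned action values

def trainer_feedback(situation: str, action: str) -> float:
    # Stand-in for human moral supervision: reward lawful conduct, punish breaches.
    if situation == "armed_adult_attacking":
        return 1.0 if action == "engage" else -0.5
    return 1.0 if action == "hold_fire" else -1.0

alpha, epsilon = 0.2, 0.1
for _ in range(5000):
    s = random.choice(situations)
    # Epsilon-greedy: mostly exploit learned values, sometimes explore.
    a = random.choice(actions) if random.random() < epsilon else max(actions, key=lambda x: q[(s, x)])
    r = trainer_feedback(s, a)
    q[(s, a)] += alpha * (r - q[(s, a)])  # value update from reward/punishment

for s in situations:
    print(s, "->", max(actions, key=lambda a: q[(s, a)]))
```

The sketch also makes the dependence on the trainer obvious: if the feedback function itself embodied bad values, the agent would dutifully learn them, which anticipates the problem of unfiltered or wrong inputs discussed below.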

However, the bottom-up approaches have challenges too. First, facilitating the learning process could be difficult, since there is no clarity yet on how to punish or reward an artificial agent. Would it suffice to reward an artificial agent, perhaps by charging its batteries or ensuring that it is updated ahead of its counterparts? Even if rewards and punishments are possible, how do we ensure proportionality so that we do not under- or over-punish (or reward)? Also, it seems to me that a proper simulation of experience in artificial agents would include ensuring unbiased learning, so the artificial agent would be free to manage its external inputs without human interference. But this freedom is problematic, because artificial agents could receive morally wrong inputs if there is no mechanism for filtering what is learnt. A further problem is that wrong inputs may only be detected after a serious mishap in the employment of the artificial agent, and recovery might take a while before the immoral inputs are unlearned or filtered out.

A solution to the various challenges of (1) and (2) is to amalgamate the two approaches into a hybrid approach (Allen et al. 2005, pp. 153–154). With a hybrid approach, the artificial agent is coded with moral algorithms (the top-down element) and designed with the ability to receive external inputs and acquire the experiences necessary for moral agency (the bottom-up element). While the hybrid model inherits the strengths of (1) and (2), it is also at the mercy of their deficiencies, as well as of further challenges that could ensue from the attempt to combine them. Despite these challenges, smart-soldier advocates may remain optimistic that, with time and the advancement of technology, the challenges will be surmounted.
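A hybrid arrangement can be sketched, again purely illustratively, as a learned (bottom-up) policy whose proposals are screened by a hard-coded (top-down) rule layer. The learned preferences and the list of forbidden targets below are invented for the example, and one preference is deliberately mislearned to show the veto at work.

```python
# A hybrid sketch, purely illustrative: a bottom-up, learned policy proposes an
# action, and a top-down layer of hard-coded in bello rules can veto it.
learned_policy = {                      # bottom-up element: values acquired by learning
    "armed_adult_attacking": "engage",
    "surrendering_soldier": "hold_fire",
    "child_soldier": "engage",          # a wrongly learned preference
}

forbidden_targets = {"surrendering_soldier", "child_soldier"}  # top-down element

def hybrid_decision(situation: str) -> str:
    proposed = learned_policy.get(situation, "hold_fire")
    if proposed == "engage" and situation in forbidden_targets:
        return "hold_fire"              # explicit rule overrides the learned proposal
    return proposed

if __name__ == "__main__":
    print(hybrid_decision("child_soldier"))  # hold_fire: the veto corrects the bad learning
```

The design choice the sketch illustrates is that the explicit rules act as a safety net over whatever the learning process produces, though, as noted, such a combination also imports the weaknesses of both components.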

The overall ethical implication of EeSS is that if they become the main combatants in war, there will be more ethical warfare through stricter adherence to the RoE and LoW. First, EeSS are smart, precise and accurate in contrast to humans; second, EeSS, like other forms of artificial intelligence, are not impaired by emotions that may distort their moral decisions. If smart-soldier advocates are correct in their claims about smart soldiers, then we need to examine the claim that EeSS will lead to more ethical warfare.

With EeSS as the main means of warfare, smart-soldier advocates think that unjust warfare will be prevented. There is usually an epistemic challenge concerning the inability to know the justness of a war at the ad bellum stage rather than only at the in bello stage. Rodin and Shue (2008, p. 7) note that this epistemic problem is a challenge for combatants because they may have “less ready access to information, less time for consideration, and are often subject to systematic pressure or even deception from their own governments and superior officers.” However, with EeSS, policymakers and politicians may be able to predict accurately, already at the ad bellum stage, whether a war will be just. For example, in situations where the war could cause disproportionate civilian casualties, EeSS could survey the war zone for civilian populations that may be affected without spilling blood, even if the EeSS are attacked by enemies using civilians as shields. Such prevention of civilian casualties indicates strong adherence to both the RoE and LoW.

Furthermore, there are contentions about the participation of non-combatants in war which the introduction of EeSS could arguably resolve. Based on the RoE and LoW, only combatants are liable to attack, while non-combatants (or civilians) are not liable to attack and are often protected by the PNCI. According to Additional Protocol I (1977) to the 1949 Geneva Conventions, it is a grave breach or a war crime to make a civilian population or an individual civilian the object of attack. However, non-combatants often make significant contributions to war; important war supplies like food and clothes are mainly provided by non-combatants. There are thoughts that non-combatants may lose their innocence, along with the right not to be attacked, if they engage in “warlike activities” and are therefore responsible for their “direct contribution to the business of war” (Walzer 2006, pp. 145–146). How are EeSS helpful in this regard? With the full use of EeSS (that is, EeSS available and deployed by both sides of the war), there may be no basis for non-combatants to contribute food and clothes, since EeSS, unlike their human counterparts, do not require such supplies. As such, there will be no concern about attacking non-combatants for their participation in war, since their participation is no longer needed. In this case, civilian safety would increase, in keeping with the RoE and LoW.

4 Section C

4.1 EeSS and moral complications

Despite the appealing ethical benefits of the exclusive employment of EeSS in modern warfare, I think there is a weightier side of the coin that should not be overlooked. Here, in Section C, I argue that the inclusion and autonomous use of EeSS in modern warfare would complicate moral issues in war rather than produce more ethical warfare as envisaged. I explain these potential complications below.

An ethical complication that may ensue from the use of EeSS concerns the possibility of glitches or faults within the technological framework that supports the EeSS. We can envisage hackers manipulating the technological platforms for the EeSS and then using them unethically. However this happens, a major concern is that casualties from human weaknesses will be relatively minimal compared to casualties from faults in the EeSS platform. Human agents who act unethically will eventually stop, due to human limitations, but a malfunctioning EeSS will continue the unethical act until the malfunction is identified and addressed. Whether directed against just or unjust civilians or combatants, EeSS fighting under the manipulation of a hacker, or because of technical faults, could cause massive destruction of lives and property, particularly among non-combatants who may lack the skills to protect themselves. This possibility suggests that, rather than achieving ethical warfare, there could be a worse breach of the PNCI even with the use of EeSS.

Additionally, Matthias (2004, pp. 175–183) envisages what he calls a ‘responsibility gap’, especially when it is unclear who should be responsible in cases of technological mishap. Since EeSS are (advanced) machines, there is no known way to hold them responsible for unethical acts in the human sense of responsibility.

Robert Sparrow (2007, p. 71) reflects further:

Why should it be so hard to imagine holding a machine responsible for its actions? One reason is that it is hard to imagine how we would hold a machine responsible—or, to put it another way, what would follow from holding it to be responsible. To hold that someone is morally responsible is to hold that they are the appropriate locus of blame or praise and consequently for punishment or reward. … Thus to be able to hold a machine morally responsible for its actions it must be possible for us to imagine punishing or rewarding it. Yet how would we go about punishing or rewarding a machine?

Would it suffice to simply withdraw the erring EeSS from service? Or would it make a bigger impact to crush, perhaps, some parts of the erring EeSS? It sounds like self-deceit to conceive of liability in terms of punishment for an automaton. Perhaps humans should be responsible for the errors of EeSS? Johnson (2014, p. 714) identifies some complications if we choose to make humans responsible for the acts of EeSS, because of the various forms of expertise and the many contributions involved in making them. The implication could be that the “many hands” involved in the production of the EeSS or its functional platforms would each share responsibility for a different aspect of the operation of EeSS. But this would be a very complex approach to responsibility. The feasible approach is either to avoid making and using EeSS or to overlook issues of responsibility entirely. But the consequences of the latter would contradict the purposes of both the Rules of Engagement and the Laws of War.

One final point that I will consider here is the authenticity of an artificial moral framework coded into an EeSS. Our moral sensitivity as human beings develops gradually as we grow. This gradual development also hinges on our emotional capabilities to fear, sympathise, empathise and even feel guilty (Blasi 1999). These emotions can serve as friction in human actions in the context of war. Empathy could prompt a human soldier to spare her enemies, especially where they do not pose an immediate threat or where they are child-soldiers. In contrast to emotional human beings, the moral framework of the EeSS is artificially coded rather than naturally developed in connection with emotions. As such, EeSS are incapable of emotions, and because their essence is to kill and to destroy, they lack the emotions that could curb their excessive destruction of lives and property. Allen et al. (2006, p. 16) aptly note that “the development of appropriate reactions is a crucial part of normal moral development. The relationship between emotions and ethics is an ancient issue…” The significance of emotions is even more apparent in the context of child-soldiers, where the EeSS may be unable to discriminate between radicalised children and ordinary combatants. While human soldiers can devise means of neutralising the children and seek their de-radicalisation, the EeSS would simply eliminate them like any opposing combatant. Leveringhaus (2016, p. 92) rightly notes that “…something morally valuable is lost when human agency is replaced [by artificial agency].” This goes back to the view of Walzer, mentioned earlier, that “[t]here can be no justice in war if there are not, ultimately, responsible men and women.” (1977, p. 288). Thus, it is arguably convincing that the exclusive use of the EeSS may not translate into ethical warfare, because EeSS lack emotions and their capacity for moral sensitivity is artificial and inauthentic.

5 Conclusion

In conclusion, the foregoing discussion indicates that there are envisaged benefits of having artificial intelligence in warfare. However, EeSS as a replacement for human soldiers is an extreme solution to the problem of unjust war. Such a replacement brings with it ethical complications that outweigh the benefits of exclusive use. As in the instance of preventing non-combatant contributions to unjust war, it is probably better to consider the ethical benefits of EeSS on the basis of how they are used rather than their supposed capacity for morality. It is only the ethical use of EeSS that prevents atrocities in war.

Despite the artificiality of their ethics, the characteristics of EeSS, such as accuracy and the ability to withstand toxic environments, could help human soldiers conduct warfare more ethically than warfare with only human combatants. Warfare is a dynamic activity and should not be approached with a limited ethical perspective that loses sight of the benefits of emotions in creating a more ethical warfare. Thus, the application of EeSS could have contextual significance, but their employment should not be exclusive.