
Design for Values in the Armed Forces: Nonlethal Weapons and Military Robots

  • Lambèr Royakkers
  • Sjef Orbons
Living reference work entry

Abstract

Since the end of the Cold War, Western military forces have frequently become involved in missions to stabilize conflicts around the world. In those conflicts, the military forces increasingly found themselves operating among the people. The emerging need in military interventions to prevent casualties translated into a range of value-driven military technological developments, such as military robots and nonlethal weapons (NLWs). NLWs are characterized by a certain technological and operational design “window” of permissible physiological effect, defined at each end by values: one value is a controlled physiological impact to enforce compliance by targeted individuals, and the other value is the prevention of inflicting serious harm or fatality. Robot drones, mine detectors, and sensing devices are employed on the battlefield but are operated at a safe distance by humans. Their deployment serves to decrease casualties and traumatic stress among the forces’ own military personnel and seeks to enhance efficiency and tactical and operational superiority.

This chapter points out that societal and political implications of designing for values in the military domain are governed by a fundamentally different scheme than is the case in the civil domain. The practical cases examined illustrate how values incorporated in military concept and system designs are exposed to counteraction and annihilation when deployed in real-world operational missions.

Keywords

Nonlethal weapons · Military robots · Military ethics · Designing for values · Value sensitive design

Introduction

The end of the Cold War marked the beginning of a new era in the international security arena. In the decades before, the East-West confrontation, with its epicenter in Central Europe, had dominated military thinking and planning, and the military balance was predominantly built on the mutual large-scale destruction potential of military structures and arsenals.

The emergence of enabling technologies during the 1980s, in particular in the area of information, communication, and computing, introduced military precision-strike capabilities, implemented as precision-guided munitions and missiles (PGMs) capable of autonomously finding and striking targets at long range. Technological advances in the area of information and communication also enabled the introduction of so-called network-enabled capabilities (NEC). In NEC, military command and control systems are integrated with a variety of sensor platforms collecting military data and with PGMs. Such a “system of systems” provided for a dramatic increase in both the effectiveness and the efficiency of warfare. Rather than introducing fundamentally new technologies, military innovation focused on system technology, to optimally exploit and combine the potential of emerging civil technologies in novel military system concepts. Fielding such military system concepts also entailed an increase in the automation of military tasks and functionalities. This led to a shift in responsibilities in military decision-making processes.

The technological advances in the military domain were first applied at large scale during the First Gulf War (1990–1991). The operations in Iraq not only demonstrated the effectiveness of long-range precision attacks, but these new military-technological capabilities also intrinsically embodied the value of drastically reducing the number of military casualties on the side of the intervention forces. Thus, the design and fielding of a new family of long-range PGMs had, alongside the significant increase in military effectiveness, served a key value: the protection of the lives of troops deployed in expeditionary military missions.1

Traditionally, in the military debate the above innovations are often referred to as the Revolution in Military Affairs (RMA): the emerging technologies provided for more precision and discrimination in the application of military firepower, causing less collateral damage and fewer own casualties (see, e.g., Freedman 1998; Latham 1999). They support the two most important values on which the jus in bello – the legal standards that apply during the fight and govern the way in which war is waged – is based: discrimination and proportionality. The principle of proportionality states that the applied force must be proportional to legitimate military goals. Civilian casualties are acceptable as collateral damage only if they are proportionate to a legitimate military advantage. The principle of discrimination states that in target selection a distinction must be made between combatants and noncombatants and between civilian objects (e.g., hospitals and churches) and military objectives. Soldiers who are injured or have surrendered should cease to be targets. These two values have been interpreted and materialized in, for instance, international humanitarian law and in international treaties banning, regulating, or limiting the possession and use of particular forms of weaponry. The principle of discrimination forbids the intentional killing of “the innocent,” and the underlying idea is that civilians should not be made to suffer in war; overall, this is a rights-based principle. The principle of proportionality is a consequentialist one and requires that enemy combatants should not be subjected to unnecessary suffering and superfluous injury, that it is unjust to inflict greater harm than is unavoidable in order to achieve legitimate military objectives, and that a mission is permitted only if the expected military gain outweighs the expected number of unintended civilian casualties.

Since the end of the Cold War, Western military forces have frequently become involved in missions to stabilize conflicts around the world. In those conflicts, the military forces increasingly found themselves operating among the people, often in built-up areas, with opposing militant forces exploiting this environment as cover. Former General Sir Rupert Smith introduced a new paradigm holding that contemporary military forces were now facing “the war among the people” (Smith 2006), or asymmetric warfare. In this complex environment, with blurring lines of distinction between combatants and noncombatants, the casualty aversion norm for own military personnel was soon extended to include the protection of the lives of the civilian population in conflict areas as well. Some scholars predicted and claimed that from now on warfare would be “humane” and, ultimately, “bloodless” (Coker 2001; Toffler and Toffler 1994). A new value was born, sharply contrasting with the armed forces’ core business of killing and destroying: the safeguarding of citizens during armed operations.

The emerging need in military interventions to prevent innocent casualties among the local population translated into a range of value-driven military technological developments, such as military robots and nonlethal weapons (NLWs). Whereas proponents of the concept voiced high expectations of this new category of military capabilities, empirical analysis of both military robot and NLW deployments in recent operations reveals that the operational effect incorporating the intended value is flawed and, in some cases, even reversed. Unlike in the cooperative and socially benign civil domain, where interests and values are the subject of constructive dialogue, in the military conflict domain different actors have, almost by definition, sharply different and competing interests. These actors are noncooperative and even hostile toward the other side’s operational objectives and focus on de-optimizing the opponent’s capabilities, including those incorporating self-imposed, value-based military effect characteristics. Insurgents, for instance, attempt to create conditions such that the value-based purpose of preventing innocent civilian casualties is denied, by bringing innocent civilians close to or inside a legitimate military target without the operator of an armed robot, or the robot itself, being able to detect this. Similarly, opponents of security forces that apply NLWs against them may use countermeasures to neutralize the NLW effect, which in turn may bring security forces to use the NLWs beyond their safety margins, thus risking civilian casualties. Hence, in the military domain, the functional intent of value sensitive design (VSD) of NLWs and robots is undermined by noncooperativeness and counteraction. The implication of such VSD denial is a loss of credibility for the weapon user presenting itself as protector of the innocent population. These mechanisms that preempt VSD reflect the essence of military conflict, which is an armed clash of interests.

This chapter will point out that the societal and political implications of VSD in the military domain are governed by a fundamentally different scheme than is the case in the civil domain. The practical cases examined here illustrate how values incorporated in military concept and system designs are exposed to counteraction and annihilation when deployed in real-world operational missions. In section “Nonlethal Technologies and Weapon Concepts” we discuss nonlethal weapons, and in section “Military Robots” we discuss military robots. We end with some conclusions.

Nonlethal Technologies and Weapon Concepts

Although the notion of the nonlethal weapon (NLW) was already coined in the first half of the twentieth century, in the military domain it underwent a rebirth in the early 1990s.2

NLWs are designed and deployed with the purpose of changing and correcting human behavior, in order to achieve people’s compliance with orders or directions, without causing (innocent) casualties or serious and permanent harm to people.

Several definitions of NLWs exist. A broadly accepted one comes from NATO, which defines NLWs as:

Weapons which are explicitly designed and developed to incapacitate or repel personnel, with a low probability of fatality or permanent injury, or to disable equipment, with minimal undesired damage or impact on the environment.3

Description of Nonlethal Weapons

Several dozen types of NLWs are in use or under development, designed for use either against people or against material and infrastructure (Fig. 1). The technologies and associated types of physiological effects of NLWs are wide ranging. They are arranged into four categories, namely, kinetic energy concepts, electromagnetic effectors, acoustic energy concepts, and chemical and biological effectors. Various types from all four categories have already been in use for many years, occasionally even for decades, with police and law enforcement organizations being the forerunners in fielding NLWs before military organizations did. Long-standing NLWs are:
Fig. 1

Taxonomy and examples of NLWs

  • Kinetic energy NLWs such as the baton round, a cylindrically shaped PVC projectile designed to be fired against an individual to cause pain and blunt trauma, or the bean bag, a small sack filled with pellets, launched as a small-caliber projectile to cause a similar effect when striking the human body

  • Chemical NLWs such as tear gas (CS) that has an irritating effect on the eyes, skin, and airways

  • Electromagnetic energy NLWs like the Taser that causes a muscular incapacitation effect by electrical current

  • Acoustic NLWs such as the fighter aircraft afterburner, which can be used as an acoustic weapon, and the acoustic hailing device, consisting of an array of loudspeakers that produces a focused high-energy noise beam or can be used for messaging.

In addition, a considerable number of NLWs are under development, in the engineering or testing phase, or being redesigned. The Active Denial System, laser warning devices, and the extended-effect flash/bang grenade are a few examples. This chapter addresses antipersonnel NLWs only.

NLWs intended for use against people are tailor-made to inflict pain or other forms of physical discomfort. The intensity of the unpleasant sensation should pass a certain minimal threshold to accomplish a particular behavioral effect, which at the same time should remain below a certain maximum for safety reasons, to ensure that the physiological effect is nonlethal. While these requirements are incorporated in the technological design of the NLW, they also rely on the methods, procedures, and tactical guidelines for military personnel on when and how to operate an NLW. Each NLW can be characterized by a certain technological and operational design “window” of permissible physiological effect, defined at each end by values: one value is a controlled physiological impact to enforce compliance by targeted individuals, and the other value is the prevention of inflicting serious harm or fatality.
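
To make the notion of a design “window” concrete, the following minimal sketch (not from the source) checks a projected kinetic impact against two hypothetical thresholds: a lower bound for the compliance effect and an upper safety bound. The threshold values and function names are assumptions for illustration only.

```python
# Illustrative sketch (not from the source): a "design window" check for a
# kinetic NLW, using hypothetical impact-energy thresholds. The lower bound
# represents the minimum effect needed to enforce compliance, the upper bound
# the safety limit beyond which serious harm or fatality becomes likely.

MIN_EFFECT_J = 30.0   # hypothetical minimum impact energy for a compliance effect
MAX_SAFE_J = 120.0    # hypothetical maximum impact energy before serious harm

def within_design_window(impact_energy_joules: float) -> str:
    """Classify a projected impact against the permissible-effect window."""
    if impact_energy_joules < MIN_EFFECT_J:
        return "below window: likely ineffective (no compliance effect)"
    if impact_energy_joules > MAX_SAFE_J:
        return "above window: risk of serious harm or fatality"
    return "within window: intended nonlethal effect"

if __name__ == "__main__":
    for energy in (20.0, 80.0, 150.0):
        print(f"{energy:>6.1f} J -> {within_design_window(energy)}")
```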

Ever since their inception, NLWs have been the subject of intensive debate. To some analysts and commentators, their emergence raised high expectations of reducing the number of civilian casualties in military missions, symbolized by some proponents portraying NLWs as “weapons of mass protection” (Morris 1992). Such optimism coincides with the responsibility felt in Western states to humanize war and to comply with the associated imperative of casualty aversion that is amplified by media presence in conflict zones (Coker 2001, p. 18). Others, such as McNab and Scott (2009), add that in an increasingly complex operational environment, NLWs may reduce the level of violence that (US) forces both incur and inflict in asymmetric warfare. Innovative NLW concepts have also been claimed to be promising for military tasks, due to their potentially broad applicability and to the sheer novelty of the technologies applied (Gompert et al. 2009, pp. 95–110).

Such claims are disputed by skeptics, who stress the unreliability of NLWs on the basis of accounts of incidents in which the application of NLWs led to severe harm or even fatal injury to individuals. In most of these cases, the NLWs were used by the police. Such opposing views are reinforced by reports from human rights organizations stressing the excessive use, or abuse, of such devices by law enforcement agencies (Amnesty International 2004). In addition, the use of NLWs against civilians by military forces abroad has been disputed on moral grounds, as it would potentially violate the principle of noncombatant immunity, which considers noncombatants to be complete “outsiders” in armed conflict who should therefore not be harmed (Mayer 2007).

A key question underlying this debate is whether existing and novel NLWs meet their promises under real-world conditions. Where NLWs are claimed to help manage violence in the complexity of today’s operational environment, in reality this complexity may also backfire against NLW performance.

Experiences with NLWs in real-world operations reveal that in many events dynamics are at work that tend to put the design window of permissible action under pressure. The reason for this is that target individuals may decide to develop countermeasures to reduce or neutralize the NLW’s physiological effect. In addition, the users, security forces tasked with controlling a riot or public disorder, may be inclined to use the NLWs at their disposal excessively and beyond the prescribed mode of permissible employment, in an effort to achieve the desired effect of compliance by the targeted individuals.

Degradation of NLW performance, induced by the dynamics at play during a real event, has the potential to transform what was envisioned in the NLW design as a dual positive and benign value into an instrument of abuse and repression. The underlying design premise of de-escalating violent confrontations proves illusory in the presence of overriding factors in the operational context that negate the NLW’s original rationale and value.

Hereafter, two NLW system concepts will be investigated more closely, in order to assess their reliability in meeting the promise in terms of operational effectiveness, own force protection, and nonlethality as intrinsic design values. Challenges will be discussed that degrade the intended value sensitivity of NLW design. The first system concept is a kinetic energy projectile, the baton round (BR), a classical NLW in use with police and military forces worldwide for several decades. The second is a millimeter-wave electromagnetic energy weapon concept, the Active Denial System (ADS), a US-developed concept currently in the prototype testing and evaluation phase.

Two Examples of Main NLW Technologies

Innovating Classical Nonlethal Technologies: The Baton Round

BRs, also called “plastic bullets,” are blunt impact weapons launched against individuals. BRs are cylindrically shaped, have diameters between 30 and 40 mm, are between 10 and 15 cm long, and have a rounded impact face. The purpose of the BR is to induce pain, irritation, and minimal injury, in order to dissuade or prevent a violent or potentially violent person from pursuing the intended course of action. The physiological effect depends on the area where the projectile strikes the human body (Vilke and Chan 2007, p. 342). The intended effect on the target individual resembles the punch of a boxer. Ideally, the BR strikes the abdomen, while hits on the extremities, in particular the legs, are also effective. The delivery system for BRs is usually a handheld baton gun.

The projectile’s velocity and ballistic stability are key factors for aiming accuracy. Launching velocities are around 80 m/s. Ballistic stability can be enhanced with spin stabilization of the projectile. Required accuracy on the target is usually defined as the probability that the projectile strikes in an area 20 cm wide and 60 cm high. In 2004 a report of a UK program for an improved BR set this probability at 85 % for a minimum distance of 25 m and desirably up to 40 m. The desired accuracy on a target should be 20 cm wide and only 40 cm high, with the aiming center on the abdominal part of the body (UK Steering Group 2004, pp. 11–18).
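
As a rough illustration of what such a requirement implies for projectile dispersion, the sketch below (not from the source) assumes a circular Gaussian dispersion centered on the aim point and solves for the largest standard deviation that still meets an 85 % hit probability over the 20 cm × 60 cm zone. The Gaussian model and the resulting figure are assumptions for illustration, not values from the UK program.

```python
# Illustrative sketch (not from the source): translating the 85 % hit-probability
# requirement over a 20 cm x 60 cm target zone into a dispersion budget, assuming
# a zero-mean circular Gaussian dispersion centred on the aim point.
from math import erf, sqrt

def hit_probability(sigma_m: float, width_m: float = 0.20, height_m: float = 0.60) -> float:
    """P(hit) for a Gaussian dispersion with standard deviation sigma in each axis."""
    p_x = erf(width_m / 2 / (sigma_m * sqrt(2)))
    p_y = erf(height_m / 2 / (sigma_m * sqrt(2)))
    return p_x * p_y

def sigma_for_requirement(p_required: float = 0.85) -> float:
    """Largest dispersion (found by bisection) that still meets the required hit probability."""
    lo, hi = 0.001, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if hit_probability(mid) >= p_required else (lo, mid)
    return lo

if __name__ == "__main__":
    sigma = sigma_for_requirement()
    print(f"max dispersion ~{sigma * 100:.1f} cm standard deviation for an 85 % hit probability")
```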

BRs have been deployed by many law enforcement and military forces around the world. In Northern Ireland, during the Troubles lasting from 1969 until 1998, more than a hundred thousand BRs were fired by the British Army and the Royal Ulster Constabulary. During these three decades and thereafter, several technological innovations were implemented to improve the performance and reliability of the BR. The round was introduced in Northern Ireland in the mid-1970s. Until then rubber bullets were deployed, with lower ballistic accuracy standards. A medical report on injuries caused by rubber bullets states that, because the round was fired from a tear gas (CS) canister launcher, tumbled in flight, and had a poor aerodynamic shape, it was difficult to hit a target 2 m in diameter at 18 m (Millar et al. 1975, p. 480). Early versions of the BR deployed in Northern Ireland also had relatively low performance standards, which were gradually improved through successive innovative designs (Burrows 2002, pp. 105–107).

The BR’s potential for raising its performance faces difficult challenges. Effectiveness at ranges above 50 m is poor. Kinetic energy drops significantly at longer ranges: at 25 m it is about 75 % of the level at a range of 10 m. At longer engagement ranges, the flight trajectory of the round is more curved, reducing aiming accuracy (Arnesen and Rahimi 2007). Shorter firing ranges enhance accuracy but deliver a heavier impact on the target, thereby increasing the potential for injury.
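
The following sketch (not from the source) fits a simple exponential decay model to the single ratio given above (about 75 % at 25 m relative to 10 m) and extrapolates it, purely for illustration, to show why effectiveness beyond 50 m falls away. Both the model form and the extrapolation are assumptions, not measured BR data.

```python
# Illustrative sketch (not from the source): an exponential decay model for the
# BR's kinetic energy with range, calibrated only to the ratio quoted in the text.
from math import exp, log

RATIO_25M_VS_10M = 0.75                              # from the text
DECAY_PER_M = -log(RATIO_25M_VS_10M) / (25 - 10)     # fitted decay constant

def relative_energy(range_m: float, reference_m: float = 10.0) -> float:
    """Kinetic energy at range_m relative to the level at reference_m."""
    return exp(-DECAY_PER_M * (range_m - reference_m))

if __name__ == "__main__":
    for r in (10, 25, 40, 50):
        print(f"{r:>3} m: ~{relative_energy(r) * 100:.0f} % of the 10 m energy level")
```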

Efforts to further improve baton rounds are ongoing. In the UK, for instance, a new projectile has been developed that should be safer and more reliable than its predecessors. This projectile, the Attenuating Energy Projectile (AEP), was introduced in the UK in 2005, to replace a more hazardous predecessor (UK Steering Group 2006, pp. 15–20).

BRs are generally employed to serve two values: to provide security forces with a capability to outrange missile-throwing rioters and other violent actors, thus enabling the use of armed force without having to resort to lethal fire for self-defense, and at the same time to force rioting individuals to stop their violent behavior.

Over time, seasoned rioters managed to develop countermeasures to negate the BR effect, such as makeshift body protection and evasive tactics. Security forces, facing the declining effectiveness of the NLW, were in many situations tempted to use the weapon beyond its safety margins to achieve an effect, putting targets at risk of serious harm and thus compromising the design value of nonlethality. Contextual issues, interfering with the attitude and behavior of the security personnel operating the BRs, have led to impermissible use and reckless abuse of the NLW. As a result, in Northern Ireland, inappropriate use contributed significantly to the deaths of 17 civilians from BRs and to permanent harm to many hundreds of victims (Weir 1983, p. 83).

Any conceivable effort to cope with countermeasures against BRs and to maintain the window of permissible use of the BR most likely requires the introduction of smartness into the technological design of the BR. Smart BRs should be capable of autonomously identifying the shape of the engaged target, fine-tuning the kinetic impact energy, and “deciding” where precisely to strike the body. The feasibility of such value-restoring innovation hinges on the affordability of the relatively large numbers of BRs usually required.

Introducing a Novel Nonlethal Technology: The Active Denial System

The millimeter-wave (MMW) directed energy technology called the Active Denial System (ADS) is a unique NLW concept developed in the USA. The weapon effect can attain ranges of many hundreds of meters to engage human targets. Its MMW beam is invisible and leaves no traces when properly employed. At the same time, the effect mechanism is entirely new, and, unlike with most “first-generation” NLWs, there is no precedent available with law enforcement agencies.

The ADS delivers a totally different type of effect. There are no empirical data available to which a military planner, commander, or operator can refer, other than the many tests and experiments that have been conducted to map human bio effects (Murphy et al. 2003).

The development of the MMW technology for the ADS started as early as the beginning of the 1990s. It is based on 94 GHz radiation emitter technology, and the radiation beam interacts with the human body such that it penetrates the skin to a depth of less than 1/64th of an inch, or less than half a millimeter. The beam shape can be adjusted to engage between one and four target individuals at a time. A targeted person will experience the effect as a sharp burning pain on the skin, but no actual burning, which the person immediately wants to escape by jumping aside or running away. This pain effect is universal, as it is independent of the size or physical condition of the target individual.

The system concept’s particular value is the exceptionally long range at which it can deliver its effect. Much more than with other NLWs, the user’s own troops can stay out of harm’s way. The second value is that it forces target persons to comply with the security forces’ directions, due to the intolerable pain it causes. Hence, if the system is capable of delivering radiation energy intensities with sufficient accuracy at the target over hundreds of meters, it offers a promising perspective of serving both the envisioned value of the force’s self-protection and a reliable nonlethal impact on target individuals.

In reality, circumstances may be such that effect control is only marginally achievable. One important limiting factor is that the radiation energy is strongly attenuated by water; even under high ambient humidity the transmitted energy will drop substantially. Rain, in particular heavy rain, will strongly reduce the radiation energy level arriving at the target. This degradation becomes more significant with increasing range. Similarly, wet clothes (whether deliberately wetted or soaked by rain) will reduce the effect felt on the body. The complicating factor is the high uncertainty about what the residual energy arriving at the skin will actually be. This may vary by orders of magnitude, depending on the specific attenuation effect in the specific situation. The fact that this cannot be measured while the ADS is in action is problematic, and rough estimates or trial and error are not acceptable options: this would resemble Russian roulette.
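
To illustrate why the delivered energy can vary by orders of magnitude, the sketch below (not from the source) applies assumed, rough specific-attenuation figures for 94 GHz propagation under three weather conditions. The dB/km values are illustrative assumptions, not measured ADS data; the point is only how quickly range and weather interact.

```python
# Illustrative sketch (not from the source): how specific attenuation (dB/km)
# translates into the fraction of 94 GHz beam energy arriving at the target.
# The attenuation values below are rough, assumed figures for illustration only.
ASSUMED_ATTENUATION_DB_PER_KM = {
    "dry, clear air": 0.4,
    "very humid air": 3.0,
    "heavy rain": 15.0,
}

def fraction_delivered(atten_db_per_km: float, range_m: float) -> float:
    """One-way transmission factor over range_m for a given specific attenuation."""
    loss_db = atten_db_per_km * (range_m / 1000.0)
    return 10 ** (-loss_db / 10.0)

if __name__ == "__main__":
    for condition, atten in ASSUMED_ATTENUATION_DB_PER_KM.items():
        at_250 = fraction_delivered(atten, 250)
        at_750 = fraction_delivered(atten, 750)
        print(f"{condition:>15}: {at_250:5.1%} at 250 m, {at_750:5.1%} at 750 m")
```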

The sheer novelty of the design leaves questions unanswered, such as “how does it work?” and “does it work as it should?” Today, most military personnel have only been trained in the use of kinetic force and in when and how to use it. They can hardly grasp the practical utility of the ADS in the face of the many uncertainties surrounding such a revolutionary concept. If the system does not provide automatic beam energy modulation, will the operator be susceptible to human error when having to tune the MMW transmitter manually under conditions of incomplete information?

Central Moral Values and Value Issues of NLWs

Both the BR and ADS discussions demonstrate that nonlethality, as the defining value of the technology and design of both NLWs, is challenged when the systems are deployed in the real-world operational events they are intended for. Experiences with and assessments of the two NLWs have identified a range of factors and phenomena that, either naturally or human driven, narrow down the weapons’ design window of nonlethal performance and effect. Many of those factors pertain to the so-called fog of war,4 implying that much of what military forces encounter in real-world operations is unforeseeable, which renders their capability of controlling the scenario illusory and tends to counteract their intended approach and the accomplishment of their mission (Orbons and Royakkers 2014).

This section focuses on key value issues related to NLW design and the military implementation of NLWs, and on claims about the values of NLWs with respect to their political purpose.

Performance of NLWs in a Military Context

A range of conditions shapes a noncooperative environment for the user of NLWs, denying the efficacy of projecting the value of nonlethality that is embedded in NLW designs. In essence, in military operations a conflict of values is at work, which in many scenarios puts the military forces’ responsibility for self-protection at odds with the requirement imposed on the forces to prevent innocent casualties. Hence, the extent to which the value of nonlethality embedded in NLWs is brought to bear lies to a large extent in the hands of the military operator, rather than being an attribute of the NLW system. In addition, “smart” targets have ways to overcome the intended effect of NLWs, as has been pointed out by Allgood (2009) and Hussey and Berry (2008) in the case of riots in a detainee camp under the command of US forces.

While degenerate use of NLWs may result from mechanisms emerging from stress among the military forces applying them, situations have also occurred in which NLWs were intentionally used in an irregular or degenerate manner. Such incidents took place in Northern Ireland, where civilians were killed by BRs that were aimed at vulnerable parts of the body or fired from very short distances (Pat Finucane Centre 1996). Nonpermissive uses of NLWs, in particular kinetic NLWs, have been found in detention centers in Iraq as well. It is difficult to determine to what extent user forces apply NLWs irregularly with intent. The operational context shapes a gray zone in which the imperative of self-defense can hardly be distinguished from unnecessary and excessive harm. NLWs have been used as instruments of punishment and retribution for negative outcomes of previous events or confrontations for the user forces, as the Northern Ireland case demonstrates. A similar observation was made in 2011, in the aftermath of the Arab Spring in Egypt.5 Lack of discipline and training, and insufficiently restrictive instructions and Rules of Engagement for user forces, can contribute to the probability of such wrong uses occurring. It is an inherent problem and risk of many types of NLWs and technologies that they carry the potential for harmful and even lethal use when specified safety margins are ignored.

Can innovative concept solutions in value design overcome the shortfalls of current NLWs? One important mechanism underlying the “us-or-them” dilemma in asymmetric conflicts is the closing-in of the military forces with the civilian population: Rupert Smith’s “war among the people” in its truest sense. Obviously, a design that enables a larger standoff between the force employing NLWs and the civilian population would serve to diminish the us-or-them dilemma. The ADS is the concept that stands out as the champion technology designed to support that aim. As we have seen, however, scenarios and circumstances are conceivable that annihilate the accomplishment of the value-based effect.

Political Level Significance of NLWs

At the political level, mission accomplishment strategies call for instruments at the tactical level that are compatible with the spirit and objectives of the military mission. This means a balanced and measured application of armed force. In situations where civilians are involved, compliance should be accomplished without causing harm. NLWs are considered and assigned as appropriate instruments for that task: they are expected to enable humane military operations and performance, in support of the hearts and minds strategy. The implementation and purpose of NLWs are publicly announced: intentions and expectations are declared explicitly.

As pointed out by Orbons (2012), in real-world situations, NLW deployment is fraught with problems related to the operational context; consequently the level of control over NLW effects is much less than what is militarily and politically desired. Most soldiers are far from perfect in dealing with the dynamics and uncertainty on the ground. Moreover, the political rationale for NLW deployment is counteracted and undermined by opponents who force the military user into the lethal part of the spectrum of violence, thus annihilating the nonlethal intent.

But progress in hearts and minds efforts at the tactical level may, conversely and ironically, under certain circumstances also be affected by trends and events at the politico-strategic level. If these trends have the effect of antagonizing user forces and target populations, the ensuing operational context will frustrate the outcome of NLW deployment as politically intended and expected. Hence, the political-level rationale of NLW deployment becomes annihilated at the tactical level if particular developments at the political level meet disapproval and trigger agitation on the ground. Obviously, with regard to the nonlethality incentive, a dialectic is at work between the political and tactical levels. This dialectic is fueled when operational context mechanisms such as friction and confusion produce fatal errors, and it is further amplified by the media connectivity between the tactical and political levels. The tragedy is that the media are inclined to report only the mishaps (innocent casualties, despite or even caused by NLWs), while refraining from spreading good news about NLWs performing “normally” as expected and announced: good news is “bad” news.

Hence, if NLWs perform badly, their deployment backfires at the political level. If, however, the tactical/political dialectic link is weakened or cut through flaws in reporting along the chain of command or in public information, chances of optimal NLW use increase. The flip side of this condition is the growing risk of abuse due to the tactical isolation of physical engagements, as accountability mechanisms would be dysfunctional and only have a delayed political impact at best. In the latter case, abuse will surface sooner or later and will give NLWs a bad reputation after all.

In coping with the dialectic, which in essence is described by Rupert Smith’s (2006) “war amongst the people” paradigm, some planners and developers search for nonlethal technological options to physically disengage the user from the target. The ADS, with its long-range and semi-area denial capability, is the ultimate material expression of this quest. However, the technology fix approach ignores that disengaging the user force from the target population is at odds with the hearts and minds approach. This reflects another dialectic, namely, that between (community) policing and military operations.

Military Robots

In the last two decades, we have entered the era of remote-controlled military technology: robot drones, mine detectors, and sensing devices are employed on the battlefield but are controlled at a safe distance by humans. The aim is to decrease the number of soldiers killed on the battlefield, to gain more efficiency and tactical and operational superiority, and to reduce emotional and traumatic stress among soldiers (Veruggio and Operto 2008).

All over the world military robots are currently being developed, and thousands of military robots have already been deployed during military operations. According to Peter Singer, this development forms the new “revolution in military affairs” (Singer 2009). The US Future Combat Systems program, a more than US $200 billion program for future weapons and communications systems commissioned by the Pentagon, has had a major impact; military robots are a focal point in this program. Besides a technology push, there is also a demand pull for the development of military robots. The call of US society to reduce the number of military casualties has contributed to a huge boost of alternatives in robotics developments in the USA.6 A few years ago, the number of US soldiers killed in action rose to a high level because insurgents operating in Iraq and Afghanistan used their popular homemade and deadly weapon: the improvised explosive device (IED), or roadside bomb. About 40 % of the American soldiers killed died because of these IEDs (Iraq Coalition Casualty Count 2008). During the invasion of Iraq in 2003, no use was made of robots, as conventional weapons were thought to yield enough “shock and awe.” However, the deaths of thousands of American soldiers and Iraqi civilians reduced popular support for the invasion and made the deployment of military robots desirable. By the end of 2008, there were 12,000 ground robots operating in Iraq, mostly used to defuse roadside bombs, and 7,000 reconnaissance planes or drones were deployed (Singer 2009).

Description of Military Robots

We define military robots as reusable unmanned systems for military purposes with any level of autonomy. This includes both unmanned systems that are self-propelled, i.e., mobile robots, and static systems that perform tasks, i.e., immobile robots. An example of an immobile robot is the operational Goalkeeper on board Dutch frigates. This is a computerized air defense system with infrared detection of enemy missiles, which autonomously detects approaching missiles, calculates the path they follow, and then aims its weapon, a rapid-fire gun, in order to neutralize the approaching danger.
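
The autonomous detect-track-engage cycle described above can be sketched as a simple sense-decide-act loop. The sketch below (not from the source) is a hypothetical illustration of that structure; the class names, the stubbed tracks, and the linear path prediction are assumptions and do not represent the actual Goalkeeper software.

```python
# Illustrative sketch (not from the source): the autonomous detect-track-engage
# cycle described for an immobile defensive robot such as the Goalkeeper.
# All names and data here are hypothetical; this is not the actual system.
from dataclasses import dataclass
from typing import Iterable, List, Tuple

Vector = Tuple[float, float, float]

@dataclass
class Track:
    position: Vector   # detected contact position in metres (own ship at origin)
    velocity: Vector   # estimated velocity in m/s

    def is_closing(self) -> bool:
        # the contact approaches if its velocity points back toward the origin
        return sum(p * v for p, v in zip(self.position, self.velocity)) < 0

def predict_intercept(track: Track, lead_time_s: float = 1.0) -> Vector:
    """Linear extrapolation of the contact's path (illustration only)."""
    return tuple(p + v * lead_time_s for p, v in zip(track.position, track.velocity))

def engagement_cycle(tracks: Iterable[Track]) -> List[Vector]:
    """Sense -> decide -> act: return aim points for every closing contact."""
    aim_points = []
    for track in tracks:                                  # sense: all current contacts
        if track.is_closing():                            # decide: engage only approaching threats
            aim_points.append(predict_intercept(track))   # act: aim the rapid-fire gun here
    return aim_points

if __name__ == "__main__":
    incoming = Track(position=(4000.0, 0.0, 50.0), velocity=(-600.0, 0.0, -2.0))
    outgoing = Track(position=(4000.0, 0.0, 50.0), velocity=(600.0, 0.0, 2.0))
    print(engagement_cycle([incoming, outgoing]))  # only the incoming track is engaged
```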

Military mobile robots are commonly divided into ground vehicles, water surface and underwater vehicles, and aerial vehicles. The most famous unmanned ground vehicle, developed by Foster-Miller, is SWORDS (Special Weapons Observation Reconnaissance Detection System – see Fig. 4). It originated from TALON, a robot equipped with cameras, a gripper arm, communication and distraction devices, and various sensors – a device especially designed for unmanned reconnaissance and for clearing roadside bombs. SWORDS is a remotely controlled, tele-operated armed robotic system equipped with machine guns. After some years of research, a number of SWORDS units have been deployed since 2007 on patrols in Iraq. These SWORDS units mainly perform reconnaissance missions, street patrols, and other missions with an increased risk. The successor of SWORDS already exists in the MAARS (Modular Advanced Armed Robotic System). This robot can be equipped with heavier machine guns, has a larger payload, and has nearly twice the speed at about 11 km/h. The entire system weighs about 160 lb.

Unmanned submarines equipped with torpedoes are currently being developed. Existing unmanned mini-submarines can autonomously explore the seabed with sensitive listening devices, detect ships and mines, and destroy those mines with an explosive charge. Unmanned vessels such as the 9-m Protector or the nearly 2-m-long Silver Marlin are equipped with sensors, a satellite connection, and light armament and can take over patrol tasks from small warships.

An example of unmanned aerial vehicles is the micro air vehicle, a small unmanned reconnaissance aircraft. These are remote-controlled propeller planes as small as a model airplane, with a weight of about 20 g to a few hundred grams, equipped with powerful regular or infrared cameras for autonomous observation tasks. The camera images are so sharp that persons placing parcel bombs or roadside bombs can be detected and monitored, alerting local forces to act. These aircraft can also search for targets and communicate their position for conventional bombing. At the end of 2001, the US deployed about 10 unmanned reconnaissance aircraft in Afghanistan, but by 2008 these numbers had grown to more than 7,000 (Singer 2009). Besides these small aircraft, there is the reconnaissance Global Hawk, with a wingspan of nearly 40 m. This unit can eavesdrop on mobile phone calls, even if they are encrypted, and can provide real-time images from an altitude of several kilometers, spotting a car on the road – not quite making out the license plate, but surely the car type and how many people are moving around it.

Presently, more than 20,000 military robots are active in the US military. Most of these robots are unarmed and are mainly used for clearing improvised explosive devices and for reconnaissance; however, over the last few years the deployment of armed military robots has been on the increase. In this chapter we will focus on unmanned combat aerial vehicles.

Unmanned Combat Aerial Vehicles (UCAVs)

One of the most widely used unmanned combat aerial vehicles (UCAVs) is the Predator. This unmanned airplane, which can remain airborne for 24 h, is currently employed extensively in Afghanistan. The Predator drones can fire Hellfire missiles and are flown by pilots located at a military base in the Nevada Desert, thousands of miles away from the battlefield. On top of this, its successor, the Reaper, which may phase out the F-16, was already spotted in Afghanistan in 2008. This machine, with a wingspan of 20 m, can carry 5,000 lb of explosive devices, Hellfire missiles, or laser-guided bombs and uses day-and-night cameras to navigate through a sheet of clouds. This unmanned combat aerial vehicle is operated by two pilots located at a ground control station, behind a computer at a safe distance from the war zone.

As if the tactical advantages brought by this technology were not enough, we now face the prospect of genuinely autonomous robot vehicles, those that involve “artificial intelligence” and hence do not need human operators. This shift is also stimulated by the National Research Council (2005): “The Navy and Marine Corps should aggressively exploit the considerable warfighting benefits offered by autonomous vehicles.” The United States Air Force (2009), for example, expects the deployment of autonomous UCAVs with “a fully autonomous capability” between 2025 and 2047. Though it is unclear what degree of autonomy these UCAVs will have, “the eventual deployment of systems with ever increasing autonomy is inevitable” (Arkin 2009). The deployment of genuinely autonomous armed robots in battle, capable of making independent decisions as to the application of lethal force without human control, and often without any direct human oversight at all, would constitute not only a genuine military revolution but also a moral one (Kaag and Kaufman 2009).

Given the distinction between, on the one hand, UCAVs today, in which – to differing degrees – human operators remain in the loop, and, on the other, the future of military robotics which promises autonomous UCAVs capable of ethical decision-making, we will try to separate our analysis along these lines.

Tele-operated UCAVs

In the relevant literature the role of the human operator is often underplayed. The importance of having an element of human control incorporated in the design of UCAVs has often been stressed, for example, by the Pentagon or the British Ministry of Defence (Krishnan 2009). From a legal and ethical perspective, the value of keeping the “man-in-the-loop” is important because it is indispensable for the attribution of responsibility (cf. Singer 2009). It is not without reason that the “International Law of Armed Conflict dictates that unmanned systems cannot fire their weapons without a human operator in the loop” (Isenberg 2007). Yet, while it is certainly true that currently humans are kept “in-the-loop,” it is not certain, or even likely, that this will remain so. The logic that brought unmanned systems into being leads more or less naturally to the wish to take the human out of the system altogether (Sparrow 2011, p. 121; see also Sullins 2010), and it seems almost a given that the future will hold autonomous and even learning robots.7 We will turn to these autonomous and learning robots in the next subsection.

The tele-operated UCAVs connect the human operators with the war zone; they are the eyes of the tele-soldier. These semiautonomous UCAVs (they can navigate to their goal automatically, but the decision to fire is made by a human operator), like the Predator and the Reaper, send GPS coordinates and camera images back to the operator. Based on the information projected on his computer screen, the interface, the human operator has to decide, for example, whether or not to launch a missile. With regard to the user interface design, display characteristics, interaction mechanisms, and control limitations all have a potentially huge impact on the situational awareness and decision-making of the human operator.8

There is a growing concern about and interest in the ethical design of weapon systems interfaces and lethal tele-operated systems (see, e.g., Asaro 2009; Cummings 2006). In the future, the operator’s decision might be mediated by a computer-aided diagnosis of the war situation (see also Sullins 2010, p. 268), and military robots may even have ethical constraints built into their design – a so-called ethical governor, which suppresses unethical lethal behavior. For example, Arkin (2009) has done research (sponsored by the US Army) to create a mathematical decision mechanism consisting of prohibitions and obligations derived directly from the laws of war. The idea is that future military robots might give a warning if orders, according to their ethical governor, are illegal or unethical. For example, a military robot might advise a human operator not to fire because its diagnosis of the camera images indicates that the operator is about to attack noncombatants; i.e., the software of the military robot that diagnoses the war situation provides the human operator with ethical advice to support values such as limiting civilian deaths and preventing war crimes or atrocities. The software must function reliably in complex and dynamic environments, and its ethics cannot simply be a list of rules or norms, as the situations that most often require ethical decision-making are exceptional cases where the standard rules or norms do not apply.
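
To give a concrete, if deliberately simplistic, impression of the kind of advisory check such an ethical governor might perform, the sketch below (not from the source) encodes a few prohibitions and a crude proportionality comparison. The rules, fields, and thresholds are hypothetical simplifications introduced here for illustration; they are not Arkin’s actual mechanism, and real-world discrimination and proportionality judgments cannot be reduced to such a list.

```python
# Illustrative sketch (not from the source): a minimal rule-based check of the kind
# an "ethical governor" might apply before a lethal engagement is released.
# The rules, fields, and thresholds are hypothetical simplifications.
from dataclasses import dataclass
from typing import List

@dataclass
class Engagement:
    target_is_combatant: bool          # discrimination: is the target a combatant?
    target_has_surrendered: bool       # hors de combat targets may not be attacked
    estimated_civilian_casualties: int # proportionality input
    expected_military_advantage: int   # proportionality input (abstract score)
    protected_site_within_blast: bool  # e.g., a hospital or place of worship

def governor_advice(e: Engagement) -> List[str]:
    """Return warnings; an empty list means no prohibition or warning was triggered."""
    warnings = []
    if not e.target_is_combatant:
        warnings.append("PROHIBITED: target not identified as a combatant")
    if e.target_has_surrendered:
        warnings.append("PROHIBITED: target is hors de combat")
    if e.protected_site_within_blast:
        warnings.append("WARNING: protected object within expected blast radius")
    if e.estimated_civilian_casualties > e.expected_military_advantage:
        warnings.append("WARNING: expected collateral damage appears disproportionate")
    return warnings

if __name__ == "__main__":
    proposal = Engagement(True, False, 3, 1, False)
    for line in governor_advice(proposal) or ["no prohibition triggered"]:
        print(line)
```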

Autonomous UCAVs

The ultimate goal of autonomous military robots, according to the United States Air Force (2009), is to create a military robot capable of making independent decisions as to the application of lethal force without human control – in other words, to strive for the man-out-of-the-loop. For these autonomous robots we need to make a distinction between non-learning machines and learning machines. Learning military robots, based on neural networks, genetic algorithms, and agent architectures, are able to decide on a course of action and to act without human intervention.9 The rules by which they act are not fixed during the production process, but can be changed during the operation of the robot, by the robot itself (Matthias 2004). The problem with these robots is that there will be a class of actions for which no one is capable of predicting the robots’ future behavior anymore. These robots would thus become a “black box” for difficult moral decisions, preventing any second-guessing of their decisions. Control then transfers to the robot itself. This constitutes a responsibility gap (Matthias 2004), since the value of responsibility cannot be built into a design for autonomous learning military robots. It would be an injustice to hold people responsible for actions of robots over which they could not have any control (see also Sparrow 2007).10

Learning machines for military purposes seem, at least under present and near-term conditions, far from feasible. However, the development of autonomous non-learning machines with an additional ethical dimension is a newly emerging field of machine ethics. These robots, based on syntactic manipulation of linguistic symbols with the help of formal logic, are “able to calculate the best action in ethical dilemmas using ethical principles” (Anderson and Anderson 2007). It is thus assumed that it is sufficient to represent ethical theory in terms of a logical theory and to deduce the consequences of that theory. This view, analogous to the reduction of ethics to law or of reflection to an algorithm, misunderstands the unique – nonreducible – nature of ethical reflection. Arkin (2009) argues that some ethical theories, such as virtue ethics, by definition do not lend themselves well to a model based on a strict ethical code. While military robotic specialists claim that the solution is simply to eliminate ethical approaches that refuse such reduction, we argue that this nonreducibility is the hallmark of ethics. While many ethical situations may be reducible, it is the ability to act ethically in situations that call for judgment that is distinctly human. Furthermore, a consequence of this approach is that ethical principles themselves will be modified to suit the needs of a technological imperative: “Technology perpetually threatens to coopt ethics. Efficient means tend to become ends in themselves by means of the ‘technological imperative’ in which it becomes perceived as morally permissible to use a tool merely because we have it” (Kaag and Kaufman 2009).

Central Moral Values and Value Issues of Tele-operated UCAVs

The use of UCAVs provides us with an ambivalent picture. On the one hand, the deployment of these robots has many positive effects. The most compelling arguments in favor of UCAVs are the decrease in financial costs; the reduction of the number of military casualties; the added value in performing dull, dangerous, and dirty tasks to solve operational problems; and the effective and efficient performance of tasks. According to Strawser (2010), in certain circumstances the use of armed military robots, for the reasons mentioned above, is not only ethically permissible but even ethically mandatory under what he calls the “principle of unnecessary risk.” Strawser argues that it is morally reprehensible to command a soldier to run the risk of fatal injury if the task could also have been carried out by a military robot. UCAVs, however, also raise all kinds of social and ethical questions that are important for a responsible use of these weapons. In this section we will discuss some value issues related to the decision-making process involving life and death by the human operators of tele-operated UCAVs and by autonomous UCAVs.

Reducing Psychological Stress

The human operators, who remotely control armed military robots by computer, can be emotionally and psychologically affected by the things they see on screen. Although fighting from behind a computer is not as emotionally potent as being on the battlefield, killing from a distance is still stressful; various studies have reported physical and emotional fatigue and increased tensions in the private lives of military personnel operating the Predators in Iraq and Afghanistan (Donnelly 2005; Kaplan 2006). For example, a drone pilot may witness war crimes yet find himself in a situation in which he is helpless to prevent them, or he may see how civilians are killed by his own actions. The latter is not an entirely hypothetical situation. This problem of “residual stress” among human operators has led to proposals to diminish these tensions. In particular, the visual interface can play an important role in reducing stress; interfaces that only show abstract and indirect images of the battlefield will probably cause less stress than the more advanced real images (Singer 2009). From a technical perspective this proposal is feasible, since it will not be hard to digitally recode the war scene in such a way that it induces less moral discomfort in the operator. From a moral point of view, this would mean that a soldier becomes detached even further, both physically and emotionally, from his actions than is presently the case (cf. Royakkers and Van Est 2010). This detachment reduces or even eliminates the stress of human operators, but also limits reflection on their decisions, leading to human operators not being fully aware of the consequences of their decisions. Instead, they are only focused on the outcome, for example, the targeting of blips on a screen, and it is to be feared that killing might get a bit easier (see also Singer 2009, pp. 395–396; Sparrow 2009, p. 179). The last observation brings us to the important role of dehumanization, i.e., seeing people as something less than human, in making unethical conduct more likely to occur (Bandura 1999). Thus, the value of “reducing the psychological stress on remote operators” through dehumanization may come to be at odds with the value of “preventing unethical conduct by remote operators.”

Almost 20 % of the soldiers returning from Iraq or Afghanistan have posttraumatic stress disorder or suffer from depression (cf. Tanielian and Jaycox 2008), causing a wave of suicides, particularly among American veterans who have fought in Afghanistan or Iraq. Since remotely controlled devices can reduce stress, they could also enable more humane decision-making by soldiers. It is well known that in the heat of battle the minds of soldiers can become clouded with fear, anger, or vengefulness, resulting in unethical behavior or even war crimes. A survey by the US Army Surgeon General’s Office (2006) confirmed this picture. Remote-controlled robotic warfare thus might have some fundamental advantages, as it distances soldiers from direct physical contact with some of the sources of this emotional stress. To further the goal of minimizing military casualties and stress-related casualties, Arkin has proposed equipping military robots with an artificial conscience that would suppress unethical lethal behavior by adding an ethical dimension to these robots. This ethical dimension consists of prescriptive ethical codes that can govern the robot’s actions in a manner consistent with the Laws of War and Rules of Engagement. Arkin (2009) stated that “they [robot soldiers] can perform more ethically than humans are capable of,” because they have no revenge motive.11 While Arkin’s statement may seem like science fiction to most, the fact is that the deployment of military robots or unmanned semiautonomous vehicles is growing rapidly.

Responsibility

A value issue related to what constitutes an ethical design is that the ethical governors proposed by Arkin may form a “moral buffer” between human operators and their actions, allowing them to tell themselves that the military robot took the decision. According to Cummings (2004, p. 30), “[t]hese moral buffers, in effect, allow people to ethically distance themselves from their actions and diminish a sense of accountability and responsibility.” A consequence is that humans then simply show the type of behavior that was desired by the designers of the technology instead of explicitly choosing to act this way, and thus over-rely on military robots (the “automation bias”). This can lead to dangerous situations, since the technology is imperfectly reliable and the human operator must intervene when some aspect of the technology fails (Wickens et al. 2010). The values of safety and of keeping the man-in-the-loop, with the related value of responsibility, are at stake here. According to several authors (e.g., Sparrow 2007; Fieser and Dowden 2007; Sharkey 2008, 2010; Asaro 2007), a fundamental condition of fighting a just war is that an individual person may be held responsible for civilian deaths in the course of it, and this condition is one of the requirements of jus in bello. Ethical governors might blur the line between nonautonomous and autonomous UCAVs, as the decision of a human operator is then not the result of deliberation but is mainly determined or even enforced by a military robot. In other words, human operators do not have sufficient freedom to make independent decisions, which makes the attribution of responsibility difficult. The moralizing of the military robot can deprive the human operator of control over the situation; his future role will be restricted to monitoring. The value of “keeping the man-in-the-loop” will then be eroded and replaced by “keeping the man-on-the-loop.” This can have consequences for the question of responsibility: Detert et al. (2008) have argued that people who believe that they have little personal control in certain situations – such as those who monitor, i.e., who are on-the-loop – are more likely to go along with rules, decisions, and situations even if these are unethical or have harmful effects. This would imply that it would be more difficult to hold a human operator reasonably responsible for his decisions, since it is not really the operator who takes the decisions, but a military robot (see Royakkers and Van Est 2010).

Moral Reflexivity and “Better” Ethical Decisions

Within a military context, reflexivity is essential for ethical decision-making for fundamental reasons. In order for moral judgments to be legitimate, they must be the result of a careful process of moral reflection. This entails that the determinations made by military robots, which are based on algorithms, are not forms of moral deliberation and reflection. While it is clear that military robots are capable of processing a greater amount of information at a much faster rate than human beings (which is the reason so many members of the military community are greatly in favor of drones), this ability is distinct from the ability to critically evaluate this information and to consider it when making difficult strategic decisions. Knowledge, the process of transforming information into understanding, is a skill that only human beings are capable of. Robots lack the ability to ask themselves questions about their own choices and actions, and about how their interactions affect their environment. As all military robots lack the ability to reflect, they lack the understanding necessary for making ethical decisions in complex and changing environments. Military robots’ inability to think, to reflect, or to understand their complex situational environment is demonstrated by the fact that they often miss important details or incorrectly interpret situations in a complex and dynamic military environment. Even the most excellent sensors can never compensate for a robot’s deficient understanding of its environment (Krishnan 2009). Humans are better at discriminating targets, because they understand what a target is and when and why to target something or somebody. The lesson learned is that designing and fielding an autonomous military robot in order to reduce mental discomfort and financial costs can be at odds with a careful process of moral reflection on a lethal decision. In our opinion, the value of moral reflexivity must trump the values of reducing mental discomfort and reducing financial costs when these come at the expense of moral reflexivity. The implementation of military robots must be preceded by careful reflection on the ethics of warfare: warfare must be regarded as a strictly human activity, for which human beings must remain responsible and in control, and ethical decision-making can never be transferred to machines, since machines are not capable of making ethical decisions.

Ethical decision-making is thus an approach that emphasizes the importance of a process of critical understanding. This differs greatly from approaches such as Arkin's, which examine ethics from a military robotics specialist's perspective and therefore in terms of information. Such approaches imply that applied ethics is essentially the application of theories to particular situations: "A machine (…) is able to calculate the best action in ethical dilemmas using ethical principles" (Anderson and Anderson 2007). This view is, however, also problematic for reasons other than those mentioned above, especially in the military context. One reason is that no moral theory is universally accepted. Different theories might yield different judgments about a particular case. But even if there were one accepted theory, framework, or set of principles, it is doubtful whether it could be straightforwardly applied to all particular cases. Theory development in ethics generally does not take place independently of particular cases. Rather, theory development is an attempt to systematize judgments over particular cases and to provide a rational justification for these judgments. So if we encounter a new case, we can of course try to apply the ethical theory we have developed so far to that case, but we should also be open to the possibility that the new case reveals a flaw in that theory.12 Furthermore, the laws of armed conflict, the rules of engagement, and the just war tradition, which provide the two general ethical values for lethal decision-making, discrimination and proportionality, are open to challenges and interpretations; these depend heavily upon awareness of particular situations and may not be effectively enforceable. The rules of engagement are devised by military lawyers to suit the needs of specific operations and missions, but they often appear ambiguous or vague to military personnel who observe situations that do not always fall neatly into the distinctions made by lawyers (Asaro 2009).
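The following toy sketch makes the point about theory dependence tangible: two crudely simplified encodings of ethical "principles" are applied to the same invented options and do not agree on a single best action. All names, weights, and numbers are our own illustrative assumptions, not a proposal for machine ethics.

```python
# Toy illustration of why "calculating the best action using ethical
# principles" is underdetermined: two simplified theory encodings treat
# the same options differently. All data are invented.

candidate_actions = [
    {"name": "strike_now", "expected_civilian_harm": 2,
     "targets_noncombatants_directly": False, "expected_military_advantage": 8},
    {"name": "wait_and_verify", "expected_civilian_harm": 0,
     "targets_noncombatants_directly": False, "expected_military_advantage": 3},
]


def crude_consequentialist_score(action):
    # Weigh outcomes only: advantage minus harm; the weight of 3 is arbitrary,
    # and a different weight can reverse the ranking.
    return action["expected_military_advantage"] - 3 * action["expected_civilian_harm"]


def crude_deontological_filter(actions):
    # Rule-based: exclude anything that directly targets noncombatants,
    # but remain silent about how to choose among what is left.
    return [a for a in actions if not a["targets_noncombatants_directly"]]


best_by_consequences = max(candidate_actions, key=crude_consequentialist_score)
permissible_by_rules = crude_deontological_filter(candidate_actions)

print(best_by_consequences["name"])                 # depends entirely on the chosen weight
print([a["name"] for a in permissible_by_rules])    # the rule alone does not pick one action
```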

However, machines can help humans to make better decisions. For example, the ethical governor introduced by Arkin can be a very useful tool if it does not remove the human from the loop. If it serves as a safety mechanism that warns human operators and helps prevent mistakes, requiring an explicit human decision rather than acting on its own, it might be designed in such a way that it does not lead to dehumanization or to a loss of moral reflexivity on the part of the human operator. The design of such an ethics-based systems interface should therefore be carefully investigated (cf. Hellström 2013).
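One way such an advisory-only arrangement might be structured is sketched below. This is our own schematic assumption, not Arkin's architecture: the governor can only review a proposed engagement and return warnings; it has no authority to authorize or execute anything itself, so the final judgment, and the responsibility, remain with the human operator.

```python
# Sketch of an "advisory" governor: it can only block-and-warn, never
# authorize or execute an engagement on its own. The constraint check is a
# placeholder, not an encoding of the laws of armed conflict.

from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class AdvisoryGovernor:
    constraints: List[Callable[[dict], Optional[str]]] = field(default_factory=list)

    def review(self, proposed_engagement: dict) -> List[str]:
        """Return a (possibly empty) list of warnings for the operator.
        The governor itself takes no action either way."""
        warnings = []
        for check in self.constraints:
            problem = check(proposed_engagement)
            if problem:
                warnings.append(problem)
        return warnings


def near_protected_site(engagement: dict) -> Optional[str]:
    # Illustrative constraint: flag aim points close to a marked protected site.
    if engagement.get("distance_to_protected_site_m", float("inf")) < 500:
        return "proposed aim point lies within the standoff distance of a marked protected site"
    return None


governor = AdvisoryGovernor(constraints=[near_protected_site])
warnings = governor.review({"distance_to_protected_site_m": 120})
# The interface would surface `warnings` and force an explicit operator decision.
```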

The Essence of an Ethical Design for UCAV Systems Interfaces

To avoid the problems mentioned above, Cummings (2006) argues in favor of the design methods of value sensitive design, which consider the impact of various design proposals on a set of values. The relevant values in play in designing UCAV systems interfaces are reducing civilian deaths and war crimes, reducing psychological stress on remote operators, meeting the criteria of discrimination and proportionality, moral reflexivity, and responsibility. The idea is that design proposals are evaluated against this set of values. The problem with this conceptualization of what constitutes an ethical design is that it only demonstrates that one design is better than another according to a given set of values; it gives no indication of how to arrive at an actual ethical design for a UCAV systems interface.
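A minimal sketch of this comparative step is given below, using the values listed above and invented ratings for two hypothetical interface proposals. It only orders the proposals it is handed, which is exactly the limitation just noted: it does not generate an ethical design.

```python
# Minimal sketch of the comparative step: score given design proposals
# against a fixed value set and rank them. Ratings are invented.

VALUES = ["reduce_civilian_deaths", "reduce_operator_stress",
          "discrimination_and_proportionality", "moral_reflexivity", "responsibility"]

# Hypothetical expert ratings (0-5) of two interface proposals per value.
proposals = {
    "interface_A": {"reduce_civilian_deaths": 4, "reduce_operator_stress": 2,
                    "discrimination_and_proportionality": 4, "moral_reflexivity": 3,
                    "responsibility": 4},
    "interface_B": {"reduce_civilian_deaths": 3, "reduce_operator_stress": 5,
                    "discrimination_and_proportionality": 3, "moral_reflexivity": 2,
                    "responsibility": 3},
}


def total_score(ratings: dict) -> int:
    # Unweighted sum; any weighting scheme would itself be a value judgment.
    return sum(ratings[v] for v in VALUES)


ranking = sorted(proposals, key=lambda name: total_score(proposals[name]), reverse=True)
print(ranking)  # e.g. ['interface_A', 'interface_B'] under these made-up ratings
```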

Essential for an ethical design of a user interface is an understanding of the ethical and psychological problems human operators face, not just in theory but empirically. Much research is needed to explore the cognitive and psychological processes human operators employ to make ethical decisions. It is also necessary to investigate what kind of information is useful and relevant, how information should be represented, and how much information can be dealt with, so that design systems can improve the ethical decision-making of human operators. In other words, display characteristics, interaction mechanisms, and control limitations have a huge influence on the ethical decision process. The design of a user interface should make transparent how, from whom, and when information was obtained and how reliable it is, enabling a human operator to make a responsible ethical decision based on moral reflection. It might be required to impose high levels of psychological stress on human operators in order to improve their ethical decision-making. According to Asaro (2009), it will be valuable to study these kinds of trade-offs through an effort "to model the moral user," by designing systems that improve the ethical decision-making of the human operators. By developing a sophisticated empirical model of human operators, we can better understand, for example, the impact of psychological stresses.
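As an illustration of the transparency requirement, the sketch below attaches to each piece of information its source, the method by which it was obtained, the time of acquisition, and an estimated reliability. The data structure and field names are our own assumptions; no claim is made about any existing ground control interface.

```python
# Sketch of provenance metadata an interface could surface for every piece
# of targeting information. Field names are illustrative only.

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass(frozen=True)
class InformationItem:
    content: str            # what is being claimed, e.g. "two armed persons at grid X"
    source: str             # from whom it was obtained (sensor, unit, informant)
    method: str             # how it was obtained (EO video, SIGINT, human report, ...)
    obtained_at: datetime   # when it was obtained
    reliability: float      # estimated reliability in [0, 1], itself uncertain

    def age_seconds(self, now: Optional[datetime] = None) -> float:
        now = now or datetime.now(timezone.utc)
        return (now - self.obtained_at).total_seconds()


item = InformationItem(
    content="vehicle matches description of suspect convoy",
    source="UAV-EO-camera-3",
    method="electro-optical video, analyst-confirmed",
    obtained_at=datetime(2014, 1, 1, 12, 0, tzinfo=timezone.utc),
    reliability=0.7,
)
# A display that shows source, method, age, and reliability side by side lets
# the operator weigh the information rather than merely accept it.
```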

Conclusions and Outlook

This chapter has examined the role of VSD applications in the military domain. Focusing on the innovative military system concepts of NLWs and military robots, it has assessed the values these systems are intended to embody. Due to the nature of the military context, various mechanisms are at work that tend to preempt the values these weapons are designed to yield.

Nonlethality as the defining value of NLWs comes under pressure when they are deployed in real-world events. In many situations, the operational context tends to narrow down the weapons' nonlethal design window. Key factors in this operational context reside in the domain of target behavior, in the domain of the user of the NLW, and in the physical environment. Many of those factors are part of the so-called fog of war, implying that much in the operational context of NLW deployment is unforeseeable, which reduces the feasibility of controlling the scenario in such a way that the NLW produces the desired effect. The "fog of war" is intimately linked to the noncooperative nature of conflict environments and to the limitations on acquiring sufficient and reliable information on the factors and actors shaping this environment. In turn, this noncooperativeness generates a conflict of values that puts the military forces' responsibility for self-protection in many scenarios at odds with the requirement that the forces prevent innocent casualties. This "us-or-them" dilemma largely defines the extent to which the value of nonlethality embedded in NLWs is brought to bear. Rather than being an attribute of the NLW system itself, nonlethality is an outcome determined by its operator, who is potentially capable of overruling the weapon's design value of nonlethality. As a consequence, situations occur in which the deployment and use of NLWs produce an escalating cycle of violence that increases, rather than reduces, the risk of innocent casualties; the weapons then become counterproductive.13

An ethical design of a user interface for a human operator remotely controlling UCAVs should strike a proper balance between emotional and moral attachment and detachment. This requires an ethical design of the computer systems used by human operators to make life-and-death decisions that does not remove the moral-psychological barriers to killing. On the one hand, such systems should convey the moral reality of the consequences of the operators' decisions; on the other hand, they should temper the strong emotions human operators feel, in order to limit the number of war crimes. Developing such systems is a real challenge, but they are necessary to solve the problem of the attribution of responsibility that fighting a just war requires.

As we have argued, there can be no value sensitive design for autonomous military robots, since the value of moral reflexivity is a necessary condition for ethical decision-making, and this capacity cannot (at least in the near future) be delegated to nonhumans such as robots. We therefore favor Asaro's (2009) proposal to use technology to further improve the ethical decision-making of human operators, rather than to improve robot performance in decision-making.

Where does the above leave the concept of VSD when it is focused on the value of preventing innocent casualties in the military domain? Clearly, the noncooperative nature of the domain calls for a wider approach than VSD applied in a relatively benign, cooperative context. However complex it may be, the search for, and VSD-based analysis and design of, NLW technologies and concepts should be complemented by value sensitive scenarios and human behavioral models, in order to arrive at well-balanced and realistic designs and associated applicability assessments.

Footnotes

  1.

    In hindsight, however, the follow-up to the First Gulf War in Iraq between 2003 and 2011 was much more lethal, as the nature of the conflict had become irregular and asymmetrical, thus marginalizing the role of PGMs.

  2.

    The term nonlethal weapon already appeared in writings on colonial policing during the 1930s (Gwynn 1934, pp. 32–33).

  3.

    NATO: NATO Policy on Non-Lethal Weapons, NATO, Brussels (13 Oct 1999).

  4.

    The expression "fog of war" was coined early in the nineteenth century by the Prussian general Carl von Clausewitz [1831] (1984) in his famous work "Vom Kriege" ("On War"). Its relevance for NLWs has been addressed in Orbons (2010).

  5.

    In November 2011, protestors in Cairo were killed by asphyxiation caused by particular types of tear gas, while others were blinded or otherwise injured by rubber bullets intentionally fired at the head and neck (Human Rights Watch 2012).

  6.

    Former Senator John Glenn once coined the term "Dover Test": whether the public still supports a war is measured by its response to returning body bags. He called it the "Dover Test" because the coffins of killed American soldiers arrived from abroad at the air base in Dover, Delaware.

  7.

    In fact, the USA expects to operate autonomous robots by 2035 (US Department of Defense 2009), while South Korea already has stationary autonomous robots, armed with a derivative of the FN Minimi (a light machine gun capable of fully automatic fire), guarding the border with North Korea.

  8.

    For the impact on situational awareness, we refer to Riley et al. (2010).

  9.

    Although learning armed military robots appear high on the US military agenda (Sharkey 2008), the deployment of these robots is, at least under present and near-term conditions, not realistic within the next two decades (Arkin 2009). Barring a major breakthrough in artificial intelligence research, situational awareness cannot be incorporated in software for lethal military robots (Gulam and Lee 2006; Fitzsimonds and Mahnken 2007; Kenyon 2006; Sharkey 2008; Sparrow 2007).

  10.

    Schulzke (2013) has argued that it is possible to attribute responsibility to autonomous robots by addressing it within the context of the military chain of command.

  11.

    Johnson and Axinn (2013) have countered Arkin's statement and argued that robots without emotions do not have the attitude toward people that "healthy" humans are expected to have, and that well-trained humans with healthy emotions are therefore more desirable than autonomous robots.

  12.

    See Van de Poel and Royakkers (2011). If ethical theories do not provide moral principles that can be straightforwardly applied to get the right answer, what then is their role, if any, in applied ethics? Their role is, first, instrumental in discovering the ethical aspects of a problem or situation. Different ethical theories stress different aspects of a situation: consequentialism, for example, draws attention to how the consequences of actions may be morally relevant; deontological theories might draw attention to the moral importance of promises, rights, and obligations; and virtue ethics may remind us that certain character traits can be morally relevant. Ethical theories also suggest certain arguments or reasons that can play a role in moral judgments.

  13.

    Wright (2006, pp. 190–191), for instance, found that during the Troubles strong correlations existed between events involving baton round use and the occurrence of violence and insurgency activity against British Army personnel soon after such events.

References

  1. Allgood M (2009) The end of US military detainee operations at Abu Ghraib. Master thesis, University of Florida, Orlando
  2. Amnesty International (2004) United States of America: excessive and lethal force? (Report AMR 51/139/2004). Amnesty International, London
  3. Anderson M, Anderson S (2007) Machine ethics: creating an ethical intelligent agent. AI Mag 28(4):15–26
  4. Arkin R (2009) Governing lethal behaviour in autonomous robots. Chapman & Hall/CRC, London
  5. Arnesen O, Rahimi R (2007) Military non-lethal solutions for medium to long ranges. Paper presented at the 4th European symposium on non-lethal weapons, Ettlingen, 21–23 May
  6. Asaro P (2007) Robots and responsibility from a legal perspective. In: Proceedings of the 8th IEEE 2007 international conference on robotics and automation, workshop on roboethics, Rome, 14 April 2007
  7. Asaro P (2009) Modeling the moral user. IEEE Technol Soc 28(1):20–24
  8. Bandura A (1999) Moral disengagement in the perpetration of inhumanities. Pers Soc Psychol Rev 3(3):193–209
  9. Burrows C (2002) Operationalizing non-lethality: a Northern Ireland perspective. In: Lewer N (ed) The future of non-lethal weapons – technologies, operations, ethics and law. Frank Cass, London, pp 99–111
  10. Pat Finucane Centre (1996) In the line of fire – Derry July 1996. Pat Finucane Centre, Derry Londonderry
  11. Coker C (2001) Humane warfare. Routledge, London
  12. Cummings M (2004) Creating moral buffers in weapon control interface design. IEEE Technol Soc Mag 23:28–33
  13. Cummings M (2006) Automation and accountability in decision support system interface design. J Technol Stud 32(1):23–31
  14. Detert J, Treviño L, Sweitzer V (2008) Moral disengagement in ethical decision making: a study of antecedents and outcomes. J Appl Psychol 93(2):374–391
  15. Donnelly S (2005) Long-distance warriors. Time Magazine, 4 Dec
  16. Fieser J, Dowden B (2007) Just war theory. The internet encyclopedia of philosophy. http://www.iep.utm.edu/j/justwar.htm
  17. Fitzsimonds J, Mahnken T (2007) Military officer attitudes towards UAV adoption: exploring institutional impediments to innovation. Jt Force Q 46:96–103
  18. Freedman L (1998) The revolution in military affairs. Oxford University Press, London
  19. Gompert D, Johnson S, Libicki M et al (2009) Underkill – scalable capabilities for military operations amid populations. RAND Corporation, Arlington
  20. Gulam H, Lee S (2006) Uninhabited combat aerial vehicles and the law of armed conflicts. Aust Army J 3(2):123–136
  21. Gwynn C (1934) Imperial policing. MacMillan, London
  22. Hellström T (2013) On the moral responsibility of military robots. Ethics Inf Technol 15(2):99–107
  23. Human Rights Watch (2012) Egypt: protesters' blood on the military leadership's hands. http://www.hrw.org/news/2011/11/22/egypt-protesters-blood-military-leadership-s-hands. Accessed 12 Feb 2012
  24. Hussey J, Berry R (2008) When the earth shakes! Detainee disturbances in an internment facility. Mil Police 19(1):9–12
  25. Iraq Coalition Casualty Count (2008) Deaths caused by IEDs and U.S. deaths by month. http://icasualties.org/oif/IED.aspx. Accessed 12 Feb 2012
  26. Isenberg D (2007) Robots replace trigger fingers in Iraq. Asia Times Online. http://www.atimes.com/atimes/Middle_East/IH29Ak01.html
  27. Johnson A, Axinn S (2013) The morality of autonomous robots. J Mil Ethics 12(2):129–141
  28. Kaag J, Kaufman W (2009) Military frameworks: technological know-how and the legitimization of warfare. Camb Rev Int Aff 22(4):585–606
  29. Kaplan R (2006) Hunting the Taliban in Las Vegas. Atlantic Monthly, 4 Aug
  30. Kenyon H (2006) Israel deploys robot guardians. Signal 60(7):41–44
  31. Krishnan A (2009) Killer robots: legality and ethicality of autonomous weapons. Ashgate, Farnham
  32. Latham A (1999) Re-imagining warfare: the "revolution in military affairs". In: Snyder C (ed) Contemporary security and strategy. MacMillan, London
  33. Matthias A (2004) The responsibility gap: ascribing responsibility for the actions of learning automata. Ethics Inf Technol 6:175–183
  34. Mayer C (2007) Non-lethal weapons and non-combatant immunity: is it permissible to target non-combatants? J Mil Ethics 6(3):221–231
  35. McNab R, Scott R (2009) Non-lethal weapons and the long tail of warfare. Small Wars Insurg 20(1):141–159
  36. Millar R, Rutherford W, Johnston S et al (1975) Injuries caused by rubber bullets: a report on 90 patients. Br J Surg 62:480–486
  37. Morris J (1992) Non-lethality: a global strategy (white paper). US Global Strategy Council, Washington, DC
  38. Murphy M, Merrit J, Mason P et al (2003) Bioeffects research in support of the active denial system (ADS): a novel directed energy non-lethal weapon. Paper presented at the European Working Group NLW 2nd symposium on non-lethal weapons, Ettlingen, 13–14 May
  39. National Research Council (2005) Autonomous vehicles in support of naval operations. The National Academies Press, Washington, DC
  40. Northern Ireland Office (2006) Patten report recommendations 69 and 70 relating to public order equipment: a research programme into alternative policing approaches towards the management of conflict. Fifth report prepared by the UK Steering Group led by the Northern Ireland Office, in consultation with the Association of Chief Police Officers. Northern Ireland Office, Belfast
  41. Orbons S (2010) Do non-lethal weapons license to silence? J Mil Ethics 9(1):78–99
  42. Orbons S (2012) Non-lethality in reality: a defence technology assessment of its political and military potential. PhD thesis, University of Amsterdam. http://dare.uva.nl/record/436342
  43. Orbons S, Royakkers L (2014) Non-lethal weapons: striking experiences in a non-cooperative environment. Int J Technoethics 5(1):15–27
  44. Riley J, Strater L, Chappell S et al (2010) Situational awareness in human-robot interaction: challenges and user interface requirements. In: Jentsch F, Barnes M (eds) Human-robot interaction in future military operations. Ashgate, Burlington
  45. Royakkers L, Van Est Q (2010) The cubicle warrior: the marionette of digitalized warfare. Ethics Inf Technol 12:289–296
  46. Schulzke M (2013) Autonomous weapons and distributed responsibility. Philos Technol 26(2):203–219
  47. Sharkey N (2008) Cassandra or false prophet of doom: AI robots and war. IEEE Intell Syst 23(4):14–17
  48. Sharkey N (2010) Saying "no!" to lethal autonomous targeting. J Mil Ethics 9(4):369–383
  49. Singer P (2009) Wired for war: the robotics revolution and conflict in the twenty-first century. Penguin, New York
  50. Smith R (2006) The utility of force – the art of war in the modern world. Penguin, London
  51. Sparrow R (2007) Killer robots. J Appl Philos 24(1):62–77
  52. Sparrow R (2009) Building a better warbot: ethical issues in the design of unmanned systems for military applications. Sci Eng Ethics 15(2):169–187
  53. Sparrow R (2011) Robotic weapons and the future of war. In: Tripodi P, Wolfendale J (eds) New wars and new soldiers: military ethics in the contemporary world. Ashgate, Farnham, pp 117–133
  54. Strawser B (2010) Moral predators: the duty to employ uninhabited aerial vehicles. J Mil Ethics 9(4):342–368
  55. Sullins J (2010) RoboWarfare: can robots be more ethical than humans on the battlefield? Ethics Inf Technol 12(3):263–275
  56. Tanielian T, Jaycox L (eds) (2008) Invisible wounds of war: psychological and cognitive injuries, their consequences, and services to assist recovery. RAND Corporation, Santa Monica
  57. Toffler A, Toffler H (1994) War and anti-war – survival at the dawn of the 21st century. Warner, London
  58. UK Steering Group (2004) Patten report recommendations 69 and 70 relating to public order equipment: a research report into alternative policing approaches towards the management of conflict. Fourth report, Northern Ireland Office, Belfast
  59. UK Steering Group (2006) Patten report recommendations 69 and 70 relating to public order equipment: a research programme into alternative policing approaches towards the management of conflict. Fifth report, Northern Ireland Office, Belfast
  60. US Department of Defense (2009) Unmanned systems roadmap: 2009–2034 (OMB No. 0704–0188). Department of Defense, Washington, DC
  61. United States Air Force (2009) Unmanned aircraft systems flight plan 2009–2047. Headquarters, United States Air Force, Washington, DC
  62. US Army Surgeon General's Office (2006) Mental health advisory team (MHAT) IV: Operation Iraqi Freedom 05–07, 17 Nov 2006. www.globalpolicy.org/security/issues/iraq/attack/consequences/2006/1117mhatreport.pdf
  63. Van de Poel I, Royakkers L (2011) Ethics engineering and technology. Blackwell, Oxford
  64. Veruggio G, Operto F (2008) Roboethics: social and ethical implications of robotics. In: Siciliano B, Khatib O (eds) Springer handbook of robotics. Springer, Berlin, pp 1499–1524
  65. Vilke G, Chan T (2007) Less lethal technology: medical issues. Polic Int J Police Strateg Manage 30(3):341–357
  66. Von Clausewitz C ([1831] 1984) On war. Edited and translated by Howard M, Paret P. Princeton University Press, Princeton
  67. Weir S (1983) No weapon which deters rioters is free from risk. New Soc:83–86
  68. Wickens C, Levinthal B, Rice S (2010) Imperfect reliability in unmanned air vehicle supervision and control. In: Barnes M, Jentsch F (eds) Human-robot interaction in future military operations. Ashgate, Burlington
  69. Wright S (2006) A system approach to analysing sub-state conflicts. Kybernetes 35(1/2):182–194

Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

  1. Technical University of Eindhoven, Eindhoven, Netherlands
  2. Nederlandse Defensie Academie, The Hague, Netherlands
