1 Introduction

Unmanned combat aerial vehicles, also known as “combat drones,” combine surveillance technology with missile equipment. The idea behind the deployment of military drones is to strike precisely and eliminate combatants and (potential) terrorists in a supposedly “surgical” way. With the advances in the field of artificial intelligence (AI), combat drones, like other military robots, are becoming more and more autonomous. Today, only remotely controlled and partially automated drones are deployed. Fully autonomous combat drones are still a long way off. But there is an ongoing discussion about the next steps of autonomization, including decisions about life and death.

Ronald Arkin’s book Governing Lethal Behavior in Autonomous Robots is still the most advanced attempt at developing an algorithmic architecture designed to train weapons such as drones to “act” according to the Laws of War and the Rules of Engagement (Arkin 2009). Among the most astonishing of Arkin’s ideas is that his system includes the “application of an artificial affective function (e.g. guilt, remorse, or grief),” an algorithmically based functional analogue of the emotional reaction to moral misbehavior that we know from humans (Arkin 2009, 66). The robot does not actually feel, as commentators on Arkin’s approach make clear, but “the idea is that some of the functional role of guilt can be mimicked in the robot […]. The idea is that determining which options are available and providing an initial assessment is all done in an emotionless manner, and if the results of the process run afoul of the guilt censor, so called, then the option is rejected.” (Guarini & Bello 2012, 135). In another context, Arkin even speaks of an “artificial conscience” that would be able to govern the robot ethically (Arkin and Moshkina 2007).

The deliberative and reactive robot architecture Arkin proposes has four “ethical” components: the Ethical Governor, a “deliberative bottleneck” (Arkin 2009, 125) that transforms or suppresses lethal action if it is not permissible (Arkin 2009, 127–133); the Ethical Behavioral Control, which constrains all behavior so that lethal responses are produced only within “accepted ethical boundaries” (Arkin 2009, 133–138); the Ethical Adapter, an architectural component that assesses the deployment in an “after-action reflective critical review of the system’s performance” (Arkin 2009, 138–143); and finally the Responsibility Adviser, a human-robot interaction component that identifies the responsibilities for an action, from the commander to the programmer to the operator through to the robot itself (Arkin 2009, 143–153). “The [artificial] agents are thus capable of learning from others’ mistakes, a useful trait, not always seen in humans,” states Arkin bluntly (Arkin 2009, 140). He is obviously driven by a humanitarian agenda when he writes: “It is the goal of this work to create systems that can perform more ethically than human soldiers do in the battlefield, albeit they will still be imperfect” (Arkin 2009, 39). And he is convinced: “This challenge seems achievable” (ibid.). In his view, robots could be more “humane” than humans in wars (Arkin and Moshkina 2007). In accordance with this line of thought, Arkin makes a case for a “moral imperative” to deploy warfighting robots that are more ethical on the battlefield than human soldiers (Arkin 2009, 40). Arkin’s book is fairly symptomatic of discussions on future encounters with intelligent systems endowed with “ethical algorithms”: the debate tends to focus on the question of whether the robots are really able to act autonomously and, hence, ethically. But even if Arkin offers an outlook on an imagined “meaningful bond between man and machine” (Arkin 2009, 44), it is somehow typical of the debate that the (future) cooperation between human beings and intelligent machines as such is taken for granted. However, it is important to see that if we implement agentive features in robotic systems, we also need to reflect on the potential changes in human self-understanding, agency, and alterity. This is because the increasing autonomy of robots does not only have an impact on the robots’ status as moral agents. It also has implications for the relation between human and robotic actors in complex decision-making processes and joint actions—in particular when we talk about acts of war.
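To make the division of labor among these components more tangible, the following minimal sketch (in Python; the class names, the numeric “guilt” value, and the threshold are my own illustrative assumptions, not Arkin’s implementation) shows the basic idea of a deliberative veto stage combined with a functional analogue of guilt that withholds lethal options once a threshold is exceeded:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProposedAction:
    description: str
    lethal: bool
    constraint_violations: List[str] = field(default_factory=list)

class EthicalGovernor:
    """Deliberative veto stage: suppresses lethal actions that violate given constraints."""
    def permits(self, action: ProposedAction) -> bool:
        return not (action.lethal and action.constraint_violations)

class EthicalAdapter:
    """After-action component: a purely functional 'guilt' value that tightens behavior."""
    def __init__(self, guilt_threshold: float = 1.0):
        self.guilt = 0.0
        self.guilt_threshold = guilt_threshold

    def record_outcome(self, assessed_unintended_harm: float) -> None:
        # Assessed unintended harm raises the internal "guilt" value after deployment.
        self.guilt += assessed_unintended_harm

    def lethality_allowed(self) -> bool:
        # Once the accumulated "guilt" exceeds the threshold, lethal options are withheld.
        return self.guilt < self.guilt_threshold

def select_action(action: ProposedAction,
                  governor: EthicalGovernor,
                  adapter: EthicalAdapter) -> str:
    """Conceptual pipeline: the machine 'feels' nothing, it only filters options."""
    if action.lethal and not adapter.lethality_allowed():
        return "suppressed by guilt analogue"
    if not governor.permits(action):
        return "suppressed by ethical governor"
    return "released to behavioral control"

if __name__ == "__main__":
    governor, adapter = EthicalGovernor(), EthicalAdapter()
    strike = ProposedAction("engage identified target", lethal=True,
                            constraint_violations=["proximity to protected site"])
    print(select_action(strike, governor, adapter))  # -> suppressed by ethical governor
```

The point of the sketch is precisely what Guarini and Bello emphasize: the “guilt” here is nothing but a number compared against a threshold, which is why the question of what such functional analogues do to human-machine cooperation, rather than to the machine’s inner life, becomes central.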

We can see the importance of reflecting on new forms of agency and questions of alterity when we look at the “Argument from Human Agency” that Alex Leveringhaus has recently put forward (Leveringhaus 2016a, 89–117). Leveringhaus states that there is a fundamental, morally relevant difference between human beings who kill other human beings and machines that kill human beings. Based on this difference, he makes the case against autonomous weapons. As right as he may be in his critical assessment of autonomous weapons, the Argument from Human Agency becomes problematic when we question the underlying and (implicitly) presupposed separability of human agencies from machinic agencies. According to approaches that understand agency as a fluid competence that can be distributed and shared by humans and non-humans, it is difficult to identify an isolated human agency (cf. chapter 4.3.). One of the major tasks in the future will be to describe and understand the hybrid agencies in drone warfare in order to develop an ethical framework.

In his book Wired for War, Peter W. Singer quotes a laconic comment by the journalist Noah Shachtman that captures the discomfort we attribute to “intelligent” and “autonomous” weapon systems: “Machines can filter down what we need to see. Instead of us telling machines where to go, it is increasingly machines telling us.” (Singer 2009, 109). Even though we may still say that we humans make the final decisions, we can identify agencies distributed among human beings and artificially intelligent systems and new roles of machines as “others” in morally sensitive decision-making processes—especially in systems where humans are dependent on the machines’ so-called learning algorithms. These changes in human self-understanding, agency, and alterity need to be reflected upon as one of the big challenges of future warfare. It is not about machines taking command but about a slow transformation process that generates hybrid agencies challenging our self-understanding—because we are not only dependent on machines but also encounter ourselves in the light of “machinic others.”

With this paper, I aim to contribute to the debate by offering a philosophical framework that helps to capture the intricacies of human-robot interaction in general and the respective hybrid agencies in particular. I will draw on the idea of a “relational ontology” developed in postphenomenology in order to describe and critically reflect on the complexities of autonomous weapon systems. Following this relational approach, I will focus on the changes in human self-understanding, agency, and alterity, with a special view on drone warfare. No technology raises entirely new questions, but I think we ought to treat drone technology as a kind of “game changer,” even if the newness is more evolutionary than revolutionary (cf. Enemark 2014, 3). Heraclitus’ statement that war is the father of all things has been quoted as often as it has been interpreted in various ways. One interpretation of this dictum could be the following: war brings the truth of technology to light through the use of new technologies, because it shows us in advance what is technically possible and what, only years later, will also shape our civilian world. Therefore, we must learn to read and interpret combat drones today in order to understand how advanced artificial intelligence could change our world—and ourselves.

In the following, I will firstly give some introductory remarks on combat drone technology as such, referring in particular to Chamayou’s theory of the drone (2.). Secondly, I will draw on the relational ontology of the postphenomenological approach in the philosophy of technology in order to systematize the dimensions of human self-understanding, agency, and alterity in a four-level model of human-technology relations (3.). Thirdly, I shall apply this four-level model to drone warfare in order to structure the dimensions we have to take into account when we want to understand how human-technology interactions are shaped (4.). In doing so, I do not intend to offer an exhaustive relational ontology of combat drones, but I am convinced that this framework is able to capture some of the core dimensions and their multi-layered correlation. Finally, I will add some closing remarks to this inquiry (5.).

2 What Combat Drones Are and Why These Weapon Systems Are of Philosophical Interest

“Drone” or “combat drone” is the short version of “unmanned combat aerial vehicle” (UCAV) or “unmanned aerial vehicle” (UAV), sometimes also called “uninhabited combat aerial vehicle,” presumably because “unmanned” may also have an unpleasant Freudian connotation. Speaking of denotations, it is noteworthy that the drones deployed by the US military (cf. Enemark 2014, 17) have quite telling names: one current system is called “MQ-9 Reaper,” which evokes rather medieval associations. Another one is known as “MQ-1 Predator”—a name that sounds even more archaic. There are drones that are equipped only with surveillance technology, but “Reaper” and “Predator” are armed with missiles; the best known of these is called, unsurprisingly in view of the aforementioned naming, the “Hellfire” missile. These lethal weapon systems can be navigated and controlled by an operator who sits thousands of miles away in an operation room, supported by various technology and intelligence experts; so-called decapitation or counter-insurgency strikes, coordinated by the CIA, seem to be the main purpose of the deployment of these “hunter killers” in the “war on terror” (cf. Wall and Monahan 2011, 242–245; Sharkey 2012, 114; Enemark 2014, 30–31). Once a legitimate target is identified, the operator is able to kill it by “remote control.” In Grégoire Chamayou’s words, the history of the drones “is that of an eye turned into a weapon” (Chamayou 2015, 11).

If it is still the operator who presses the button, we speak of systems in which humans are “in the loop” (Enemark 2014, 100). However, there is a current debate about whether drones should actually behave more autonomously (e.g., Sparrow 2007; Arkin 2009; Strawser 2013, 211–245; Enemark 2014, 97–111). This implies that the system is not only trained by machine learning algorithms to better recognize potential targets but is also programmed to analyze the moral and legal legitimacy of a lethal attack on the target. Systems that are supposed to operate without human control are called “out-of-the-loop systems” and are feasible only in a distant future, if at all. But what we can expect to come are systems that integrate not only machine intelligence and human intelligence but also machine autonomy and human autonomy in an intricate constellation of co-acting and co-deciding. One of the most important challenges is these new types of human-technology interactions in which moral decision-making processes are distributed among the human and the machinic “parts” of the interface. Even if human beings stay “in the loop,” they have to rely more and more on the algorithms that are used for pattern recognition in order to identify suspects. This will be a challenge for our concept of responsibility and leads to hybrid forms of agency and autonomy. This topic will be discussed later against the background of new hybrid types of systems that are sometimes called “human-on-the-loop systems” (Sharkey 2012, 115sq; Enemark 2014, 100; cf. chapter 4.2.).
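The difference between these loop configurations can be stated schematically. The following sketch (Python; a purely conceptual illustration with invented function and parameter names, not a description of any fielded system) shows where human judgment enters the decision chain in each mode, and how the “on-the-loop” configuration quietly turns human inaction into consent:

```python
from enum import Enum, auto

class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = auto()      # a human must actively authorize each engagement
    HUMAN_ON_THE_LOOP = auto()      # the system proceeds unless a human vetoes in time
    HUMAN_OUT_OF_THE_LOOP = auto()  # the system acts on delegated goals without human input

def engagement_released(mode: ControlMode,
                        human_authorized: bool,
                        human_vetoed: bool) -> bool:
    """Conceptual gate: at which point does human judgment enter the decision chain?"""
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        return human_authorized      # nothing happens without explicit human consent
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        return not human_vetoed      # inaction within the veto window counts as consent
    return True                      # out of the loop: no human gate at all

# The same machine recommendation is handled very differently in each mode:
print(engagement_released(ControlMode.HUMAN_IN_THE_LOOP, human_authorized=False, human_vetoed=False))  # False
print(engagement_released(ControlMode.HUMAN_ON_THE_LOOP, human_authorized=False, human_vetoed=False))  # True
```

The shift from the first to the second return statement is, in a nutshell, what makes on-the-loop systems philosophically interesting: responsibility migrates from an explicit act of authorization to the mere absence of an objection.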

But how can we approach these socio-technical configurations from a philosophical perspective in the first place? By now, the philosophically most important book in this context is surely A Theory of the Drone by the above-mentioned Grégoire Chamayou, published in French in 2013. I agree with Chamayou, who states that we do not only have to reflect on the ends and purposes of war but also on the means as such. He writes: “Those means not only make it possible to take action but also determine the form of that action, and one must find out how they do so. Rather than wonder whether the ends justify the means, one must ask what the choice of those means, in itself, tends to impose. Rather than seek moral justifications for armed violence, one should favor a technical and political analysis of the weapons themselves.” (Chamayou 2015, 15). Following this idea, Chamayou’s book provides impressive and disturbing analyses of drone warfare. Building on Chamayou, I would like to deepen the philosophical reflection on drones by using the relational model of so-called postphenomenology, developed by Don Ihde (esp. Ihde 1990, cf. Selinger 2006) and adopted by philosophers such as Peter-Paul Verbeek and Mark Coeckelbergh (e.g., Verbeek 2015, Coeckelbergh 2011), with a strong focus on technoscience (cf. Friis and Crease 2015). In addition, one can draw on the debates on machines as moral agents and patients and other issues in “robot ethics” (e.g., Wallach and Allen 2009, Gunkel 2012, Decker and Gutmann 2012, Lin et al. 2017; Misselhorn 2018). And finally, one has to take into account the above-mentioned ongoing ethical debate on warfare with (semi-)autonomous weapon systems in particular (e.g., Strawser 2013; Leveringhaus 2016a).

3 Postphenomenology and Relational Ontology: a Four-Level Model of Human-Technology Relations

With a clear sense of irony, Don Ihde says that the title “postphenomenology” was inevitable: after all, he claims, we live in an age of “posts,” from postmodernism to the postindustrial age (Ihde 1993). Ihde has been working phenomenologically since the 1970s and discovered technology quite early as a phenomenological topic, documented in particular in the book Technics and Praxis (Ihde 1979). He consistently refers to Edmund Husserl, Martin Heidegger, and Maurice Merleau-Ponty. So why did he coin the term “postphenomenology,” given that one could say that Ihde is just a phenomenologist? One of the main reasons for his “postification” of phenomenology is that he claims to have discovered new ways to describe the human-world relation. We can understand intentionality (in the Husserlian sense), our being-in-the-world (following Heidegger’s important insight), and embodied and situated perception (in line with Merleau-Ponty) as technologically mediated in an essential sense.

Postphenomenology emanates from the premise that almost every human relation to the world is shaped by technology—take as an example the omnipresence of smartphones that modify our communication or medical technologies which affect our horizon of decision-making in an existential manner. We get “to the things themselves,” it could be stated pointedly, only through the “technologies themselves.” As a “modified, hybrid phenomenology,” postphenomenology “finds a way to probe and analyze the role of technologies in social, personal and cultural life that it undertakes by concrete – empirical studies of technology in the plural” (Ihde 2009, 23).

Like other approaches in the philosophy of technology, postphenomenology proclaims an “empirical turn” (cf. Achterhuis 2001). We must pay far more attention to our technologies than usual because our cognition and action are embedded in technological objects, devices, and machineries to such an extent that we must turn Husserl’s concept of intentionality into a kind of “technological intentionality” (Ihde 1990, 141, Verbeek 2001, 136sq). This empirical turn has three immediate implications. Firstly, it means that we must bring a host of objects into our philosophical reflection in order to grasp the complexity of our technological relationship with the world and with ourselves. Monolithic discussions of ‘technology’ in the singular, as we see, for instance, in the late Heidegger, must be abandoned as we face the plurality of technologies. Secondly, since human beings are continually engaged with the world, postphenomenologists make use of Heidegger’s notion of being-in-the-world, stating that there exist neither human beings in themselves nor a world-in-itself (Verbeek 2005, 110). And since this engagement is very often technologically mediated, a philosophy of technology can systematically examine basic human-world relations. These relations constitute an “(inter)relational ontology” (Ihde 2009, 44). We understand ourselves largely through the technologies with which we perceive and act in the world. And at the same time, there are no technologies “as such.” They “are” only in connection with the respective domains of human experience and subjectivity. We consequently cannot understand, for instance, what technologies in reproductive medicine “are” if we only know how they work. We do not understand these technologies until we describe how they modify life plans and biographies; how they can reconfigure gender models; how neocolonial business models flourish with the idea of a “surrogate motherhood”; and how ontologically delicate intermediate, limbo-like worlds could emerge from, for instance, the storage of “surplus” embryos, and so on. Accordingly, technologies are non-neutral: “Technologies change and transform situations, however minimally. There is no thing as a ‘mere use’ of technology.” (Ihde 1998, 47). Ihde occasionally speaks of a situation of “co-constitution,” addressing with this term the following ontological and epistemological condition: “Technologies transform our experience of the world and our perceptions and interpretations of our world, and we in turn become transformed in this process.” (Ihde 2009, 44). Thirdly, we must fundamentally rethink the subject-object relationship. Too often, it is taken for granted that “active” human subjects use “passive” technological objects in order to achieve certain goals. Postphenomenology aims at overcoming this subject-object dichotomy by claiming that we have to focus on the relations. It thus contends that technologies must also be regarded as agents in a certain sense (this clearly has a connection to Bruno Latour’s approach, cf. Verbeek 2005, 112). For instance, technology prompts us to do things (from the beeping washing machine to Artificial Intelligence in the context of new media), alters our perception (which can be experienced in surveillance apparatuses), and modifies our options for acting (as is the case with weapons). We need to conceive of human beings as already in interaction with, or even as insolubly entangled with, their machines.

Postphenomenology approaches the multiple layers of these relations in the tradition of “phenomenological variations.” One of Ihde’s core concepts is “multistability” (Ihde 2009, 12sq): depending on the perspective and context, technologies may appear before us in ever new and different manners, often revealing the cultural embeddedness of the perception and the use of technologies, which Ihde sometimes calls “cultural hermeneutics” (Ihde 1990, 124). Against this background, he also advances a “material hermeneutics” (still based on Heidegger’s hermeneutic phenomenology), because hermeneutics should no longer concern itself only with the interpretation of texts but also with the interpretation of technical artifacts, thus expanding the “interpretive activity” of human beings to a “thing interpretation” (Ihde 1998, 8).

Based on the well-known phenomenological premises that we are intentional and embodied beings characterized by a basic “being-in-the-world,” Ihde developed a phenomenology of technology by distinguishing various technologically mediated relations between human beings and technology. In Technology and the Lifeworld, we find the most elaborate differentiation between four main relations that characterize the human-technology entanglement (Ihde 1990, 72–123; also, e.g., Ihde 2009, 42–44), namely embodiment relations (we incorporate technologies and experience the world through our bodies); hermeneutic relations (technologies shape our interpretation of the world and of ourselves in a very basic sense); alterity relations (we encounter machines in a specific kind of “otherness”); and background relations (we usually live in a technological world and accept technical regimes without noticing or reflecting on them).

In the next section, I shall transfer Ihde’s four-level model of human-technology relations to the philosophical debate on drone warfare. My primary purpose is not simply to apply this method but rather to provide a systematic review of the postphenomenologically relevant literature that shows the complexities of human-technology interactions with a view on drone warfare. On this basis, I would like to demonstrate how we can describe core aspects of human self-understanding, alterity, and hybrid agencies that are crucial in the field of drone warfare (which are of course related to other fields of human-technology interaction). We will see that the four levels cannot be treated as sharply distinguished. Hermeneutic relations, for instance, also have an impact on alterity relations because the technology shapes the way we interpret others as others. Similarly, in the context of embodiment relations, questions of alterity arise too, because the “mechanized gaze” of the drone affects the self-understanding of an embodied person. Before I go into detail, I would finally like to address a potential concern about the use of this model: the quoted literature often already deals with postphenomenologically relevant aspects of human-technology interactions—without drawing on postphenomenology. However, I would like to underline that the postphenomenologically inspired four-level model presented here offers an “added value” because it systematizes the multi-dimensionality of human-technology interactions and their philosophical implications in an encompassing way that has so far been missing in the debate.

By reflecting on a current technology such as combat drones, we encounter another difficulty often associated with highly dynamic technological developments: the philosophical reflection on a novel technology cannot be limited to the status quo of the respective technology but should also address problems of future advancements. Especially in the particular case of autonomization processes, it is philosophically interesting to explore potential future grades of autonomy because we can then better comprehend present trends (which always point toward future technical solutions and the respective visions) and the inherent “logic” of technoscientific processes. To envision future improvements and applications of a technology does not mean that we inevitably enter the fuzzy realm of speculation or science fiction. On the contrary, reflecting on this “inner logic” and its future implications is crucial for a thorough understanding of a new technology. Though I mainly discuss the philosophical questions on the basis of the state of the art of combat drones in the following pages, I will also mention some possible upcoming challenges with respect to the (technologically envisioned and/or politically expedited) autonomy of these weapon systems.

4 The Four-Level Model of Human-Technology Relations and Drone Warfare

4.1 Embodiment Relations

We can grasp immediately that embodiment relations are crucial to understanding drone warfare technology. The drone operator is separated from his weapon in an extreme way. Acting far away from the battlefield, the operator faces no risk at all when using the weapon. Therefore, the term “disembodied warrior” was coined (cf. Enemark 2014, 85). We know that a core motivation in the development of arms has always been to achieve more distance from the enemy, be it the invention of bows, firearms, air forces, or the notorious “Vergeltungswaffe 2” (V-2 rocket) with all its seductive dreams of absolute power. Technological progress always leads to asymmetries between the combatants (Sharkey 2012, cf. Münker 2015), and a situation of constant asymmetry is obviously a feature of hybrid wars or “new wars” (cf. Kaldor 2013). Drone technology could be understood as the perfect materialization of the ultimate aim in this development: a highly lethal weapon that allows the soldier to operate from a safe distance. The experience of this new disembodiment in drone warfare is captured in an anecdote: when a journalist visited the Ground Control Station at Creech Air Force Base in Nevada, he was told: “Inside that trailer is Iraq, inside the other, Afghanistan.” (Gregory 2011, 192). The drone operators are engaged in Iraq or Afghanistan but are deployed in Nevada. The usual connection between soldier, weapon, and battlefield is dissolved to the greatest possible extent.

On the other side of the coin, however, we can observe a new quality of proximity. Due to the sophisticated surveillance technology, the drone operator is much closer to the target than with comparable weapons, and he or she experiences the immediate consequences of the attack more intensively than, for instance, in “classical” air raids. The drone operator is almost as close to the enemy as he or she would be in a “face-to-face” fighting situation—while being in fact thousands of miles away and completely safe. Therefore, the surveillance technology could even lead to a re-humanization of the targets or to an “empathic bridging” (Coeckelbergh 2013, 88). Accordingly, some authors make the case that drone operators are able to act more ethically because they are not exposed to the stress of the battlefield, where they could witness the death of a fellow soldier and might decide injudiciously (e.g., Strawser 2010, cf. Enemark 2014, 46). In contrast to this optimistic assessment of the inherent ethical potentials of the drone technology, it has been said that the new kind of visual proximity and the respective visuality regimes “produce a special kind of intimacy that consistently privileges the view of the hunter-killer, and whose implications are far more deadly” (Gregory 2011, 193). In any case, we can note with respect to drone warfare a “new combination of physical distance and ocular proximity” (Chamayou 2015, 117).

The phenomenon of disembodiment and distance from the battlefield also provokes debates on the question of whether drone operators should be considered “heroes” or “cowards”—despite the drone operators’ demanding jobs (Enemark 2014, 92)—and whether we have to talk here of post-heroic wars or of a “twilight of the heroes,” since the disembodied warrior is no longer engaged in a real contest with mutual risks (cf. Enemark 2014, 9sq, 91; Bröckling 2015). This also includes the question of whether riskless and therefore “unfair” warfare may provoke the enemies’ outrage, with unforeseen insurgent consequences (Enemark 2014, 32). Furthermore, Chamayou asks whether we can still call the drone operators “soldiers” in a strict sense—or rather “man hunters” or even “assassins” (Chamayou 2015, 114sq). The difference between “hunting” and “warfare” or between “targeted assassination” and “warfare” seems to be crucial not only to the ethical debate on drones and this particular form of violence but also to the self-understanding of the US leadership as being “on the hunt” (cf. Maurer 2016). This shifting of concepts also affects the evaluation of the drone operators’ psychological dispositions, in particular regarding new forms of “battle stress” disorder (Sharkey 2012, 114). The stress and the cognitive demands on drone operators are also described as a core feature of the emerging “new subjectivities” that we have to consider in the context of drone operations (cf. Asaro 2017). Some psychologists even propose to call the symptoms “perpetration-induced traumatic stress” instead of “post-traumatic stress disorder,” the diagnosis usually given in the military context (MacNair 2005). Intriguingly, then, the traumatic stress of a soldier on the battlefield and the stress of a drone operator acting at a distance from his or her enemy seem to differ; this classification, however, remains a task for psychologists and psychiatrists.

The potential changes in the drone operators’ self-understanding with respect to the omnipotence of their weapons are only one noteworthy aspect. Philosophically thrilling is also the question of how far the human mind can be extended. Since the paper by Andy Clark and David Chalmers, we speak of an “extended mind” that makes use of notebooks, tools, and devices (Clark and Chalmers 1998). In recent years, various conceptions of embodied, embedded, extended, and enacted cognition have been proposed (cf. Menary 2010). Against this background, we can ask: In which way is the drone “part” of the operator’s mind? Do drone operators merely act in the virtual world of the screens and joysticks—or do they experience an extension of their minds to these distant places? Does it make a difference whether one is just seeing pictures provided by the drone or whether one is navigating the machine and thus somehow “merging” the machinic body with one’s own senses and cognitive capacities? As far as I know, these kinds of questions have until now not been asked specifically in interviews with drone operators. But for future investigations of the drone operators’ identity, it could be highly interesting to find out how these extreme extensions affect their minds and their ways of perceiving, thinking, and deciding.

Often neglected in the debate on drones is that they not only endow their users with a quasi-omniscient view but at the same time make them more or less invisible to their enemies. In mythology, invisibility is a privilege of the gods (cf. for the following considerations Müller 2017, 267sq). And it is a well-loved fantasy, played out in many religious and literary myths and narratives, to imagine what it would be like if the divine ability to become invisible were converted into a human competence. Invisible men are attributed great power in these narratives, as is the case in Plato’s story of the ring of Gyges in the second book of the Republic (359a-360d): Gyges used the power of being invisible, seduced the queen, and killed the king, Candaules, becoming king himself (see in the context of drone warfare Maurer 2016; Enemark 2014, 112sq). Invisibility generates a withdrawal from the legal order that would hold criminals accountable. The fascination for this scenario survived through the centuries; H.G. Wells, for instance, adapted the narrative in his 1897 novel The Invisible Man (Wells 2017). These stories capture the correlation of invisibility, invulnerability, and super-human power. In the more prosaic language of a former drone operator quoted in the documentary Drone (2014) by Tonje Hessen Schei, the invisible warriors are the “ultimate voyeurs.”

In terms of embodiment, the drones have even more far-reaching consequences. If we take the potential “targets” into account, we can witness changes in embodiment relations as well. We know from reports from affected regions in Pakistan and Yemen that citizens have to live in the knowledge that they are permanently observed by invisible drones and that a drone strike could happen anytime (Bowden 2013, Stanford Law School and NYU School of Law 2012, Weber 2011). The citizens in these areas incorporate the “omniscient” view of the drone technology, integrating potential assaults into their daily life. One way to investigate the respective changes in their self-relation is by drawing on the implications of surveillance regimes that Michel Foucault addresses in Discipline and Punish. Foucault described the totalization of the surveillance logic by using the example of the panopticon, Bentham’s famous construction of a prison building that allows the guards to observe all inmates from one tower in the middle of the institution. The purpose of the panopticon is “to induce in the inmate a state of conscious and permanent visibility that assures the automatic functioning of power” (Foucault 1979, 201), since “[h]e who is subjected to a field of visibility, and who knows it, assumes responsibility for the constraints of power […]; he inscribes in himself the power relation in which he simultaneously plays both roles; he becomes the principle of his own subjection” (Foucault 1979, 202). In visual culture studies, Foucault’s analyses are discussed in the context of drone warfare by using the concept of “hypervisibility” (Maurer 2016, Gregory 2011, 190). In this context, a crucial difference from the panopticon has also been pointed out: drone surveillance technology is, as Kathrin Maurer puts it, “no longer organized via a centralized verticality (i.e., the panopticon’s watch tower), but rather a multi-faceted rhizomatic gaze that can constantly change its constellations” (Maurer 2016, 4). Against this background, the term “scopic regime” has been introduced in order to capture the “techno-culturally mediated ways of seeing, the concept is intended as a critical supplement to the idea of vision as a purely biological capacity” (Gregory 2011, 190).

Important in this context from a (post)phenomenological perspective is that these analyses of the power of visibility regimes can be linked to what was described in existential philosophy when visibility—in the sense of fundamentally being seen by others—was discovered as a core experience of human beings. Jean-Paul Sartre and Hans Blumenberg portrayed how our identity is to a certain extent shaped by the gazes of others and the respective dialectical recognition processes (Sartre 1993; Blumenberg 2006, cf. Müller 2017), meaning that we constitute ourselves as persons via the gazes of others. And exactly these very fundamental self- and identity-constituting dynamics change when the anonymous and “omniscient” gaze of the surveillance technology substitutes for the “other.” The crucial aspect regarding a humane “right to be invisible” is not only that the permanent observation of outward behavior is a violation of the private sphere. The point here is rather that the internalization of the visibility regime of the surveillance apparatus “mechanizes” and “colonizes” the embodied self, establishing a regime that has also been called “the drone stare” with its “cosmic control” (Wall and Monahan 2011, 246). As a result of these observations, we can say that the surveillance technology is perfidious because it manipulates the self-conception in accordance with the logic of the control regime. And this also pertains to the people who live under the permanent observation of drones trying to identify legitimate targets by means of so-called pattern-of-life analyses (Enemark 2014, 54). We can say that the “panoptical” gaze of the drone technology does not only translate bodies into “targets” (Wall & Monahan 2011, 250). It also shapes the minds of a whole population, in the sense that minds are always embodied and that the “drone stare” has effects even on the mere posture of citizens in the respective areas. This also has an impact on how they behave in a “public” that is manufactured by the presence of the drone technology. The US military, by the way, is itself alluding to prominent myths in this context: the acronym of its Autonomous Real-Time Ground Ubiquitous Surveillance Imaging System is ARGUS-IS, named after the hundred-eyed giant Argus, who was supposed to observe Zeus and his adulteries on behalf of Hera (cf. Gregory 2011, 194). And another system is called “Gorgon Stare” after the mythical Gorgon, who was able to turn anyone who looked at her into stone (cf. Maurer 2016, 3).

With respect to the potential autonomization of the drone technology, we can ask whether an autonomous system would exacerbate the surveillance situation. In this context, we have to reflect on the interface between the human controller and the computerized surveillance system. A thorough study of the ethical implications of manual, automated, and autonomous surveillance systems (not in the field of combat drones but still relevant in this context) makes the case for partial automation as the most efficient but also the most ethically justifiable option (Macnish 2012). Against the background of the assumption that we have to take hybrid agencies into account, we can state with respect to combat drones that the “eye” that has been turned into a weapon is a “hybrid eye,” half human and half machine. Fully autonomous surveillance systems in combat drones may never be deployed, but we will surely encounter these “hybrid eyes” more and more in the future. And both in Foucaultian and post-Foucaultian frameworks, we have to address the ontological and power-theoretical status of these “hybrid eyes.” Compared to the technologically mediated but still merely human gaze in the Foucaultian theory, does it make a difference when a human operator in alliance with a partially automated system is surveilling potential targets? The problem of the “hybrid eye” takes us to the next human-technology relation: since the surveillance technology generates data that are supposed to help identify potential targets, we enter the level of hermeneutic relations.

4.2 Hermeneutic Relations

The surveillance technology permanently provides high-resolution images of the world—technologically fabricated images that must be interpreted by the operator team embedded in a network of senior officers, intelligence analysts, military lawyers, and political actors (cf. Coeckelbergh 2013, 93). Therefore, the so-called kill-chain can be “thought of as a dispersed and distributed apparatus, a congeries of actors, objects, practices, discourses and affects, that entrains the people who are made part of it and constitutes them as particular kinds of subjects” (Gregory 2011, 196). This complex socio-technical constellation is epistemologically challenging when we ask how reliable knowledge can be generated at all on the basis of information processes embedded in a complex social network that makes life-and-death decisions on an (at least) questionable legal basis (cf. Stanford Law School and NYU School of Law 2012, 103–124).

According to reports on the work done in the operation rooms, the analyses and interpretations of the images are extremely exhausting and psychologically demanding (cf. Abé 2012; Enemark 2014, 92). Not only do the operators need to analyze whether the images show a terrorist or not, whether they display a combatant with a gun or a child with a toy, but the interpretation of these images also requires extensive knowledge of the background of the potential targets. The crucial point here is not just that deciding on life or death is a very responsible job. Beyond ethical questions, we need to analyze the entire socio-technical constellation that frames the interpretation of the data in order to understand the challenges for the hermeneutic relations. The surveillance technology itself forces the operators to think in certain ways, to apply a specific logic based on the above-mentioned techno-culturally mediated way of seeing (Gregory 2011, 190). Usually the drone operators follow their potential targets for several weeks or months in order to ascertain that the observed person is a combatant or a “terrorist” (whatever this exactly means). In this process, pattern-of-life analyses are involved, meaning that the operators have to interpret the pictures on their screens against the background of the question of whether a person is just doing his or her work and peacefully living his or her daily life or whether there are irregularities in the person’s behavior that indicate some suspicious activities. The drone technology is particularly challenging in terms of interpretation because it is ultimately a preventive technology. Potential “terrorists” should ideally be identified before they perpetrate an assault—which of course implies the problem of racial biases in the interpretation of the data (cf. Miller 2017).

This means that the hermeneutic relation implies aspects of alterity too. The interpretation of the respective person on the screen is always an “interpretation as.” Within the logic of drone technology, the other is not primarily a person but a potential target; the other is an image on a screen, an object that is permanently analyzed with regard to the question of whether or not it is a legitimate target that has to be eliminated. In addition, it is crucial in this context that the “human other” appears only as an image on the screen: the “real other” behind the image and the image itself overlap. Because of these experiences, some experts observe the risk of a certain “PlayStation mentality” in drone operation rooms (Alston 2010, 25). Of course, we can assume that responsible persons are doing their job here, but it is of phenomenological interest that operators experience other persons as a merger of a real person out in the world and a virtual, PlayStation-like target.

With the application of machine learning algorithms, the situation becomes even more intricate: the system itself then provides some sort of interpretation, for instance, when the pattern recognition technology indicates that the person on the screen could be a suspect. Accordingly, the drone operator has to interpret the machine’s interpretation. This leads to complex human-machine interactions and to changes in the interpretative activity. The human being has to rely on the technology in a joint decision-making process aiming at identifying and killing legitimate targets. Experts speak in this context of “on-the-loop systems” (Enemark 2014, 100; Sharkey 2012, 115sq; Leveringhaus 2016a, 3) when humans delegate and supervise the “decision-making” by the drone, or, in the words of Human Rights Watch, the idea here is that robots “can select targets and deliver force under the oversight of a human operator who can override the robots’ actions” (Human Rights Watch 2012, 2). Thus, “on-the-loop systems” are characterized by more autonomous behavior of the system’s “machinic parts” than “in-the-loop systems.” On the one hand, this means that it is unlikely that we will encounter either fully autonomous “out-of-the-loop systems,” in which humans can only intervene after time delays (cf. Enemark 2014, 101), or systems that consistently keep human beings “in the loop.” On the other hand, we can expect that on-the-loop systems will be the systems of the future. And these on-the-loop systems are particularly challenging in terms of hermeneutic relations because human and machinic decision-making processes will be entangled in an intricate way (cf. chapter 4.3.).
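This layering of interpretations can be illustrated with a small sketch (Python; the labels, the confidence value, and the threshold are hypothetical and chosen only to make the hermeneutic point): the operator does not see the world directly but a machine assessment of the world, and his or her judgment is exercised on that assessment:

```python
from dataclasses import dataclass

@dataclass
class MachineAssessment:
    label: str         # e.g. "suspicious pattern of life" (a machine-generated interpretation)
    confidence: float  # output of a hypothetical pattern-recognition model, between 0 and 1

def operator_judgment(assessment: MachineAssessment,
                      contextual_doubts: bool,
                      veto_window_expired: bool) -> str:
    """The operator interprets the machine's interpretation, not the scene itself."""
    if contextual_doubts:
        # Human background knowledge can override the statistical cue ...
        return "escalate for further observation"
    if assessment.confidence > 0.9 and veto_window_expired:
        # ... but in an on-the-loop setting, silence within the veto window releases the action.
        return "engagement released"
    return "continue surveillance"

assessment = MachineAssessment("suspicious pattern of life", confidence=0.93)
print(operator_judgment(assessment, contextual_doubts=True, veto_window_expired=True))
# -> escalate for further observation
```

Even in this toy example, the operator’s decision space is already pre-structured by the machine’s labels and thresholds, which is exactly the hermeneutic entanglement at stake here.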

But the interpretation of the pictures and the pattern-of-life analyses and their assessment are only one aspect of hermeneutic relations in drone warfare. Crucial for the identification of potential targets are the so-called “kill lists” and the respective databases, and it is very important to understand these databases as socio-technical artifacts intertwined with human decision-making processes in the production of targets (Weber 2016), which are also discussed against the background of “killing on the basis of meta-data” (Denker 2016, 130). And since we have to take into account “intelligent” and “learning” algorithms that support these human interpretation processes, we can say that the drone technologies involve epistemologically very intricate questions regarding the blend of (not always transparent and explicable) secret service information (cf. Alston 2010) with machine learning techniques and very complex algorithmic processing, accompanied by (sometimes fallible) human interpretations of the data and based on a (not always surveyable) division of labor (see for the complexity of the decision-making process also Denker 2016, 128–130). Based on the suggestive evidence of the data and their supposedly uncompromising exactness, these interactions of human intelligence and machine intelligence are often justified by the questionable ideal (and the problematic bio-medical metaphors) of a “surgical” and “clinical” warfare (cf. Gregory 2011, 188sq). But reports by NGOs show that the deployment of armed drones is not “clinical” at all because there is a large number of casualties, including children (cf. Hajjar 2017, 71sq), as is also extensively reported in the documentary Drone (2014).

Furthermore, one can pose the question of whether the rules of International Humanitarian Law, such as the discrimination between civilians and combatants, may be programmable at all, or whether we can ever find a solid computable definition of “civilian” (Sharkey 2012, 118). Pattern recognition technologies can and will be trained with machine learning algorithms in order to support the discrimination of targets. But aside from the fact that it will be increasingly difficult to identify “enemy soldiers” in the new asymmetrical wars, it remains ethically highly problematic to delegate the decision to kill a person to an autonomously “acting” machine. And regarding the principle of proportionality, it would be even more challenging to translate it into reliable algorithms. However, we can expect further research on Artificial Intelligence in the military in this respect in the next few years, given that there are some advanced attempts to ethically program autonomous robotic systems (cf. Arkin 2009, cf. chapter 4.3.). In order to understand the ethical implications of the drone technology, a thorough description of the techno-epistemic constellation is required, combining (post)phenomenological and hermeneutical traditions with Science and Technology Studies (cf. Coeckelbergh 2013). Quite convincingly, philosopher Mark Coeckelbergh states that the “victim’s informationalisation precedes and makes possible his extermination. Before he is physically killed, he is first morally-epistemically disarmed. Epistemologically speaking, he is already killed before the missile hits him. Being tagged as a target, he has become a node in a network of information, which reveals him as a something-to-be-killed. He does not appear as a human being but as a bit that can only have two values, and his value is now changed from 1 to 0.” (Coeckelbergh 2013, 93).

Coeckelbergh focuses in his argument on the intricate connection between algorithmic data generation and the decision to kill a person on the basis of these data. But one can also pose the question of how we should deal with the situation that will most likely occur, namely the situation in which we cannot avoid relying, in our “human” actions, on the information provided by algorithms. Analyzing the hermeneutic relations does not only mean describing the problems of data use in this context but also reflecting on the epistemic challenges against the background of a successful cooperation between human and machinic agents. Approaches that investigate “meaningful human control” in autonomous weapon systems may help to define “function allocations” that integrate human control in the further development of autonomous weapons (cf. Canellas and Haga 2015). Increasing the grades of autonomy does not necessarily mean that the epistemic basis of decision-making processes becomes questionable. With respect to the hermeneutic relations, the description of the intricacies of the “amalgamation” of human experiences and algorithmically generated knowledge is and will remain a core task when we reflect on interactions between humans and AI systems in general and human-on-the-loop systems in the field of “intelligent” weapons in particular.

Hence, both regarding the “informationalisation” of the victims and regarding “meaningful human control” in AI systems, we can discuss whether it is worth making the case for “old” human hermeneutic and epistemic competences that are important in ethics, such as the situational and context-sensitive knowledge related to “phronesis” (as developed, for instance, by Aristotle in his Nicomachean Ethics) or to prudence and “practical wisdom,” which allow human beings to act with sound judgment based on a complex mixture of a general (teleological, deontological, or utilitarian) ethical framework and concrete experiences, combined with an innate and culturally developed moral sense. In other words, humans are able to act in morally intricate situations on the basis of this complex mixture of moral attitudes, moral emotions, guidelines, manuals, orders, intuitions, and experiences. Nevertheless, human beings can still fail. But if a human being has tried his or her best, we can forgive. Will we ever be able to forgive machines that accurately followed their “ethical algorithms”?

4.3 Alterity Relations

Alterity relations have already been addressed, explicitly and implicitly, in the previous chapters. Alterity relations are in place when we ask who the “others” are in the socio-technical configuration that constitutes drone warfare. In this respect, we have to differentiate between human “otherness” (in line with the traditional understanding of alterity) and technological otherness as “quasi-otherness” (in Ihde’s sense, cf. Ihde 1990, 98). By employing technologies such as drones, we encounter both human others and machinic others (quasi-others). While human otherness is challenged by our technologically mediated understanding of the human counterparts in our actions, machinic otherness is important with respect to new forms of cooperation with AI systems.

Firstly, regarding human otherness, we would say that the observed and occasionally killed targets are “the others.” Yet already this otherness is peculiar. It is clearly another human being that appears on the screen. However, we do not have face-to-face contact but a unilateral encounter against the background of friend-or-foe identification, thereby mapping the world in an almost Schmittian manner that may be better described by drawing on the predator-prey distinction. The other is not a “foe” with all its implications but rather a “prey,” if we follow Chamayou’s analyses (Chamayou 2015, 26–35). Yet in philosophy the other is not only the categorized and excluded other, a potentially intrusive “stranger”; the experience of the other is also the foundation for an ethical response (Levinas 1998). And due to the perceptive qualities of the surveillance technology, the anonymous “other” on the screen could become a concrete person, a person with a “face” in Levinas’ sense (see for an adaption of Levinas in the context of AI Gunkel 2012). This closeness to targets or civilians may also change the drone operator’s attitude, as depicted in movies such as Eye in the Sky (2015). So we can also say, to a certain extent, that the potential targets appear to the drone operators as “re-humanized, re-faced, and re-embodied” (Coeckelbergh 2013, 87). It seems to be crucial for drone technology, and also for the new subjectivities of drone operators, that two kinds of experience of human otherness are intertwined: on the one hand, killing in terms of a computerized anonymization and bureaucratization (Asaro 2017); on the other hand, new forms of intimacy in the close-ups made possible by the surveillance technology. Maybe we can say that drone technology epitomizes the intricacies of modern media in that it, too, oscillates between intimacy and distance. In any case, we can detect here a confusing experience of otherness: a concrete person with a “face” is at the same time a mere target on a screen.

Secondly, following the postphenomenological concept of technological quasi-otherness, we can also address alterity regarding peculiar forms of “machinic otherness” (cf. Gunkel 2012). Since the development of military robots is heading towards automatization and autonomization, we can understand machines, and in our context the drone technology, in terms of “otherness” as well: as other agents that engage in complex ways with human agents, in other words as “artificial (moral) agents” (Floridi and Sanders 2004). With the increasing autonomy of machines, the question arises whether we should ascribe to machines a specific form of agency and moral accountability or even responsibility, meaning that we would have to speak of “moral machines” (cf. Wallach and Allen 2009). In this context, I will shed light on the alterity relations in terms of interactions with “machinic others.” Apart from the debate on the moral status of artificially intelligent machines, it is important to focus on the relations between humans and machines when acting together. In a future not so far away, morally trained drones will recommend a certain target assassination procedure, ideally in perfect accordance with the laws of war. The machinic other will thus become a “partner” in complex decision-making processes. Especially as drones become more and more “autonomous,” we can expect the further development of a “force mix” of human and robotic combatants that depends not only on the reliability of the robotic intelligence but also on the human users’ ability to trust (Lucas 2013, 223). And exactly this could be a problem in shared decision-making processes, because some experts have identified the phenomenon of an “automation bias,” i.e., an “overly strong trust in the machine’s capacity to carry out the task,” so that despite discrepancies, “operators will trust the programming of the machine,” and they could “feel less inclined to override the machine” (Leveringhaus 2016a, 99).

The US Department of Defense distinguishes four levels of autonomy in unmanned systems (all the following, shortened, definitions in this paragraph are taken from Enemark 2014, 101, who is referring to https://fas.org/irp/program/collect/usroadmap2011.pdf): first, “human operated”: the system has no autonomous control; second, “human delegated”: the vehicle can perform many functions independently of human control when delegated to do so; this level encompasses automatic controls and other low-level automation that must be activated or deactivated by human input; third, “human supervised”: the system can perform a wide variety of activities when given top-level permissions by a human; both the human and the system can initiate behaviors based on sensed data, but the system can do so only within the scope of its currently directed tasks; and finally, “fully autonomous”: the system receives goals from humans and translates them into tasks to be performed without human interaction; a human could still enter the loop in an emergency or change the goals, although in practice there may be significant time delays before human intervention occurs. If we speak about autonomous drones in this context, it is wise to add a caveat—in particular with respect to the envisioned otherness. George R. Lucas states that “anthropomorphic, romantic nonsense attached to robotics in the popular minds by ‘Star Wars’, ‘Blade Runner’, and other movie and science fictions fantasies seriously compromises the ethical analysis of the use of genuine, real-world military robots within the confines of the international law.” (Lucas 2013, 219). And he adds: “And the systems we do propose to build and deploy at present have ‘autonomy’ only in a very limited and highly scripted sense.” (Lucas 2013, 220). However limited this autonomy is, drones and other military robots will be programmed so that they are able to make decisions in ethically sensitive contexts—and if these decision-making processes are based on “learning” algorithms, then we are allowed to ascribe to these robots a certain degree of autonomy, e.g., when they carry out an order “on their own.”
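For readability, the four levels just listed can also be rendered schematically (Python; the enum mirrors Enemark’s summary, while the helper function reflects my own, deliberately coarse, reading of where an explicit human decision is still built into every single action):

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """The four levels of the 2011 DoD roadmap, as summarized by Enemark (2014, 101)."""
    HUMAN_OPERATED = 1    # no autonomous control at all
    HUMAN_DELEGATED = 2   # low-level functions run independently once activated by a human
    HUMAN_SUPERVISED = 3  # the system may initiate behaviors within currently directed tasks
    FULLY_AUTONOMOUS = 4  # goals are translated into tasks without human interaction

def explicit_human_decision_required(level: AutonomyLevel) -> bool:
    # A simplified reading: only at the two lower levels is a human decision built into
    # every action; at the higher levels the human can mainly intervene after the fact.
    return level <= AutonomyLevel.HUMAN_DELEGATED

for level in AutonomyLevel:
    print(level.name, "-> explicit human decision required:", explicit_human_decision_required(level))
```

Read alongside Lucas’s caveat, the enumeration also shows how modest the step from “supervised” to “fully autonomous” looks on paper, and how consequential it is for the question of otherness.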

In postphenomenology and in the social sciences, the instrumental theory of technology is questioned, accompanied by doubts regarding traditional concepts of human subjects. Therefore, it is currently not only discussed whether machines can be understood as artificial moral agents; concepts of “extended agencies” and “distributed agencies” are also being examined (cf. Hanson 2009; Rammert 2008). Artificial agents such as drones are undoubtedly different from human actors, but they are also different from classical machines and media, or, as Werner Rammert puts it, machines “remain artifacts. However, they lose their passive, blind, and dumb character and gain the capacities to be pro-active, context-sensitive and co-operative. Insofar it is justifiable to define them as agents.” (Rammert 2008, 67). These new kinds of agents are able to cooperate with human beings. Of course, these agents cannot cooperate with others “in a human-like manner. But they can be equipped with an intentional vocabulary by which they really coordinate and communicate their activities as human actors do, with similar semantics.” (Rammert 2008, 76). Leaving the instrumental theory behind and focusing on the new constellations of actions that involve humans and “intelligent” machines, Rammert states convincingly: “Instrumental actions between active people and passive objects are turned more and more into relations of interactivity between two heterogeneous sources of activities […]. Actions emerge out of complicated constellations that are made of a hybrid mix of agencies like people, machines, and programs and that are embedded in coherent frames of action.” (Rammert 2008, 65). These analyses demonstrate that we have to think differently, or “otherwise” (cf. Gunkel 2012, 159sq), about human-machine co-actions, since we can identify a peculiar “machinic otherness” within these hybrid agencies.

This also holds true for drone technology. Not only can we describe alterity relations regarding the mediatization of other persons on the screens in the operations room and the implications of data processing and pattern recognition, but we can also speak of a specific otherness of drone technology itself. This is because the drone not only provides information but also changes human agency by "inviting" the involved human being to perform certain actions, by being a "partner agent" in complex decision-making processes in which the human and the machinic part are insolubly linked. In the foreseeable future, we can even imagine that human guilt and the above-mentioned functionalized machinic guilt will somehow have to be synchronized in order to "optimize" the performance of human-machine co-action. With Arkin's visions in mind, we can expect a (semi-)autonomous "acting" of the drone as an "other" within a hybrid mix of agencies. But since we can understand "oneself as another" (Ricoeur 1992), we can also say that this machinic otherness shapes the identity of the human, be it in-, on-, or out-of-the-loop. Rammert convincingly states that the "up to now dominant design of a master-slave architecture is slowly being replaced by open systems of distributed and cooperating agents" (Rammert 2008, 68). To be sure, understanding human-technology interaction within this master-slave architecture, with its implication of comprehensive human control over technology, impedes reflection on the new agencies in drone warfare and their (normative) challenges. As a side remark regarding the question of alterity relations, we can also ask whether it will be possible to "re-Hegelianize" the master-slave architecture by describing it in terms of processes of recognition in complex human-technology interactions in which the human and the machine are mutually dependent on each other.

However, this mix of agencies has implications for the question of responsibility. Not surprisingly, responsibility is one of the topics that are intensively discussed in the context of drone warfare, especially since Robert Sparrow raised the question of a "responsibility gap" when "killer robots" are deployed in wars (Sparrow 2007; see for an extensive discussion Leveringhaus 2016b; and Steinhoff 2013). This is neither the time nor the place to discuss the problem of responsibility with the due depth. But certainly, distributed and extended agencies lead to an intricate allocation of responsibility that should be a major topic of further discussions on drone warfare, in particular when we can observe that life-and-death decisions are made "by the swarm" of humans and technologies (Sharkey 2012, 116, referring to SWARM technology, a US Air Force program).

4.4 Background Relations

Finally, I would like to offer, very briefly, some rough ideas on how we can describe the background relations of technology with respect to drone warfare. We know from reports that it is a challenge for the operators that they are admittedly at war yet sitting in an office in front of a computer screen, killing by remote control (Bumiller 2012). When they "call it a day," the operators go back to their families like ordinary office workers, perhaps sitting in front of a screen again when they browse the Internet or play computer games with their kids. Chamayou pointedly states that the drone operators "epitomize the contradiction of societies at war outside but living inside as though they are in peace" (Chamayou 2015, 121). And it is exactly this contradiction within societies in the time of new wars that could also change the understanding of war. It is likely that it will no longer be a primary concern of societies that sending soldiers out to wage war, and potentially sacrificing them, must be morally, legally, and emotionally justified. In the future, drone operations will be deployed with more or less no risk of losses for one's own armed forces. This could in the end also lower the threshold for war (Di Nucci 2017), if we can still speak of "wars" in this context. It is very likely that drone technology epitomizes a kind of warfare that is not warfare anymore but a new form of violence that supposedly takes place only on the screens.

But we can describe the background relations in an even broader sense of changing the entire culture of a society. Following the idea of a "car culture," it may indeed be justified to speak now of a "drone culture" that shapes our societies in general and US American society in particular (Rothstein 2015, 123). Drones might then be "the symbol of our reliance upon thin streams of data, and the need to avail ourselves of different metrics in order to navigate our world," and they "might symbolize our technological interrelation with each other, as we begin trusting ever more elements of our lives to automation" (Rothstein 2015, 124). Therefore, drone technology is not only changing our notions of war and violence but could also be understood as an incisive materialization of a radical transformation of our "being-in-the-world," a transformation that in turn lays the ground for the amazingly smooth integration of drone technology into our thinking.

5 Closing Remarks

In Walter Benjamin's The Work of Art in the Age of Mechanical Reproduction, we find some far-sighted remarks on automatically flying military weapons that did not exist at that time but very much resemble today's drones. Drawing on some ideas from the US military, the philosopher mentions unmanned and remote-controlled airplanes (Benjamin 1989, 359; cf. Rothstein 2015, 26–28). Benjamin not only anticipated proto-drones as novel objects that would be produced in the future but also underlined the completely new quality of warfare associated with these devices. Similarly, Theodor W. Adorno had some intuitions about future warfare. In his Minima Moralia, he observes that "Hitlers Robotbomben" (Hitler's robot bombs) are, like fascism itself, "subjektlos" (without a subject) (Adorno 1969, 64). He sees in the V-2 rockets a perverted Hegelian "Weltgeist" (world spirit).

Even back then, these drone-like objects worried philosophers. Today, combat drones still call for philosophical examination, both on the descriptive level, in order to capture their particular socio-technical configuration, and on the normative level, where we investigate the advances in implementing "ethical algorithms" in these machines, as seen in the case of Arkin's architecture. My aim was to demonstrate that the model of a relational ontology can be very useful when we try to describe, understand, and critically reflect on the far-reaching philosophical implications of drone warfare. In particular, the basic framework of embodiment relations, hermeneutic relations, alterity relations, and background relations can serve as a structure for approaching drone technology from a philosophical perspective. Some of my considerations may seem to focus on the worrisome aspects of automated and autonomous drone warfare, transcending the mere description of this technology. And indeed, based on the descriptions of the human-technology relations, I intended to offer a critical reflection on the processes of automation and autonomization in drone warfare. Hopefully, this paper will foster further discussion among both policymakers and practitioners, either to find a way to ultimately ban or restrict the deployment of these weapon systems on the basis of International Humanitarian Law or, if this development cannot be stopped, to strengthen approaches that try to design military drones in at least a value-sensitive way.

In conclusion, I would like to stress that the vital challenges lie in the interactions between humans and machines that somehow share autonomy and moral responsibility. We will be entangled with machines that are "ethically preprogrammed," with new roles for humans in complex decision-making processes that will affect drone warfare in particular in ways that are still hard to imagine. As operators have to rely more and more on the algorithms used for pattern recognition in order to identify suspect persons, in a not-so-distant future the morally trained drones will recommend a certain targeted killing procedure, in perfect accordance with the computationalized laws of war and rules of engagement. Thus, the main task will be to ensure that the human in-, on-, or out-of-the-loop is not subject to "automation bias" but is capable of taking the complex hybrid agencies into account when he or she has to decide, together with the machine, on life and death. Undoubtedly, this will be a huge challenge for our future power of judgment.