The topic of this volume is terrorism and technology. Typically, discussions about the relationship between terrorism and technology focus on how new technologies, such as drones [21, 51], artificial intelligence [54], social media [18], and surveillance technologies, could be used either as a means of fighting terrorism or as a method of terrorism [16].

Few authors, however, recognise how technology shapes and reflects the moral framework through which we think about terrorism, terrorists, and the victims of terrorism—particularly in relation to state terrorism. Instead, the standard view is that “what is good or bad about [technology] is not technologies themselves but the ends to which they are put” ([31], 72). In this chapter I argue that technologies of violence are not simply neutral objects that may be used for good or bad purposes. Instead, the design of these technologies, the contexts in which they are deployed, and the narratives surrounding their use reflect and reinforce biases and frame and limit moral decision-making regarding when and against whom technologies are used. Thus, these technologies profoundly impact our moral understanding of the nature and justification of different forms of violence. Section 1 outlines how both the concept of technology and technological artefacts themselves create and embody normative associations and values that shape the moral landscape of their use. In Sects. 2 and 3, I apply David Rodin’s moral definition of terrorism to the case studies of police control technologies and drone warfare. I argue that police control technologies, including riot control technologies, stun guns, and tasers, function as a terrorist display that reflects and reinforces the long-standing and deeply entrenched association of criminality with blackness and thus play a crucial “signifying role” in delineating who may be harmed, who is a threat, and who is to be protected. In Sect. 3, I argue that the US drone program is also a form of terrorism. However, the nature of drone technology, and the accompanying narrative that frames drones as weapons of precision and discrimination, masks the terrorist impact of drone warfare on those subjected to it and contributes to the illusion that drone warfare is objective, precise, unbiased, and even inherently moral.

In both cases, I show how the narrative of technologies of violence as neutral tools masks the terrorist nature of certain kinds of state violence and obscures the power dynamics inherent in that narrative. As will become clear, the view that these technologies are morally neutral or even benign reflects the privileged stance of users of these technologies. From the perspective of those who are subjected to these technologies, they are far from morally neutral. Thus, as I argue in the conclusion, identifying acts as terrorist requires focusing on the impact of those acts (whether they are “high-tech” or not) on those most affected, regardless of whether those involved in producing these effects conceive of their actions as terrorist. Scholars writing on terrorism and technology must acknowledge that the development and use of technologies of violence encodes and reinforces normative judgments about terrorism, the moral status of victims of terrorism, and moral responsibility for terrorism.

1 The Concept of Technology

We could define “technology” simply as any human-made artefact, including everything from basic tools, “specific devices and inventions,” to “complex sociotechnological systems” ([39], 547). But if that is all we mean by “technology,” there is no reason to think that the relationship between technology and terrorism poses any unique ethical questions: of course terrorists use technology (guns, planes, mobile phones, bombs, and so forth) to achieve their goals, to varying degrees of success, and of course technology can be employed to fight terrorism. But this way of thinking about the relationship between technology and terrorism ignores the fact that the term “technology” involves a range of concepts and associations that are not always made explicit, but that shape our moral thinking in important ways.

1.1 Technology and Moral Mediation

It is a mistake to see technologies as inert objects through which we interact with the world. Instead, as Peter-Paul Verbeek argues, technologies “give shape to what we do and how we experience the world. And in doing so they actively contribute to the ways we live our lives” ([56], 1). Technologies “mediate moral decisions and help to attribute responsibilities and instil norms” ([56], 2).

This process occurs along several dimensions. Firstly, from when it first gained widespread usage in the late nineteenth century, the concept of technology was associated with the idea of moral and social progress ([30], 969). This is particularly true in relation to technologies of state violence. To illustrate, in the US, each time a new technology of execution (electric chair, gas chamber, lethal injection) was introduced, it was heralded as offering not only a more efficient means of killing, but a more humane means of killing, thereby conflating technological capacity with moral values. For example, one newspaper described the electric chair as providing a death that was “less painful and more dignified” ([26], 4, emphasis added). Another claimed that “science has devised a much more effective and decent way of putting to death” ([26], 12, emphasis added). Similar statements were made about the gas chamber and lethal injection. Yet, in each case the supposed humanity of the new technology was undermined by the botched executions and visible suffering that occurred almost as soon as the technology was put into use, leading to a further (futile) search for a technological solution to the problem of capital punishment ([26], 22)—a search that obscures the irresolvable moral tension in the very concept of a humane execution. As we shall see, a similar moral tension, and the use of a narrative that conflates efficiency with moral progress, also underlies the search for technological solutions to police brutality, and in the development and use of drones.

The association between technological development and moral and social progress also plays out in the distinction between “high-tech” and “low-tech.” “High-tech” is associated with civilization and progress, whereas “low-tech” suggests primitive societies and backward moral thinking. As Phillip McReynolds argues in his discussion of the discrepancy between Al Qaeda’s low-tech terrorism and the high-tech counterterrorism response of the United States,

the low technology of terrorism [suicide bombs, box cutters, and so forth] bears the marks of a lack of respect for human life in general, for individualism, and for freedom whereas high technology as located within an ideology of progress is understood as leading directly to a greater respect for human life, individuality, and freedom … the notion of high-tech violence as opposed to the more direct, low-tech variety carries a sense of moral superiority. ([31], 82–83)Footnote 1

Secondly, technology organises “situations of choice and suggest[s] the choice that should be made” ([56], 5). As Bruno Latour explains, technology can “authorise, make possible, encourage, make available, allow, suggest, influence, hinder, prohibit, and so on” ([1], 104). Different technologies amplify some aspects of the world and reduce the prominence of others, and thereby “direct” or “organise” our perceptions in particular ways ([56], 11). This has significant, but often underappreciated, moral implications. For example, the mere availability of a technology may be viewed as a moral reason for selecting it, as occurred when the Dallas Police Department used a bomb-disposal robot carrying C-4 explosives to kill a man who had shot five officers. In defending this action, Police Chief David Brown stated that “We had no choice, in my mind, but to use all tools necessary” ([42], 281, emphasis added). The availability of the robot thereby played a role in “directing … moral deliberations” ([42], 281) and was “influential in justifying such extreme means” ([42], 285). Once a technology is utilised in this way, further use of the technology rapidly becomes normalised and justified, diverting attention away from other possible courses of action: “legitimating the use of a technology is linked to its naturalization” ([36], 65). Lorna Rhodes makes this point in her discussion of the technology of solitary confinement: “once the option of isolation exists, it tends to be normalized as a ‘common sense’ fix for inadequate mental health care, overcrowding, and failure to adequately protect prisoners in the general population” ([39], 551).

Thus, the choice of technology shapes moral decision-making in ways that can lead to a conflation between moral concepts such as justification and non-moral concepts such as efficiency. As Elke Schwarz explains, the “moral significance of choosing technological means might make some means that are not necessarily justified seem justified; it might make means that are not absolutely necessary seem necessary, and it might make technological tools that for whatever reason appear to be the most attractive option in a collection of available options seem like the only option” ([42], 284–85).Footnote 2

1.2 Technology and Bias

Technologies often embody and reinforce the moral, social, and political norms and biases of those who create and use them. One obvious way this occurs is when an otherwise “neutral” technology is deployed in ways that disproportionately harm members of a certain group as, for example, when police control technologies such as tasers and stun guns are used disproportionately against persons of colour. But biases and norms can also be literally “built in” to technological systems in ways that can cause disproportionate harm to members of minorities and other stigmatized groups.

Algorithms offer one example of bias in the design and use of technology. As Schwarz explains, “how an algorithm functions and how it is trained reflects the values and principles of its intended uses and its designers … They regularly reflect the aims and intentions of their makers and normalize their positions and priorities (values)” ([42], 292). For example, studies on facial recognition technologies in the context of law enforcement have found that these technologies reflect and reinforce racial bias. Ruha Benjamin describes the scale of this “default discrimination”: “At every stage of the process—from policing, sentencing, and imprisonment to parole—automated risk assessments are employed to determine people’s likelihood of committing a crime.” Yet, multiple studies have found that these automated processes are “remarkably unreliable in forecasting violent crime” ([5], 81). The impact of this encoded bias can be devastating: “Black people are overrepresented in many of the databases faces are routinely searched against” which means that “Black people are more often stopped, investigated, arrested, incarcerated and sentenced as a consequence of facial recognition technology … Black people are more likely to be enrolled in face recognition systems, be subject to their processing and misidentified by them” ([4], 326).
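The kind of disparity Benjamin describes can be made concrete with a simple check of group-wise error rates. The following sketch is purely illustrative and uses invented data rather than anything drawn from the studies cited above; it shows how a risk-assessment tool can flag people who did not go on to commit a crime at very different rates for different groups, even when nothing in the code mentions race.

```python
# Illustrative only: a minimal disparity check on hypothetical risk-assessment
# outputs. All data below is invented for demonstration; it is not drawn from
# any real system or study cited in this chapter.

from collections import defaultdict

# Each record: (group label, ground truth "reoffended", algorithm's "high risk" flag)
records = [
    ("group_a", False, True), ("group_a", False, True), ("group_a", True, True),
    ("group_a", False, False), ("group_a", True, False), ("group_a", False, True),
    ("group_b", False, False), ("group_b", True, True), ("group_b", False, False),
    ("group_b", True, False), ("group_b", False, False), ("group_b", False, True),
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were still flagged as high risk."""
    negatives = [flagged for _, actual, flagged in rows if not actual]
    return sum(negatives) / len(negatives) if negatives else 0.0

by_group = defaultdict(list)
for group, actual, flagged in records:
    by_group[group].append((group, actual, flagged))

for group, rows in by_group.items():
    print(f"{group}: false positive rate = {false_positive_rate(rows):.2f}")
```

The point of the sketch is that the disparity lives in the training data and the deployment context rather than in any explicit instruction about race; this is one sense in which bias can be “built in” to a system whose designers describe it as neutral.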

The problem of biased algorithms in facial recognition systems is exacerbated by the phenomenon of automation bias [12]. Research demonstrates that humans have an unwarranted belief in the neutrality and accuracy of technological systems: “humans have a tendency to disregard or not search for contradictory information in light of a computer-generated solution that is accepted as correct” ([42], 290). This means that the “results” of facial recognition algorithms (and other biased algorithms) are likely to be assumed to be objectively correct, leading to a vicious cycle that reinforces embedded biases and lends them an unwarranted patina of legitimacy ([12], 2–3).

Kodak’s Shirley card is an example of bias that is literally “built in” to a technological system. The Shirley card was used as a comparison image to ensure that the colours in a print looked “right”. In its original form, the Shirley card featured a white woman with “ivory skin, brown hair, and red lipstick” ([25], 3). But, “[s]ince the model’s white skin was set as the norm, darker skinned people in photographs would be routinely underexposed” ([5], 104). The Shirley card thus reflected its creators’ racial biases, and its continued use reinforced this bias, calcifying the view that white skin was the ideal aesthetic standard and the standard of “normal” skin tone (see [5], 103–109).

In sum, technologies “mediate moral decisions” ([56], 2), and so shape our moral understanding of our actions by offering (and restricting) choices, reflecting and reinforcing pre-existing biases, and through the development of accompanying narratives that frame new technologies in terms of moral values such as dignity and humaneness. As is clear from the example of capital punishment discussed earlier, the narratives that accompany the development and use of new technologies frequently privilege the perspective of users and developers rather than that of those subjected to these technologies. In what follows, I show how this complex dynamic between technology and moral evaluation and decision-making plays out in the contexts of police control technologies and drone warfare in ways that obscure the impact of these technologies on those subjected to them—an impact that is, I argue, sufficiently severe to constitute terrorism.

1.3 What Is Terrorism?

What do I mean by terrorism? In this chapter, I adopt elements of David Rodin’s moral definition of terrorism. A moral definition is “an analysis of the features of acknowledged core instances of terrorism [such as the 9/11 attacks] which merit and explain the moral reaction which most of us have toward them” ([40], 753). Rodin locates the moral opprobrium many of us feel toward terrorism in the fact that core instances of terrorism are characterised by “the use of force against those who should not have force used against them” ([40], 755). He then defines terrorism as “the deliberate, negligent, or reckless use of force against noncombatants, by state or nonstate actors for ideological ends and in the absence of a substantively just legal process” ([40], 755).Footnote 3 The reference to force against noncombatants for ideological ends is consistent with many other definitions of terrorism. Rodin’s inclusion of reckless and negligent acts in his definition is controversial, but given that the case studies I discuss involve intentional actions, I will not weigh in on this controversy here.Footnote 4 Given this definition, we can now turn to the case of police control technologies.

2 Police Control Technologies as Terrorist Display

Police control technologies include devices such as tasers and stun guns, as well as riot control technologies such as tear gas, rubber bullets, and the use of militarised weapons, tactics, and uniforms “that were once the preserve of military units in war zones” ([14], 110). The contexts in which these technologies are used, the class of people against whom they are deployed, and the justifications offered for their use, reveal much about who is perceived as a threat, who is judged liable to be killed and wounded, and who is judged worthy of protection.

2.1 Riot Control Technologies

2.1.1 The Narrative of Threat

A justificatory narrative of threat and protection is particularly apparent in the use of riot control technologies. This means that the contexts in which riot technologies are not used are just as revealing as the contexts in which they are used. For example, in the wake of the killing of George Floyd, Black Lives Matter (BLM) protesters were subjected to tear gas and other “non-lethal weapons” such as rubber bullets and stun grenades, wielded by police and federal forces clad in militarised riot gear, including face shields, external bullet-proof vests, and knee-high boots. In comparison, the armed white protestors who raided the US Capitol building on January 6, 2021, faced police who were not clad in riot technology and who did not engage in substantial force against them [58]. This stark and visible disparity in the use of violent control technologies serves a powerful signifying function: BLM protestors are dangerous but white protestors are not, even when engaged in a violent armed insurrection; the technologies of violence and suppression are necessary (and therefore justified) when interacting with BLM protestors, but not when interacting with majority-white protestors [37]. Images of the police response to these different groups, replicated in media coverage of the protests, communicate and reinforce, even more effectively than words or political speeches, the criminalisation of blacknessFootnote 5 and the belief that people of colour (and those who support them) pose such a threat that they may justifiably be harmed or killed. The visual narrative that accompanies the use of these technologies thereby “symbolically excludes the citizens from the state” ([14], 114) and reflects a resurgence of the “escalated force” policy of “a dominant show of force” that governed police responses to anti-war and civil rights protestors in the 1960s (groups also characterised as threats to the state) ([29], 75).

2.1.2 Techno-Subjectivity and Moral Mediation

The “techno-subjectivity” ([42], 288) of these technologies (how it feels to deploy and wear them) feeds this narrative of threat and mediates the moral decision-making of those who wield them. There is substantial evidence that when police adopt military-style tactics and “start using weapons and equipment that were designed for soldiers in combat” ([14], 109), their perception of their role and their relationship with the community are altered, particularly in relation to communities of colour: “pacifying and defeating the enemy becomes more important than protecting and serving the public” ([14], 110. See also [37]). In the United States, the adoption of military technology also has a measurable impact on incidents of police killings. One study found that “more than twice as many civilians are likely to be killed by police in a county after its material militarization than before” ([14], 111). This risk is not distributed evenly among the community, however: “Risk is highest for black men, who (at current levels of risk) face about a 1 in 1,000 chance of being killed by police over the life course. The average lifetime odds of being killed by police are about 1 in 2,000 for men and about 1 in 33,000 for women … For young men of color, police use of force is among the leading causes of death” [15].Footnote 6 Thus, the deployment of riot control and other militarised technologies reinforces the association of blackness with criminality and directly contributes to the ongoing and pervasive vulnerability of people of colour to violent interactions with the criminal justice system. The ready availability of these technologies combined with the contexts in which they are (and are not) deployed thereby creates an ongoing and embedded “feedback loop” that reinforces the belief that people of colour and their supporters represent a dangerous threat. This feedback loop is sustained through at least three mechanisms: the narrative of threat described above, the accompanying media circulation of visual images of riot technologies deployed against people of colour, and the phenomenological impact on police of wielding these technologies.

2.1.3 The Terrorist Impact of Riot Technologies

Riot control technologies not only communicate and reinforce the criminalisation of blackness and the moral exclusion of people of colour from the moral and political community; they have concrete traumatic effects that justify the claim that the deployment of these technologies is a form of terrorism. Firstly, the use of these technologies against peaceful protestors communicates a very real threat of physical violence that signifies to those subjected to them that they may be killed or harmed with impunity. Secondly, these technologies cause severe and lasting physical injuries, fear, and ongoing trauma [43]. The fact that these technologies are used disproportionately against people of colour and other groups deemed to be outside the moral and political community (such as anti-war protestors in the 1960s and 1970s) indicates that their use is ideologically driven. The ideological nature of these technologies is further evidenced by the origins of their use: “the so-called non-lethal crowd control weapons that are used to disperse protests today have their origins in colonial policing” [43], where they were used to violently reinforce white supremacist colonial regimes against resistance. As a scholar of the history of tear gas argues, these technologies (then and now) were “deployed to both physically and psychologically destroy people engaging in resistance” (quoted in [43]). The impact of these technologies and the way they are deployed, therefore, clearly meet Rodin’s definition of terrorism as “the use of force against those who should not have force used against them” that serves an “ideological end” ([40], 753).Footnote 7 Given the role of these technologies in creating and sustaining the long-standing and deeply entrenched criminalisation of blackness and the vulnerability of people of colour to police violence, it is not a stretch to say that these technologies are part of a broader system of terrorist control of people of colour. This is also demonstrated by the use and development of tasers and stun guns.

2.2 Tasers and Stun Guns

2.2.1 The Narrative of Effectiveness and Humaneness

While the use of riot technologies is accompanied by (and reinforces) a narrative that focuses on threat, the narrative accompanying the development and use of stun guns and tasers by police appeals to the values of humaneness and effectiveness, similar to the narrative that accompanied the development of new execution technologies. When tasers were first introduced as police control technologies, for example, they were touted as being “safe, effective alternatives to … lethal force” ([45], 421) that would solve the ongoing problem of the disproportionate use of excessive (sometimes lethal) force by police against people of colour. (Similar claims have been made about body cameras.) Yet, the problem of excessive force has not in fact diminished [24]. Instead, the availability of tasers (and stun guns) gave police officers an option they did not previously have, and one that was framed in morally positive terms as non-excessive and humane. But, just as describing new execution technologies as humane did not in fact make executions more humane, the framing of tasers as non-excessive did not in fact mitigate police use of force.Footnote 8

This illustrates how describing tasers as a technological solution to the problem of excessive police violence implies that the problem of excessive force is a technological problem that requires a technological solution, and not a problem arising from the longstanding and well documented framework of racism that underpins and structures policing interactions with (and attitudes toward) people of colour in the US [53].

2.2.2 The Terrorist Impact of Tasers and Stun Guns

Those who defend the use of tasers and stun guns may frame them as technologies of non-lethal restraint and control that can (if properly used) “not appear cruel or beneath human dignity” ([38], 157). But the widespread acceptance and normalisation of the use of stun guns and tasers masks the history of these devices in the contexts of torture and animal control, a connection that is apparent to those who are subjected to these devices. From the victims’ perspective, the use of electric control technologies does not signify respect for their dignity, a reduction in force, or a humane method of control. As Lorna Rhodes relates, prisoners in Supermax prisons (where stun guns are used as control mechanisms) “speak of these technologies as particularly degrading both for their extreme intrusion into the body (they cause muscle weakness as well as pain) and for their association with the control of animals” ([39], 556). But the victims’ experiences of these technologies as degrading, dehumanising, and torturous are masked by the dominant narrative of efficiency and humaneness that frames their use. Thus, this narrative both reinforces and hides the true function of these technologies and privileges the perspectives of users above those of the people who are subjected to them.

The association of tasers and stun guns with torture (a long-standing method of state terrorism) is also clear from the history of these devices in the context of state torture. As Darius Rejali explains, stun guns and other electric devices are popular in states that use torture because, like other “modern” torture techniques (such as sensory deprivation), they “cause suffering and intimidation without leaving much in the way of embarrassing long-term visible evidence of brutality” ([38], 153). In the context of torture, the use of these technologies is not driven by a concern for human dignity, but by a desire to avoid charges of human rights violations. Given this history, the widespread acceptance and availability of electric control technologies in the context of law enforcement is astonishing. It represents “an incredible sociotechnical achievement, the work of corporations, politicians, and engineers who have woven this technology into the fabric of everyday life, creating instruments, markets, citizens, and consumers” ([38], 154–55). As with riot control technologies, those against whom this technology is wielded (who are disproportionately prisoners, people of colour, and those who threaten the state in other ways) are thus “marked out” as deserving or requiring such violent treatment. The use of these technologies (as with the deployment of riot control technologies) thereby operates as what Rejali calls “a civic marker” ([38], 154), delineating the moral boundaries of civic membership and moral concern through the infliction of violence with instruments associated with terror and torture.

2.3 Implications

The above discussion has several implications for understanding the relationship between police control technologies and police use of force. Firstly, any ethical analysis of policing technologies must address how some technologies directly “encode” racial bias (as with facial recognition algorithms). Secondly, such an analysis must also recognise how the contexts in which these technologies are used, and the narratives accompanying their use, shape and constrain the moral decision-making of police officers (and policy makers) in ways that reflect and reinforce an underlying framework of racism. This means that the problem with riot control technologies, tasers, and stun guns is not a problem that can be solved by better training or new policies about the contexts of their application. As we have seen with the failure of body cameras and implicit bias training to reduce rates of police violence against people of colour [24], unless the deeply embedded racist structure of policing in America is confronted and addressed, police technologies will continue to be utilised in ways that reinforce that racist structure and terrorise and threaten the lives and welfare of people of colour. It is for this reason that the “defund the police” movement has gained traction over the last year—a movement that calls for moving state and federal funding and resources from the police and criminal justice system to (for example) social services, public education, mental health services, and affordable housing. This would, it is argued, not only reduce crime rates but also increase the safety and wellbeing of all citizens, and particularly people of colour. Such a move is arguably justified not only economically [33] but also because it would go some way to addressing the underlying issue (one I cannot address in detail here) that terrorist policing practices against people of colour undermine the very basis of the state’s authority to use force against its own citizens in a criminal justice context.Footnote 9

3 Drone Warfare

As with the case of police control technologies, the terrorist nature of drone warfare results from the combination of features of drone technology (the capacity for long-term surveillance and the use of algorithmic targeting decisions), the contexts in which drones are deployed, and the impact on those who are subjected to drone surveillance and targeting. This terrorist impact is masked by a narrative that frames the use of drones as morally neutral, even morally good. But whereas the narrative associated with police control technologies emphasised threat, protection, control, and humaneness, the narrative that dominates military and political discourse about drones emphasises precision and discrimination.Footnote 10 As the Center for Civilians in Conflict reports, “as covert drone strikes by the United States become increasingly frequent and widespread, reliance on the precision capabilities and touted effectiveness of drone technology threatens to obscure the impact on civilians” ([19], 7). This narrative, and the features and context of drone use, thereby serve to “morally mediate” ([56], 2) the use of drones by constraining moral choices around drone use, shaping the moral perception of users, policy makers, and the public about the nature and justification of drone use, and “marking out” the targets of drone attacks as warranting the use of force against them.

This means that the terrorist nature of drone warfare only becomes evident when we shift our focus from the narrative and associated moral framework that dominates discussion of drones to the impact of the drone program on those who are subjected to it. First, however, we need to clarify the current scope of the US drone program.

3.1 The US Drone Program

The use of drones as a means of killing suspected and known members of Al Qaeda and other terrorist and militant organisations began under the Bush administration, expanded under the Obama administration ([21], 3–4), and expanded further under the Trump administration. According to one report, “As of May 18, 2020, the Trump administration had launched 40 airstrikes in Somalia in 2020 alone.” In contrast, “from 2007 through 2016, the administrations of George W. Bush and Barack Obama conducted 41 airstrikes in Somalia total” [3]. Additionally, the Trump administration broadened the designation of “battlefields” to include areas of Yemen and Somalia, thereby loosening the restrictions on drone targeting in those areas [3]Footnote 11 and simultaneously “removing the reporting requirement for casualties outside of designated battlefields” [3]. This led to a dramatic increase in the number of civilian casualties of drone strikes: “In 2019, more Afghan civilians were killed in airstrikes than at any time since early 2002” ([11], 2). While the Biden Administration has introduced some restrictions on drone use, including temporarily suspending the use of drones outside war zones [41], it remains unclear what the scope of these changes will be or how, for example, targeting decisions within war zones will be made. This lack of clarity became evident with the release of the Pentagon’s investigation into the August 29, 2021, drone strike that killed 10 civilians (including seven children) in Afghanistan, which found that no laws were broken but that “communication breakdowns” occurred [47]. While much remains unknown about this strike, and the long-term intentions of the Biden administration regarding the use of drones, it seems clear that the drone program will be ongoing and there will continue to be little transparency about the impact of drone warfare on those most affected by it.

3.2 Drone Warfare as Terrorism

3.2.1 The Narrative of Precision and Discrimination

From their introduction drones have been heralded as “precision weapons” that allow war to be conducted in a more humane way:

US intelligence officials tout the drone platform as enabling the most precise and humane targeting program in the history of warfare. President Obama has described drone strikes as “precise, precision strikes against al-Qaeda and their affiliates.” Leon Panetta, Secretary of Defense, has emphasized that drones are “one of the most precise weapons we have in our arsenal,” and counterterrorism adviser John Brennan has referred to the “exceptional proficiency, precision of the capabilities we’ve been able to develop.” ([19], 35)

As a result of this narrative, “public concerns with civilian casualties in targeted killing campaigns—concerns that are generally weak or even nonexistent to begin with—are put to rest” ([55], 335).Footnote 12 As we saw with the language that accompanied the development of new execution technologies, this emphasis on precision conflates a technological value with a moral value (“humaneness” or “dignity”). The view that the technical capacity of drones to distinguish between targets is also a moral capacity is shared by some philosophers. Bradley Strawser, for example, argues that a drone’s capacity to discriminate between targets combined with the fact that drone use reduces the risk to the operator to essentially zero means that “we are morally required to use drones over … manned aircraft to prevent exposing pilots to unnecessary risk” ([52], 18).

However, conflating drones’ technical capacity for precision targeting with the moral distinction between combatants and noncombatants not only sustains and reinforces an unfounded complacency about the morality of drone strikes but also obscures the reality of who is targeted by drones and for what reasons. As Harry van der Linden notes, “precision in finding and hitting the target does not imply that there is precision in the selection of the target” ([55], 336, emphasis in original). John Kaag and Sarah Kreps make the same point: “The distinction between militants and non-combatants … is a normative one that machines cannot make” ([21], 134). Put simply, we cannot assume that the categories of combatant and noncombatant are either clearly defined or justly applied by drone operators and/or political and military decision-makers in the drone program. In fact, we have good reason to doubt that this is the case. For example, claims by US officials in the Obama administration that drone strikes caused very few civilian casualties ([7], 31) were complicated by the fact that these assertions were based on “a narrowed definition of ‘civilian,’ and the presumption that, unless proven otherwise, individuals killed in strikes are militants” ([7], 32). As I argue below, the assumption that the targets of drone strikes are chosen based on clear and justly applied categories of combatant and noncombatant is extremely problematic.

3.2.2 Bias and the Moral Mediation of Drone Technology

In Sect. 1.2, I explained how bias can be “built in” and reinforced by technology in multiple ways, from the design of algorithms and the physical features of technologies to choices about when and against whom technologies are deployed. These forms of bias can become entrenched because of the normalising and self-justifying effects of repeated use of a technology in a specific context against specific groups of people, combined with the phenomenon of automation bias—the tendency of users and designers of technologies to assume that the “answers” provided by technological systems are both objective and correct [12]. In the case of drones, bias is evident both in the algorithms that are used to select the targets of drone strikes and in how the class of acceptable targets (who are almost exclusively non-white people) has expanded far beyond any plausible definition of “combatant.” This bias is most apparent in the use of drones for signature strikes.

Unlike targeted strikes, where the identity of the target is confirmed before a strike is permitted, signature strikes may be initiated on the basis of perceived patterns of suspicious behaviour: “Signatures may encompass a wide range of people: men carrying weapons; men in militant compounds; individuals in convoys of vehicles that bear the characteristics of al-Qaeda or Taliban leaders on the run, as well as ‘signatures’ of al-Qaeda activity based on operatives’ vehicles, facilities, communications equipment, and patterns of behavior” ([7], 33). But the value of signature identifications depends on a host of normative and culturally biased assumptions about what counts as “suspicious” behaviour.Footnote 13 As Elke Schwarz argues, the use of algorithms to determine the targets of signature strikes “summon[s] the perception that patterns of normality (benign) and abnormality (malign) can be clearly identified” ([42], 288).
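One way to see why such signature identifications are unreliable is a simple base-rate calculation. The numbers below are entirely hypothetical assumptions of mine, not figures from the drone program or from the sources cited in this chapter; they illustrate that even a behavioural classifier that seems highly accurate will mostly flag people who are not combatants when actual combatants make up only a tiny fraction of the surveilled population.

```python
# Illustrative only: a base-rate calculation with invented numbers, showing why
# even a seemingly accurate behavioural "signature" can mostly flag the wrong
# people when genuine combatants are rare in the surveilled population.

population = 100_000        # people under surveillance (hypothetical)
actual_combatants = 100     # true prevalence: 0.1% (hypothetical)
true_positive_rate = 0.95   # classifier flags 95% of actual combatants (hypothetical)
false_positive_rate = 0.05  # classifier also flags 5% of everyone else (hypothetical)

flagged_combatants = actual_combatants * true_positive_rate
flagged_civilians = (population - actual_combatants) * false_positive_rate

precision = flagged_combatants / (flagged_combatants + flagged_civilians)
print(f"Flagged people who are actually combatants: {precision:.1%}")
# With these assumptions, roughly 2% of those flagged are combatants;
# about 98% of "signatures" would point at civilians.
```

Under these assumptions, fewer than one in fifty of the people flagged as displaying a “signature” would in fact be combatants, which is why the apparent precision of the technology says little about the justice of the targeting.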

However, as we saw with the use of facial recognition algorithms in law enforcement, the success of such algorithms in correctly ascertaining and predicting malign intent is highly questionable.Footnote 14 Yet, when combined with the phenomenon of automation bias, the “output” of the algorithms used for signature strikes is unlikely to be questioned. This then further reinforces the belief that the mere presence of “suspicious” behaviour (defined based on culturally biased assumptions) provides sufficient evidence of malign intent to justify the use of lethal force. The decision to resort to lethal force is then framed as the “right” or most “logical” response to the perceived threat because “the drone can only execute a limited range of actions vis-à-vis a suspect (survey, pursue or kill). A suspect cannot surrender or persuade the technology of their non-liability to harm” ([42], 288). Thus, the combination of embedded bias in targeting algorithms and the limits of drone technology constrains and shapes the moral choices of users and alters the justificatory framework used to assess the morality of drone warfare. These moral choices and justificatory framework are then normalised via further use of drones combined with the narrative of precision and discrimination discussed above. In particular, this process reinforces and normalises the view that a person may be killed not because they are currently engaged in combat or are known to be part of a militant group, but merely because their behaviour resembles that of someone who might be a future threat. The technology translates “probable associations between people or objects into actionable security decisions” ([2], 52). This represents an extraordinary broadening of the concept of a combatant that has devastating consequences:

US experiences in Afghanistan illustrate the risks of targeting with limited cultural and contextual awareness. On February 21, 2010, a large group of men set out to travel in convoy. They had various destinations, but as they had to pass through the insurgent stronghold of Uruzgan province, they decided to travel together so that if one vehicle broke down, the others could help. From the surveillance of a Predator, US forces came to believe that the group was Taliban. As described by an Army officer who was involved: “We all had it in our head, ‘Hey, why do you have 20 military age males at 5 a.m. collecting each other?’... There can be only one reason, and that’s because we’ve put [US troops] in the area.” The US forces proceeded to interpret the unfolding events in accordance with their belief that the convoy was full of insurgents. Evidence of the presence of children became evidence of “adolescents,” unconfirmed suspicions of the presence of weapons turned into an assumption of their presence. The US fired on the convoy, killing 23 people. ([7], 47)

A similar process, in which assumptions about “suspicious” behaviour created and then reinforced the belief that a strike was necessary and that the targets were terrorists, seems also to have occurred in the August 29, 2021 drone strike. In the wake of the Pentagon’s investigation into the strike, the Air Force’s inspector general, Lt. Gen. Sami D. Said, “blamed a series of assumptions, made over the course of eight hours as U.S. officials tracked a white Toyota Corolla through Kabul, for causing what he called ‘confirmation bias’” [9]. The relatively high level of media coverage of the August 29, 2021 strike illustrates how little coverage there has been of previous cases of civilian deaths from drones. The killing of people based purely on biased and highly unreliable computer-predicted assumptions about the meaning of their behaviour is taken for granted to such an extent that it is rarely deemed worthy of comment. Indeed, the combination of the narrative of discrimination, drone technology, and the processes of moral mediation discussed above has created a situation where the ongoing killing and maiming of non-white people based on biased assumptions of threat has come to seem both morally acceptable and even necessary.Footnote 15 As Elke Schwarz explains, “set against a background where the instrument is characterised as inherently wise, the technology gives an air of dispassionate professionalism and a sense of moral certainty to the messy business of war” ([42], 88). This “moral certainty” is sustained and reinforced by the “high-tech” nature of drone operations and the narrative of precision and efficiency described above and effectively masks the reality of the terrorist impact of drones on the victims.

3.2.3 The Terrorist Impact of Drone Warfare

As discussed above, the use of signature strikes significantly increases the risk that noncombatants will be killed and wounded and reinforces the view that merely suspicious behaviour warrants the use of deadly force. But this is only one reason why the current drone program was, and likely remains, terrorist. Even if drone strikes only killed known targets,Footnote 16 the impact of living under drone surveillance affects everyone in the area under surveillance, whether they are targets or not. Unlike other long-range weapons systems, “only drone killing involves detailed surveillance of the target, including post-strike observation” ([55], 345–46).

The Civilian Impact of Drones report produced by the Center for Civilians in Conflict and the Columbia Law School Human Rights Clinic outlines the traumatic effects of living under drone surveillance.Footnote 17 Firstly, drones engaged in surveillance are constantly visible and audible to all those being surveilled, regardless of whether they are targets or not. As van der Linden describes, “[e]veryone is swept up in the surveillance, and living under drones is living under constant fear since, even as a civilian, one may at any given moment be wounded or killed” ([55], 351–52). In an important sense, then, “drones are in their psychological impact indiscriminate weapons” ([55], 351). This psychological impact is extremely traumatic. An interviewer for a UK charity spoke to a Pakistani man who “saw 10 or 15 [drones] every day. And he was saying at night-time, it was making him crazy, because he couldn’t sleep. All he was thinking about at home was whether everyone was okay. I could see it in his face. He looked absolutely terrified” ([7], 24).

Because of the secrecy of the drone program, those living under drone surveillance may have no idea who is being targeted or the basis on which targets are selected. This uncertainty compounds the constant fear that one (and one’s family and loved ones) may be killed or wounded:

With US targeting criteria classified, civilians in Pakistan, Yemen, and Somalia do not know when, where, or against whom a drone will strike. The US policy of ‘signature strikes’ … substantially compounds the constant fear that a family member will be unexpectedly and suddenly killed. A civilian carrying a gun, which is a cultural norm in parts of Pakistan, does not know if such behavior will get him killed by a drone. ([7], 29)

This perfectly illustrates the “intrusion of fear into everyday life” that Michael Walzer identifies as one of the key moral harms of terrorism [57].Footnote 18 The terrorism of drone warfare thus lies not only in the direct physical violence inflicted by drone attacks (which may often kill and maim noncombatants) but also in how drone warfare creates and promulgates a constant, indiscriminate, and terrifying fear of attack.

Compounding the harm of drone warfare is the fact that those who survive a drone attack will often have no way of discovering who attacked them. They are denied access to the norms of accountability: “For victims in particular, there is no one to recognize, apologize for, or explain their sorrow; for communities living under the constant watch of surveillance drones, there is no one to hold accountable for their fear” ([19], 24).

Despite the devastating toll of drone surveillance on those subjected to it, philosophers writing on drones rarely discuss or even mention this aspect of drone warfare.Footnote 19 For example, Mark Coeckelbergh explores the impact of conducting long-term surveillance on drone pilots’ ability to empathise with surveillance subjects [8] but doesn’t mention the experience of those living under surveillance. This focus on the experiences of drone operators rather than on the experiences of those who are subjected to the drone program is typical of most philosophical discussions of this topic. It is also characteristic of media depictions of drone warfare. Whereas media depictions of police riot control technologies make visible and reinforce the criminalisation of blackness that underpins the use of those technologies, media depictions of drones almost always show the aircraft themselves, or the cockpits. It is extremely rare that media images show the impact of drone attacks. Thus, viewers are constantly reminded of the technological “marvel” of these weapons and rarely confronted with what these weapons do to the people killed and wounded by them and those who must live under the near-constant threat of attack. This focus on drone pilots and drone technology further prioritises the perspective of users over those of victims of these technologies.Footnote 20

In sum, the US drone program meets Rodin’s definition of terrorism because it is an ideologically drivenFootnote 21 program that inflicts extreme and ongoing psychological and physical trauma on all those who are subjected to drone targeting and surveillance, whether they are the intended targets or not.Footnote 22 In the absence of clear evidence that the targeting decisions and technological features of the US drone program will substantially change in the foreseeable future, the drone program will likely continue to be a terrorist program under the Biden administration.Footnote 23

4 Conclusion: Terrorism from the Victim’s Point of View

Terrorism, characterised by Rodin as the use of force against those who should not have force used against them, is a morally abhorrent practice. This sense of moral abhorrence is shared by most writers on terrorism, including myself, and is reflected in common usages of the term. Yet, in this chapter I have argued that two forms of state violence—police control technologies and drone warfare—are forms of terrorism, despite rarely if ever being described by that word. I have shown that the terrorist nature of these forms of violence is hidden by features of the technologies themselves, the subjectivity of their use, and by the dominant narratives accompanying them. The narratives of efficiency, neutrality, and precision masquerade as moral values and serve to normalise and justify these forms of violence and mark out those subjected to them as deserving of violent treatment. To understand the terrorist nature of these practices, therefore, we must reject the point of view that treats technologies of violence as neutral objects and shift our focus to the experiences of those who are subjected to them. This should always be our starting point when asking whether a practice is a form of terrorism. Such a victim-centred approach to terrorism would destabilise the power dynamics that privilege the perspectives of users and designers of technologies of violence and allow a better understanding of the nature of terrorism and the ways in which commonly accepted forms of state violence might themselves be forms of terrorism.