The purpose of this article is to provide a multi-perspective examination of one of the most important contemporary security issues: weaponized, and especially lethal, artificial intelligence. This technology is increasingly associated with an approaching dramatic change in the nature of warfare. What becomes particularly important and ever more intensely contested is how it becomes embedded with and concurrently impacts two social structures: ethics and law. While no global regime banning this technology exists, regulatory attempts at establishing a ban have intensified, along with acts of resistance and blocking coalitions. This article reflects on the prospects and limitations, as well as the ethical and legal intensity, of the emerging regulatory framework. To allow for such an investigation, a power-analytical approach to studying international security regimes is utilized.
This article aims to inquire into a highly topical and hotly debated contemporary security issue: autonomous weapon systems (AWS), alternatively known as weaponized artificial intelligence (AI) or, increasingly, as lethal AI. In particular, the focus is on the dynamics and prospects of global regulation, or rather proscription, of this emerging technology. At the core of this analysis are two social structures with which AWS become embedded and which they concurrently impact: ethics and law.
Currently in development, AWS—aka Killer Robots—can be differentiated from all other weapon categories by a unique combination of attributes. First, they are fully autonomous (Kastan 2013, p.49). This presupposes their ability to engage in autonomous (lethal) decision-making (Asaro 2012, p.690), autonomous (lethal) targeting (Sharkey 2012, p.787) and autonomous (lethal) force (Sharkey 2010, p.370). While still containing a considerable theoretical aspect, their autonomy may include the ability to operate without human control or supervision in dynamic, unstructured and open environments (Altmann and Sauer 2017, p.118). Second, they can be used as offensive autonomous weapons (FLI 2015). Last but not least, it is advances in AI that have paved the way for, and characterize, fully autonomous (lethal) weapon systems (O’Connell 2014, p.526; Walsh 2015, p.2). The Campaign to Stop Killer Robots represents ‘the latest in a series of transnational advocacy campaigns in the area of humanitarian disarmament’ (Carpenter 2016, p.58). It has united more than a hundred non-governmental organizations (NGOs) based in various countries, gained broad public support and successfully drawn to its agenda a good number of state governments, thousands of experts, over twenty Nobel Peace Prize laureates as well as elements of the United Nations (UN) and the European Union (EU) (CSKR, n.d.). The Convention on Certain Conventional Weapons (CCW) in Geneva has, expectedly, become the core global inter-governmental deliberation forum for addressing key challenges posed by emerging technologies in AWS (Altmann and Sauer 2017, p.132). The most hotly debated attribute of these technologies is machine autonomy with respect to life-and-death decisions. Given potential humanitarian risks, critics call for a preventive, or rather preemptive, global ban on their development, production and use (FLI 2015; HRW 2016).
Others conversely argue that such risks might rather be associated with a blanket prohibition on advanced autonomy in weapon systems (Anderson and Waxman 2012, pp.39–45; Arkin 2018, p.321). Individual governments’ preferences are divided by a deep rift. Some have either endorsed a full-fledged ban or considered a limited ban, while others have been either sceptical of such restrictions or explicitly opposed to them (CSKR 2019a).
There is a growing body of literature that seeks to study various aspects of security regulation in the age of AI and AWS. Among the issues that have been raised are current trends in government AI regulation vis-à-vis those in international AI regulation (Turner 2019); positive and negative implications of AI for the broader issue area of arms control (Din 1987); challenges of ‘framing’ of intelligent weaponry for related arms control practices (Wallach and Allen 2013); assessment of autonomous military vehicles and dangers associated with their use under the ‘criteria’ of preventative arms control and prohibition (Altmann 2009); challenges that AWS pose to current governance norms (Koppelman 2019); comparative insights into the likelihood of a complete ban on AWS and evaluation of alternative regulatory pathways (Crootof 2015; Rosert and Sauer 2020); and lessons for AI regulation from preceding arms control experiences and paradigms (Maas 2018). However, the above literature lacks comprehensive scrutiny of the dynamics and conditions that have determined the prospects of banning or, at least, effectively regulating AWS. Research on law and ethics has also intensified: both ethical and legal questions come in many shapes and hues in the AWS debate. This body of knowledge includes analyses of how AWS fit within existing international norms, how they challenge their implementation, how they shape normative expectations and whether the formulation of new norms is necessary (Bode and Huelss 2018). The discussion could be enriched by a systematic disaggregation and explanation of the ethical and legal intensity of the emerging regulatory framework.
This article seeks to fill these gaps. The paragraph below serves as a caveat qualifying the existing empirical dichotomy. There are two clearly opposing parties: the vociferous and assertive pro-ban movement and the blocking coalition. The former is primarily represented by the Campaign to Stop Killer Robots, whose establishment and ‘ban killer robots’ mantra sparked a debate on AWS in political circles. The latter brings together the world's leading countries in AWS research and development (R&D) that oppose a ban on AWS, despite still ambiguous policy statements, and can collectively prevent its passing at the CCW. There are other types of involved actors caught between the two extremes. Since the Campaign is primarily a coalition of NGOs, there are other actors who, while aligned with its message and goals, do not ‘officially’ belong to it (Rosert and Sauer 2020, pp.15–16). Among such explicit, often active, supporters are like-minded governments (e.g. African states), experts (e.g. the International Committee of the Red Cross (ICRC), the Future of Life Institute) and bureaucrats in intergovernmental organizations (IOs) (e.g. the UN Secretary General (UNSG), the UN Human Rights Council (UNHRC), etc.). There are also silent sympathizers as, for example, public surveys sometimes reveal. The situation is similar for the blocking coalition. Though its opposition may be the most pronounced, more actors are in fact involved in AWS R&D and focused on exploring the potential benefits of AWS. Closely aligned are academics, scientists and lawyers actively supplying expertise on the strategic importance, military utility, and ethical and legal compliance of AWS, as well as private companies working in close collaboration with governments in AWS R&D (e.g. in China).
A degree of implicit support is also in place: there are countries (such as Italy) and arms producing companies that continue to invest in AWS R&D but do not clearly position themselves in the debate (Haner and Garcia 2019, p.334; PAX 2019; CSKR 2019a). The range of different positions adds to the complex socio-political reality that we study, especially as a complete ban is not the only possible course of action. A regulatory treaty of some sort, domestic laws and policies, nonbinding resolutions, common understandings, professional guidelines and codes of conduct are possible (Crootof 2015, pp.1897–1903). The possibility of a political declaration has also been considered (Acheson 2017, p.2; 2018, p.1). While reflecting on these nuances, we utilize a dichotomy as a heuristic tool as the identified range does not fundamentally challenge the pro- vs anti-ban logic still dominating the AWS debate (Fig. 1).
Our framework of analysis draws on a power-analytical approach to studying international security regimes. It refines that approach to expound on the nexus between ethics and law and, more generally, on synergies among four conceptual types of power. Here, international security regimes are understood as complex aggregates of normative expectations and legal rules, conditioned by the relationships and interactions among different social actors. Even though there is no global regime formally banning, or even purposefully regulating, AWS, this approach is well suited to reflect on attempts at establishing one and the existing resistance. More importantly, it allows the study of multiple discursive, normative, legal, cultural, humanitarian, institutional, political, economic, strategic, military and technological dynamics, as well as various interstices between them, as part of a joint investigation. This is because it cuts through the three waves of regime theorization through the conceptualization of power. Ultimately, it enables an understanding of the conditions, interests and processes that have shaped the prospects of a ban, as well as the ethical and legal intensity of the emerging regulatory framework. For a more nuanced assessment, parallels are repeatedly drawn between the selected case and other regulatory and prohibitory precedents, including respective efforts.
The structure of the article is as follows. It opens with an introduction to the current state of affairs with respect to the international regulation of AWS. In the following section, a power-analytical approach is introduced. What follows is an inquiry into the various aspects and a systematic explanation of the dynamics of global governance in the field of AWS. Informed by the power-analytical approach, it is conducted through the prism of four types of power and is structured accordingly. The next section summarizes and graphically synthesizes the findings (Fig. 2) and ponders possible ways forward. It also discusses the relevance of this analysis and suggests avenues for further research. The paper ends with a few concluding remarks.
2 Towards security regulation: legal underbrush and ‘severe’ politicization
There is currently no global regime formally banning, or even purposefully regulating, fully autonomous (lethal) weapon systems. However, two tightly intertwined strands of developments are relevant for interpreting advancement on this front. First, this article sheds light on the legal underbrush concerning AWS security regulation. Second, the success of what might be termed the ‘severe politicization’ (Emmers 2013, p.137) of AWS is put under the microscope. With regard to the former, two aspects are of importance. It is certainly expected that AWS should fit within the readily established legal framework. Primarily, these are International Humanitarian Law (IHL), International Human Rights Law (IHRL) and the UN Charter, which regulate the use of force, the principle of political responsibility and the legality of weapons (Liu 2012, p.638; Garcia 2015, p.60; Brehm 2017). Also falling into this category are the Draft Rules proposed by the ICRC, which would prohibit weapons with harmful, including uncontrollable and unforeseen, effects on the civilian population (Mathews 2001, p.992). The other relevant dimension of the legal backdrop is the emerging framework regulating the use of cyberweapons and its implications for AWS. Such key international mechanisms involve the Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies as well as the Council of Europe Convention on Cybercrime (Stevens 2019, p.271). Both, inter alia, concern digital information, software, computers, computer networks and telecommunications (CoE 2001; WA 2018). AWS are often discussed with reference to cyberspace, in particular cyber-vulnerability, cyberwarfare, cybercrime or cyberterrorism (Liu 2012, p.648; Schmitt 2013; Noone and Noone 2015, p.33; Klincewicz 2015, pp.163–170; Sharkey 2017, p.183).
This is because AWS similarly feature (networks of) computers or software-controlled systems operating across the digital and the physical (Dehuri et al. 2011, p.2; McFarland 2015, pp.1326–1328; Um 2019, pp.1–3).
Going beyond the prevailing legal background, the Campaign to Stop Killer Robots has actively lobbied for a complete, issue-oriented ban on AWS. Formed in 2012 as a coalition of NGOs, it has grown into a transnational political advocacy network, or a ‘global coalition’, as per its self-designation (CSKR, n.d.). Cutting through the thicket of legal, ethical, moral, philosophical, political-strategic, scientific and military discourses, its agenda has deeply penetrated the political scene and the domain of security regulation. Its thrust has spread across global, regional and local levels. Issue-specific and ad hoc calls, pledges, reports, open letters, directives, resolutions and declarations have been produced by (groups of) individuals with diverse expertise, NGOs, technology companies, segments of global and regional IOs and certain governments (e.g. NWI 2014; PAX 2014a; Conn 2018). Since 2013, AWS have had a place on the UN arms control agenda and the UN’s CCW framework has served as the principal venue for negotiating a legal instrument on AWS. After three informal expert meetings held to address AWS in 2014, 2015 and 2016, a Group of Governmental Experts (GGE) was established at the CCW. With the GGE meeting regularly since 2017, the AWS debate has gained a ‘formal status’. Although the pro-ban movement is dissatisfied with the ‘slow pace’ of the GGE process, most state parties still see the CCW as the ‘appropriate venue’ for addressing AWS (Rosert and Sauer 2020, pp.12, 17). At this moment, any decision on banning or regulating AWS would be mainly based on GGE recommendations.
The aforementioned legal and political elements represent mere steps towards AWS security regulation but do not furnish one. Given the lessons from the case of cyberweapons, the applicability of more general international laws to regulate a ‘weaponized code’ may be severely contested (Stevens 2019, pp.287–288). The legal framework regulating cyberweapons is fragmented and only features piecemeal measures (Stevens 2019, p.271). At the same time, despite the great success of giving AWS a political footing, the issue has not become truly ‘securitized’. This is because securitizing actors have not yet been successful in convincing states to adopt extraordinary, issue-specific measures beyond standard political procedures (Emmers 2013, pp.133–135).
3 A power-analytical approach to international security regimes
In conceptual terms, this article builds upon the notion of an international security regime. It implies a set of ‘principles, rules, and norms that permit nations to be restrained in their behaviour in the belief that others will reciprocate’ (Jervis 1982, p.357). This conceptual framework allows us to study a regime as an aggregate of both ethical and legal building blocks. The power-analytical approach is deployed to dissect the dynamics and conditions that have determined the prospects and limitations of a regulatory, or even prohibitory, regime of this sort in the domain of AWS (Hynek 2018a). In line with this approach, the definition of an international security regime can be further refined as a complex aggregate of normative expectations and legal rules, conditioned by the relationships and interactions among different social actors.
This approach strikes a balance by cutting through the three waves of theorizing regimes within the discipline of IR, namely neo-neo-convergence regime theory, cognitivism and the radical constructivist/post-structuralist understanding (Hynek 2017). The consideration of the multiplicity of power relations is utilized as the means to bring them together and organize regime analysis. Barnett and Duvall’s (2005, p.48) conceptualization of a ‘taxonomy of power’, the most sophisticated existing theorization of power, is taken as the basis. It renders four types of power: productive, structural, compulsory and institutional. First, in relation to expressions of power, one can make a distinction between interaction and constitution; and second, in connection to the specificity of social relations of power, direct and diffuse relations can be identified (Barnett and Duvall 2005, pp.45–48).
As a form of constitutive and diffuse social relations, productive power permeates systems of meaning and knowledge. It is used to analyse the discursive and cognitive framing of AWS and related political subjectivities with a regulatory effect. Through the lens of structural power, which epitomizes constitutive and direct social relations, power hierarchies and associated interests are examined. This power configuration is utilized to define the key stakeholders in the domain of AWS in terms of the distribution of objectives, obligations and benefits. It helps to capture structural positions and capacities of the involved actors in direct relation to one another. Compulsory power captures a series of interactions through which actors come into direct relations with one another. It allows us to study how the discerned stakeholders in the issue area of AWS are able to control and potentially alter one another’s behaviour outside of the working institutional framework. Institutional power, in turn, offers insight into the mediatory power of involved institutions and the nature of diffuse interactions among actors. It allows us to inquire into the availability of venues and decisional rules as well as lines of involvement and responsibility within them and configurations of path-dependence. These are the factors that shape outcomes with respect to institutional limits on AWS and are crucial to consider (Barnett and Duvall 2005, pp.48–57; Hynek 2018a).
Bourne’s (2018, p.444) notion of a ‘topology of power’ is additionally utilized to avoid stasis and capture how these forms of power inter-relate and cross-pollinate. This allows us to move from ‘critical, mutually exclusive, and exhaustive distinctions’ (Barnett and Duvall 2005, p.43) to ‘dynamism that underlies this diversity’ (Bourne 2018, p.444). We see this analytical move as crucial to grasping the linkage between ethics and law, embedded with complex, synergizing configurations of power. However, this study takes a further step in theorizing the intertwined workings of these four power configurations: it shows how they are not alike in terms of their explanatory value. Wider normative discourses, as well as their historical and structural conditionings, are seen as particularly relevant to regime analysis. To be more specific, this study illustrates how everything starts with productive power, in particular with the workings of ethical forces. It is necessary to clarify that forces within the broad ideational structure of ethics are contingent upon and prone to manifold cognitive and affective manipulation. The transposition from the structure of ethics into the legal regulatory realm (IHL, weapons regulation dynamics, etc.) is a matter of successful transfer and consequent legal encapsulation of discursive ethical arguments. One of the key translating and activating mechanisms between what are otherwise separate structures of ethics and legal regulation in the issue area of weapons regulation is the existing or prospective research, development and associated funding, all embedded within the ‘global arms transfer and production system’ (Krause 1992, pp.205–215). It epitomizes the real or imagined military materialization against which a mix of broad, ethically anchored and stigmatizing discourses as well as specific legal principles and norms can be utilized. This captures how motions within structural power might kick in as a trigger.
Complementing the production and politics of stigmatization, there is a second-order mechanism that operates solely at the level of legal structures. It is the mechanism of legal ‘grafting’ (Price 1998, p.628), which allows for forging novel legitimizing interlinkages between emerging norms and previously institutionalized norms, themselves products of successful transposition and encapsulation of ethical discursive artefacts. This intellectual advancement accounts for the workings of institutional power in shaping the contours of productive power. Consequently, the constitutive power of ethically anchored discourses works on balance with oppositional narratives, also falling under the confines of productive power, to either maintain or reverse structural power. This interaction heavily relies on mechanisms of direct influence rendered by compulsory power. Such a complex power interplay may result in the production of normative ‘hooks’ and, at best, the modification of ethical artefacts into a set of more cogent and specific legal principles and norms. The latter come into being within the confines of institutional power, hence we argue that institutional power has, to a large degree, a synthetic quality despite its distinctive logics and processes. The proposed theorization allows us to trace and disaggregate the ethical and legal intensity of AWS regulation, to provide a robust explanation and to reflect on the prospects. All of this is graphically nuanced to order empirical analysis (Fig. 2).
4 Productive power: nascent stigmatization of a ‘politicized’ category
This section focuses on the discursive and cognitive framing of AWS with a regulatory effect. Particular attention is paid to the role of ethics. The goal is to assess the constitutive power and prominence of stigmatizing narratives on balance with competing narratives. Light is shed on the constitutive role of technologies and differences in their interpretation, legal conditions, production and transfer peculiarities, economic disparities, intellectual traditions and habitual practices, ethical and cultural standards.
The principal objective of pro-ban actors has been to portray AWS as illegitimate military instruments. Generally speaking, they build the case upon the convergence of deontology and consequentialism. From the perspective of the former, they denounce the automation of lethal decision-making as unacceptable by definition in moral and ethical terms (Asaro 2012, pp.708–709; Zawieska 2017, p.49). This deontological viewpoint is reinforced by multi-dimensional consequentialist reasoning. Such technologies are condemned by critics as ‘operat[ing] in a lawless zone’ (Kastan 2013, p.47). In the first place, they reputedly threaten to breach IHL and IHRL, including the principle of individual and state political responsibility and accountability for violations (Asaro 2008, 2012, pp.692–693). Similar and closely related concerns are raised with respect to the UN Charter norm of (non-)use of force, the principle of legal review of (new) weapons codified in the additional protocol to the Geneva Convention as well as the Martens Clause introduced in the second Hague Convention and the Geneva Convention additional protocols (Gubrud 2014, p.40). The premise of non-compliance in the case of AWS particularly lies with what is termed the ‘dehumanization of killing’ (Wagner 2014, p.1410) and the ‘depersonalization of war’ (Klincewicz 2015, p.163). The former implies depriving combat of healthy human emotions and judgement (Asaro 2012, p.701; Johnson and Axinn 2013, p.136; Wagner 2014, p.1415; Sharkey 2017, p.180). The latter stands for both the depersonalization of responsibility (Heyns 2016, p.12) and the objectification of enemy (non-)combatants (Sharkey 2012, p.788; Korać 2018, p.54). All of this is condemned as a threat to the fundamental values of human dignity and human life (Sparrow 2007, p.68; Heyns 2013, p.6, 2016, p.3; Johnson and Axinn 2013, p.134; Garcia 2015, pp.58–61). 
The violation of such values by lethal autonomous weapons has even been identified as an infringement on the ‘norm of civilization’ (Rosert and Sauer 2019, p.373; for the original and more general argument, cf. Price 1995).
Interestingly, the debate over the morality of drone warfare, especially in light of recent technological advances in facial recognition, has been framed in opposite terms as targeted killings, i.e. the (hyper-)personalization of war (Dunlap 2014, pp.109–111; Schulzke 2017, pp.56–57). The inability of AWS to conform to legal rules and normative standards is often attributed to the unpredictable character of such technologies. It derives from the impossibility of coding full-fledged intelligence in the first place, inherent software imperfections and vulnerability to cyber attacks (Lin et al. 2008, p.20; Asaro 2012, p.691; Klincewicz 2015, pp.167–170; Noone and Noone 2015, p.33). What exacerbates the case is a warning that autonomous weapons might evolve into a mass product or ‘the Kalashnikovs of tomorrow’ (FLI 2015). This technological turn is described with reference to a ‘revolution in warfare’ (FLI 2015), albeit the alarm is sounded that this one is different from earlier military revolutions (Heyns 2013, p.5) and even represents a radical departure in strategic affairs (Payne 2018, p.218). The most radical fears feature robots ‘destroying humankind’ (Lin et al. 2008, p.63).
To reinforce all these narratives, pro-ban actors have, to a large extent, anthropomorphized the debate on autonomous weapons. This is how Killer Robots have emerged as ‘a historical imaginary of the twenty-first century’ (Karppi et al. 2016, p.120). The very notion of Killer Robots relies on a horrifying representation of AWS as a machine version of human soldiers. It puts aside other uses of AI in networks and military systems, especially those hidden from the public eye, because these would not have the same appeal and would not resonate well with a large audience of non-experts and civilians. This is why the campaign has made the decision to use the image of humanoid robots killing people, even though the militarization of AI goes far beyond AWS in this narrow sense.[1] The underlying rationale is to convey ‘a simple and dramatic message’ (Rosert and Sauer 2020, p.18; for the general notion of short causal chains, cf. Keck and Sikkink 1998).
Given all the risks, those working to ban Killer Robots (CSKR, n.d.) have drawn a sharp dividing line between themselves and ban-resisting actors. Having portrayed the removal of meaningful human control over the use of lethal force as unethical, irresponsible, immoral and illegal (Asaro 2012, pp.695–709), they have particularly labelled this a moral red line (CSKR 2019b) or a moral threshold (HRW 2016). Discussing the ‘precedent’ of a preemptive ban, i.e. the one on blinding laser weapons (BLW), Human Rights Watch (2018) has recalled the notions of ‘civilized nations’ or ‘civilization’ and, on balance, that of ‘barbarity’. Resembling the fate of nuclear and chemical weapons, this discursively (re)produced rift again delineates the ingroup of ‘normals’ or the ‘civilized’ (Price 1995, p.95; Sauer and Reveraert 2018, p.438). With AWS portrayed also as an attribute of rich, elaborate economies, the stigmatization of the outgroup is further reinforced via fears of imbalanced and asymmetric warfare (Asaro 2008, pp.62–63; Garcia 2018, p.339).
Structural conditions of emergence have paved the way for the deep embeddedness of this discourse. The general condition of possibility is the traditional distinction between human and machine with respect to control over life and death (Karppi et al. 2016, pp.111–112). The deeply rooted history of technological advancement has produced a strong imaginary of weapons as ‘passive tools’ (Bourne 2012, p.142). It consolidates the dominant logic that ‘guns don’t kill people, people do’ (Latour 1999, p.174) and problematizes the hypothetically reversed logic that ‘guns don’t kill people, cyborgs do’ (Bourne 2012, p.141). The prohibitionary character of Killer Robots is also co-constituted by deeply rooted science fiction and popular culture images and stereotypes (Carpenter 2016). Countless sci-fi films and novels play a remarkable role in creating the image of a robot that could potentially become self-aware and disobey its programming (Krishnan 2009, pp.7–8). This imaginary mainly reflects western cultural representations of automata depicted in movies such as The Terminator (1984) or I, Robot (2004) (Karppi et al. 2016, p.111). What facilitates the stigmatization of AWS along these lines is that laws are often imperfect, incomplete and require human interpretation and judgement to apply to real-world situations (Asaro 2012, pp.700–705). The principle of human dignity and the right to life serve particularly well to divert attention from the military utility of AWS (Garcia 2015, pp.60–61; Heyns 2016, pp.8–10). These norms have become ever more prominent as a result of a macro-shift in the systemic ethical force after the end of the Cold War. This transformation entails the replacement of sovereignty by human rights as the central force of compassion. The concept of security is consequently fixed upon a human dimension previously within the sovereign purview of states (Krause 2002, p.260). 
Echoing the nineteenth-century humanitarian dispositif, the individual has been (re)constituted as a referent object of security (Hynek 2018b).
A few precedents played a crucial role in embedding post-Cold War ethical forces and humanitarian dispositifs into weapons regulation dynamics. Bans previously imposed upon certain military instruments such as biological and chemical weapons, anti-personnel landmines (APL) and cluster munitions (CM) encouraged uncompromising ambitions in the area of humanitarian disarmament (Hynek 2018b). Though AWS rely on more complex, dual-use and strategically advantageous (combinations of) technologies,[2] CCW Protocol IV, preemptively banning BLW, set a particularly important precedent (HRW and IHRC 2015; HRW 2018). The limited advancement of legal control of cyberweapons, as detailed above, in turn provides a starting point and precursor principles for AWS security regulation. The advocacy underwriting the case of cyberweapons even embeds and complements the same polemic on AWS. Since existing international laws are barely applicable to the (insecure) nature of code and the Internet, issue-oriented legal principles are seen as indispensable in addressing such transnational challenges (Stevens 2019, pp.272, 280, 287). The key role of precedents is in norm-setting: roughly the same set of legal, especially humanitarian, principles served as the basis for the Chemical Weapons Convention, the Mine Ban Treaty, the Convention on Cluster Munitions, etc. (Hynek 2018b). Though they have yet to yield a global security regime, similar concerns are raised with respect to regulating cyberweapons (Stevens 2019). As we showed, the stigmatization of AWS is to a large extent stimulated by this repeated practice of employing similar foundations of legal arguments and humanitarian language.
Alongside the growing appeal of legal grafting, it was countries' accelerating and increasing investment in AWS R&D that triggered the pro-ban movement. Early indications of this trend were clearly evident at the beginning of the 2010s (Haner and Garcia 2019). At around the same time that it became increasingly believed that AWS could be ‘developed within 20 to 30 years’, the Campaign to Stop Killer Robots appeared (HRW 2012). These developments came to be seen as especially worrisome against the background of a general rise of the economic within the political. Such a structural transformation revealed a tension between broader changes in ethics and other structural forces, namely conflict and economy (Hynek 2018b). The stratified global arms transfer system, involving domestic sales from companies to governments, (illicit) trade and co-production arrangements, grabbed particular attention (Krause 1992, pp.205–215; Stavrianakis 2010, chap.2). In view of the dual-use nature of AWS-enabling technologies (FLI 2015; Altmann and Sauer 2017, p.132), the context of civilian technologies becoming increasingly useful to the military adds to the overall picture (Smith and Udis 2003, pp.102–103). Such contextual conditions have also been thought of in connection with declining production costs, including recent technological advancements in 3D printing (Haner and Garcia 2019, p.331). Also, they have been associated with the global trend of turning code into a weapon (Stevens 2019, p.289). Similar concerns have already been raised in relation to drones supposedly creating ‘unethical relationships of civil-military technology sharing’ (Schulzke 2017, p.55).
Well thought-through, coherent and deeply embedded, the prohibition discourse has gained a strong foothold. It is fair to conclude that ‘a stigma is already becoming attached to the prospect of removing meaningful human control from weapons systems and the use of force’ (CSKR 2019b). However, the central underlying goal of a global prohibition regime for (lethal) autonomous weapons has not been achieved. Nor has any form of issue-oriented legal regulation been adopted. What explanations can be discerned from scrutinizing the discourse? Though grounded in materiality and often likened to the cases of biological, chemical, gunpowder and nuclear weapons (Heyns 2013, p.5; FLI 2015; Singer 2009, pp.179, 203), Killer Robots are less a physical entity than a ‘politicized’ concept (Karppi et al. 2016, p.111). Except for precursor systems of various complexity, fully autonomous (lethal) weapons do not exist thus far, since they have not yet been extensively developed and have never been deployed (Walsh 2015, p.2; Noone and Noone 2015, p.28; Horowitz 2016b, p.7; Sauer 2016). The very notion of Killer Robots renders the issue ‘futuristic’ (Rosert and Sauer 2020, pp.13, 18). In other words, advocacy has overrun practice. This in no small part accounts for the failure, at this moment, to properly define the object of stigmatization (Horowitz 2016a). In general, the nature of autonomy in AWS derives from an AI-enabled process that can take any form, perform cognitive functions and carry out complex tasks without human intervention (Footnote 3). There are concerns that true autonomous weapons might even challenge the distinction between weapons, methods of warfare and warriors (Liu 2012, pp.628–629, 636–637; Heyns 2013, p.6). The situation also echoes and exacerbates a few setbacks on the way to cyberweapons governance.
The dual-use, digital and unconventional (physical) nature of the technologies may make their global prohibition or over-regulation neither desirable nor possible in terms of verification, compliance and enforcement (Stevens 2019, pp.272, 280, 288). Stigma politics apparently lacks a meaningful practical basis with respect to AWS, even though R&D is well underway (Bode and Huelss 2018, p.400). Besides that, the prohibition discourse seriously downplays the advantages of AWS. This has triggered firmly grounded scepticism, sometimes even resistance. Many have drawn attention to the military utility and strategic importance of AWS, as well as to the illusions and biases guiding the prohibition discourse. One of the key arguments is that AWS might in fact be better than humans at satisfying ethical codes and legal principles (Arkin 2010, 2017, 2018; Anderson and Waxman 2012, 2013; Schmitt 2013; McFarland 2015, pp.1326–1327, 1338; Birnbacher 2016, pp.118–121). Bode and Huelss (2018), at the same time, warned against overemphasizing ‘fundamental norms’ (human rights, responsibility, etc.) to the detriment of ‘procedural norms’ (efficiency, effectiveness, survival, obedience, etc.). All these nuances have so far prevented the prohibition discourse from becoming a ‘regime of truth’ (Foucault 1980, p.131).
5 Structural power: asymmetries in knowledge and material production
The following section analyses power hierarchies and associated interests defining the key stakeholders in the domain of AWS. In doing so, the structural position of pro-ban actors is carefully delineated in relation to ban-resisting actors. Based on the discovered distribution of benefits, obligations and objectives, inferences are made with regard to their structural weight.
The movement working to ban AWS is composed of (international) non-governmental organizations ((I)NGOs) and their various umbrellas, elements of global and regional IOs, (like-minded) states, epistemic communities of scientists, Nobel Peace Prize laureates and Peace Prize laureate organizations, faith leaders, hi-tech business companies and the mass media. Multiple power heterarchies, with all of these actors in different roles and capacities, are apparently involved (CSKR, n.d.). Structurally, an ‘effective and efficient’ relationship between non-governmental or civil society organizations and governments has gradually been set up in many liberal countries (Flora 1986; Gidron et al. 1992). The former have come to be considered even better than governments at much of what they do, including the promotion of greater public awareness of international issues (Chrétien 1995). With their functions limited mainly to consultation or observation, non-governmental entities have also penetrated the UN system of governance via either formal or informal mechanisms (Ruhlman 2015, pp.37–40). Regularly issuing statements and organizing side events, NGOs and think tanks with participation rights have been involved in inter-governmental deliberations on possible arms control options for AWS (Rosert and Sauer 2020, p.17). Some elements of global and regional IOs have endorsed the agenda. Amid the surge of lethal autonomous robotics, the report of the UNHRC's Special Rapporteur on extrajudicial, summary or arbitrary executions brought human rights and ethics to the fore (Heyns 2013). Endorsing a ban on AWS, the UNSG has posed a challenge to the opposition (Guterres 2018). The OSCE Parliamentary Assembly (OSCEPA) and the European Parliament (EP) have also supported the ban proposal at the parliamentary level (EP 2018; OSCEPA 2019).
In reaction to the CCW process, the statement on behalf of the EU has similarly underlined the importance of human control over life and death decisions (EEAS 2019). The European Commission (EC) and the proposed EU Coordinated Plan on Artificial Intelligence have also promoted a ‘human-centric’ approach to AI (EC 2018). Acknowledged to be an important frame of securitization (Vultee 2011), media around the world have covered the cause generously and favourably. Those that have covered the agenda include high-profile outlets such as the BBC, The Independent, Forbes, The Guardian, The New York Times, The Washington Post, The Telegraph and the Thomson Reuters Foundation (e.g. CSKR 2018a). The case has also been intertwined with science fiction and fantasy, which are assumed to be increasingly ‘invoked by policy elites in service of arguments about the real world’ (Carpenter 2016, p.53).
The main venue for global deliberations, particularly intergovernmental negotiations, has become the UN's CCW in Geneva (Altmann and Sauer 2017, p.132). Blocked by some treaty members, mainly great powers, the CCW format previously failed to ban APL and CM (Abramson 2017; Hynek 2018b). This is because political elites are the main actors in the global decision-making process (Basu 2004, p.139). From a theoretical perspective, they are also believed to have an advantage over other actors in seeking to influence audiences and calling for the implementation of extraordinary measures (Emmers 2013, p.134). Almost thirty states, including Algeria, Austria, Brazil, Costa Rica, Ghana, Guatemala, Jordan, Morocco, Nicaragua, Pakistan, Uganda and Zimbabwe, have already objected to AWS (CSKR 2019a). Since this does not even constitute a majority, we offer a politico-economic insight into their relative structural weight with respect to AWS regulatory politics: the global arms transfer and production system is stratified. The barriers to movement between tiers in such a system remain relatively high, whereby dominant actors concentrate on the pursuit of power, and sometimes also wealth, in their arms exports (Krause 1992, pp.98, 206–210). Possibly for these reasons, less technologically developed states have endorsed a ban while the top world leaders in AWS development (primarily the US, China and Russia, but also the UK, France, Germany, Sweden, Italy, South Korea and Israel) have not done so (Haner and Garcia 2019, pp.332–335; CSKR 2019a). It is worth elucidating, however, that a traditional circuit of ‘arms transfer dependence’ on the part of recipients helps explain other less technologically developed states’ opposition to the ban (Kinsella 1998, p.19). Their motivation is clear: ‘technologically inferior and vulnerable’ states often concentrate on ‘the pursuit of security’ in their arms transfers (Krause 1992, p.98).
Therefore, opposition to the ban on AWS appears deeply embedded in the prevailing social order. Concurrently, those economically dominant actors largely belong to the Permanent Five (P5) and command the UN Security Council (UNSC), a body that has the primary responsibility of maintaining international peace and security (Ruhlman 2015, pp.37–38). Tailored upon the contours of business logic, their political domination recalls the destiny of cyberweapons and heralds that ‘decisive political will supporting prohibition is unlikely’ (Stevens 2019, p.289).
Though significant for a country’s economic competitiveness and export success, information technologies are among those primarily driven by commercial interests and private funds, often within transnational companies (Smith and Udis 2003, pp.102–103). There are indications that the commercial sector (mostly large technology companies within Western democracies) has sought to detach itself from military robotics, or so-called Killer Robots. For example, Tesla and DeepMind took part in the open letter; Google pledged not to develop AI for use in weapons, having also sold the controversial Boston Dynamics; Clearpath Robotics pledged its endorsement of the preemptive ban on AWS (Altmann and Sauer 2017, pp.132–133; Google 2018; CSKR, n.d.). Yet while the discourse is settling, in practice the commercial-military bond has become entrenched in Western AI development. Though well known at the time for its close ties to the US military, Boston Dynamics was acquired by Google for commercial purposes (Altmann and Sauer 2017, pp.132–133; Diakogiannis 2019). Google was also found to have had commercial ties with the US military drone programme, in particular through its contract with the Pentagon's Project Maven (Gibbs 2018; Wakabayashi and Shane 2018). While committed to its pledge, Clearpath Robotics has continued to work with its military clients (CSKR, n.d.). Some of the world’s largest arms producers and prominent defense contractors of the Pentagon—Lockheed Martin, Boeing, Northrop Grumman and Raytheon—are working on technologies relevant to AWS (PAX 2019, pp.5–8). The idea that the private sector should detach from military robotics does not even have the same appeal in other countries. The obvious example is China. Its national strategy of ‘civil-military fusion’ blurs distinctions between the military and civilian domains and renders AI explicitly dual-use from the outset (Verbruggen 2019, p.340).
The Russian government, particularly the Ministry of Defence, has also joined forces in AI development with key national corporations and companies such as Sberbank, Yandex, Rostelecom, Rostec, Gazprom Neft and Rosatom (President of Russia 2019). The group of companies that raise particularly high concerns about AWS includes AVIC and CASC (China), IAI, Elbit and Rafael (Israel), Rostec (Russia) and STM (Turkey) (PAX 2019, p.5). More generally, robotics as well as automation and complex analytics have deeply penetrated the profit-driven manufacturing industry, including tech giants such as GE, Siemens, Intel, Fanuc, Kuka, Bosch, NVIDIA and Microsoft (Walker 2019). Although civilian innovation is ‘no guarantee’ of the development and diffusion of AWS (Verbruggen 2019, p.338), both robotics and AI—the technologies that pave the way for AWS—are dual-use (Altmann and Sauer 2017, p.124).
Often tightly interlinked with the previous cluster of actors, epistemic communities have also engaged in deliberations over AWS security regulation (e.g. CCW 2015). This category combines scientists, technicians, defence experts, military officials, international lawyers and even faith leaders. They are usually motivated by knowledge-based expertise and a desire to support the public interest or improve human welfare (Cross 2013, pp.150–158). The recognition of their field expertise, or more general professional judgement, may be thought of in terms of ‘authority’ (Schneiker 2016, chap.4). Framing a particular issue or problem and contextualizing it from various perspectives, they influence both governments and non-state actors (Cross 2013, p.138; Schneiker 2016, chap.4). They have also become crucial boosters of humanitarianism (Schneiker 2016, chap.4). In this sense, they constitute ‘a major means by which knowledge translates into power’, and as transnational processes and transnational global governance evolve, epistemic communities are becoming ever more significant (Cross 2013, pp.138–139). What is important is that members of such groups are also driven by their particular professional self-understanding (Schneiker 2016, chap.4). This allows for variation in internal cohesion within epistemic communities, as well as for coexisting or conflicting epistemic communities (Cross 2013, pp.147–148). This hypothetical setup aptly depicts the case of AWS. Some experts have articulated support for the ban (Sharkey 2012, 2017; Asaro 2012; Garcia 2015; Klincewicz 2015) while others have pointed out biases or weaknesses in their approach and have drawn attention to positive facets of AWS (Arkin 2010, 2018; Anderson and Waxman 2012, 2013; Schmitt 2013). There is a risk of the latter's concerns being silenced, since internal cohesion within epistemic communities is essential for exercising influence on policy outcomes (Cross 2013, p.138).
The ICRC deserves special attention. This neutral INGO with a legal personality and an IO-equivalent status engages in humanitarian diplomacy and, occasionally, prohibition-oriented policy advocacy, as the cases of APL, CM and AWS testify (Price 1998, p.632; ICRC 2016; Hynek 2018b). Its legal personality particularly implies its ‘legal status vis-a-vis states in international law’ (Mathur 2011, p.182). It also seeks to establish and strengthen ties with the private sector (ICRC 2018b). At the same time, the ICRC has emerged as the key humanitarian actor—in particular a ‘humanitarian expert’ (Mathur 2017, p.19)—first and foremost driven by its guardianship of the Geneva Laws, or IHL (Forsythe 2005, p.13; Mathur 2011, p.182; 2017, pp.4, 95). It also possesses legal expertise in the fields of IHL and IHRL, the progressive convergence of which has entailed the convergence of their respective ‘guardian institutions’, namely the ICRC and the UN (Fortin 2012). Diplomatic facilitation of the pro-ban movement by the ICRC is thus of particular importance (ICRC 2016).
Admitted to have an impact on securitization processes (Salter 2008; Emmers 2013, p.134), the public also gets drawn into AWS-oriented deliberations. Generally, its stance remains fragmented, with only 61 per cent clearly endorsing the ban (IPSOS 2019). Moreover, public opinion is contextual and varies with the specifics of particular circumstances and scenarios (Horowitz 2016b). The strongest opposition to AWS comes from Turkey, South Korea and Hungary; the strongest support for AWS from India and Israel. While most respondents oppose AWS in the US, the UK, Russia and China, there are still many supporters of their use in war in each of these countries. The central concern raised against AWS is that they are immoral and unaccountable (IPSOS 2019). As a survey in the US has shown, the most common reason given in favour of AWS is ‘force protection’, i.e. the idea that such weapons can protect the lives of human soldiers (Carpenter 2013). We assume this line of thinking represents an overall trend, as ‘[s]ending an army of machines to war—rather than friends and relatives—does not exact the same physical and emotional toll on a population’ (Wagner 2014, p.50).
Overall, while pro-ban actors might be broken down into (I)NGO advocacy, state and audience support, facilitation by (elements of) IOs, expert blessing and commercial sector endorsement, ban-resisting actors can be broken down into almost the same categories. This signifies a divided social stance in relation to a ban on, or severe restrictions of, AWS. In terms of the structural distribution of power and associated interests, the current state of affairs does not favour the pro-ban movement. Three key clusters of stakeholders are crucial for consideration in this regard. States, especially those belonging to the P5, epistemic communities and the commercial sector are those that primarily exercise control over the spheres of security, knowledge production and technological know-how, finance and material production (Strange 1994, pp.24–32). The most powerful states, which are concurrently the top world leaders in AWS development, oppose the ban on AWS. They do so particularly under the umbrella of a knowledge-based rationale provided by experts, and often in collaboration with commercial players. Their main clients in the global arms transfer and production system are also likely to remain supportive of their stance on AWS. This testifies to the disadvantageous structural position of the movement against AWS. Interestingly, many of its participants have been involved with the successful APL and CM prohibition campaigns and with regulatory efforts in the case of small arms and light weapons (Marks 2008; Krause 2002, p.256; Hynek 2018b). Opposition to such humanitarian endeavours has in most cases also principally lain with ‘a handful of weapons-dependent governments’ (Johnson, n.d.). To be more specific, roughly the same few have also opposed severe restrictions on APL, CM, small arms and nuclear weapons (Krause 2002, p.256; Marks 2008; Hynek 2018b; Bourne 2019; Johnson, n.d.).
The following section will show how structurally disadvantaged actors might be able to overcome gaps in politico-economic power, particularly through certain forms of authority ‘employed as a power resource to influence transnational outcomes’ (Hall 1997, p.591).
6 Compulsory power: hybrid moral thrust vs great power politics
This section examines efforts by ban-resisting actors to maintain their structural dominance and freedom to continue investing in AWS. Also, it analyses attempts by pro-ban actors to alter their policies. Particular attention is paid to how actors’ authority is translated into strategies of direct influence to either maintain or reverse power asymmetries. Empirically, we focus on the discursive and practical confrontation between the hybrid moral thrust generated by the pro-ban movement and the opposition produced by great power interests and politics.
The hybrid nature of the pro-ban movement is evident, for example, in one of the UNGA First Committee side events, held on 18 October 2016 (CSKR 2016). It brought together a figure central to the CCW context, a well-known scientist and representatives of the Campaign to Stop Killer Robots, Human Rights Watch and the International Committee for Robot Arms Control. Various sources of power, as interpreted above, constitute this hybrid force. They include, inter alia, field knowledge and expert authority, the informal or formal (legal) standing and moral or political authority of (I)NGOs, especially the ICRC, and of IOs, particularly the UN and its bodies. What may still be considered the central and unifying source of authority on the part of norm entrepreneurs is ‘moral authority’ (Hall 1997, p.591). The recognized history of security regulation practice, most importantly within the CCW format, also plays a constructive role (Hynek 2018b).
Harnessing this leverage, pro-ban actors have deployed various mechanisms to alter the behaviour of ban-resisting actors. To stigmatize opponents of their agenda and simultaneously put pressure on them, they have named and shamed such countries as Russia, China, Israel, South Korea, the UK and the US (CSKR 2019a, b). Tabooization of AWS is also performed via imaginaries of urgency, consequential and deontological arguments, emotional appeals and speculative fantasies (Carpenter 2016, p.53). To give an example of the latter, frames from The Terminator and Battlestar Galactica have been featured in their media coverage (Carpenter 2016, p.53). Religious norms have also been called upon to assist in stigmatizing AWS (PAX 2014b). To facilitate tabooization, pro-ban actors often emphasize certain attributes while ignoring other relevant traits (Sauer and Reveraert 2018, p.438). Persuasion through normative and peer pressure on multiple fronts is also in their tool-kit. They bring their agenda to various conferences and events organized by governments, (I)NGOs, IOs and groups of individuals with various kinds of expertise (CSKR, n.d.). Open letters initiated by AI/robotics research communities have been broadly circulated for further signatures both locally and globally (FLI 2015; Kerr et al. 2017; Open Letter 2017a, 2017b). Another open letter, specifically targeting the CCW, has been produced by tech companies (FLI 2017). With a majority vote of participants at one of the specialized workshops, experts have issued a statement calling for a ban on AWS (ICRAC, n.d.). Audiences also get increasingly drawn into the process, since documents of this sort often call for their endorsement via signatures (FLI 2015, n.d.). Statistics on favourable public opinion are collected and reported in the form of opinion polls as a complementary mechanism for stimulating support (IPSOS 2019).
In line with its traditional mission of urging governments to adapt IHL to changing circumstances, the ICRC has also called on states to set limits on autonomy in weapon systems (ICRC 2016). It has taken a considered position, having also collaborated with the Campaign in issuing one edition of its International Review of the Red Cross journal (CSKR 2013). Elements of global and regional IOs also play a crucial role in steering national ambitions in the desired direction. The declaration by the OSCEPA and resolutions by the EP have urged their member-states to work towards a legally binding ban on AWS (EP 2018; OSCEPA 2019). Some states, such as Austria, Cuba, Costa Rica, Pakistan and Guatemala, are inclined towards AWS prohibition, thereby generating peer pressure on other states, especially within IOs (CSKR 2019a). Actor-oriented letters and reports have also been used to create or increase normative pressure on both commercial entities (Wareham 2018a) and individual governments (Wareham 2018b; Amoroso et al. 2018). Certain reactive guarantees on the part of both (private) economic actors and states broaden the scope of peer pressure (Google 2018; Maas H 2018; Peters 2019). Meanwhile, Campaigners regularly undertake media outreach to draw public attention (e.g. CSKR 2018a).
However, the pro-ban movement still lacks direct leverage over ban-resisting actors, as the leverage maintained by politically and economically dominant states over all other actors in the equation prevails. Hall (1997, pp.594–597) was correct in pointing out that moral authority operates in balance with other ‘power resources’, mainly state economic and political(-military) capabilities. The legal status of the OSCE in relation to participating states is non-binding (OSCE, n.d.). Despite its increasing prominence, the EP has little influence over member-states in the domain of EU security and defence policy (EP 2019). The EU's political stance is still fragmented, with Austria calling for a ban while France, Germany and the UK remain either sceptical towards the ban or opposed to it (CSKR 2019a). Within the global system of governance, the UNSC stands out as ‘the most structurally unequal body’ (Ruhlman 2015, pp.37–38). Though enjoying the right to intervene in the internal affairs of states with humanitarian services and a broad right of initiative, the ICRC lacks credible enforcement mechanisms, particularly outside of conflict and crisis zones (ICRC 2011). The stratified global arms transfer and production system, which is only emerging in the domain of AWS, usually enables dominant producers and suppliers to manipulate arms transfers and related dependencies for political ends (Kinsella 1998, p.19; Krause 1992, pp.99–126, 206–210). With minor exceptions, strategic partnerships running through the Shanghai Cooperation Organization, the Gulf Cooperation Council, the North-Atlantic Treaty Organization, the OSCE, the EU and beyond also account for certain degrees of dependence and followership, particularly between smaller and greater states (Cooper et al. 1991). As the experience with nuclear weapons has shown, and as is now evident with Killer Robots, non-allied states tend to join humanitarian disarmament or prohibition agendas (Sauer and Reveraert 2018, p.442).
In fact, the Non-Aligned Movement has taken an active role in lobbying for prohibitions and regulations in the issue area of AWS (CSKR 2019a). States may, to various extents, be in control of the commercial sector too. For example, China's national strategy of ‘civil-military fusion’ leaves little freedom for big companies to conduct business independently from the state (Verbruggen 2019, p.340). Another example is Russia. Most of Russia's national corporations and companies that are at the forefront of AI development are, to a considerable extent, under state ownership (President of Russia 2019). Both cases testify to the model of a ‘developmental state’ as understood by political economists (Woo-Cumings 1999).
To warrant and sustain their controversial stance on AWS, ban-resisting actors engage in what Sauer and Reveraert (2018, pp.446–447) call ‘stigma rejection’: they recognize the stigma but detach themselves from it. For example, both Russia and the US highlight the preservation of a meaningful human element in their AI-oriented military projects (TASS 2018; BBC 2019). The US furthermore insists that, even in the event of degraded or lost communications, tele-operated systems should not be capable of autonomously selecting and engaging targets (DoD 2012/2017). Following the same course, the UK has also underlined the significance of human direction for the application of lethal force (Country Statement 2018b). China has even called for a ban on the use of fully autonomous weapons (CSKR 2019a). The language of human ‘involvement’, ‘judgement’, ‘control’ and ‘responsibility’ has generally been agreed upon and formally codified at the CCW (CCW 2018; CCW 2019a). However, the question of norm compliance remains shrouded in ambiguity, chosen by ban-resisting actors as an effective strategy to reconcile normative pressure with a preference for strategic flexibility. During CCW GGE discussions, Russia has repeatedly requested to cut down on unnecessary details in the final report, especially where there has been no consensus or sufficient evidence. The US has in turn expressed concerns about the rigid concept of ‘human control’ (Acheson 2018; Acheson and Pytlak 2019). China’s diplomatic position on AWS can also be characterized by a degree of ‘strategic ambiguity’, evident, for instance, in its recent position papers and its national strategy of ‘civil-military fusion’ (Kania 2018). At the same time, it opposes a ban on the development or production of AWS (CSKR 2019a). The situation is similar with private companies in the US, Russia, China, Israel, Europe and beyond. While explicit about how human control is ensured, some continue working on technologies relevant to AWS.
Quite a few do so even without clear, or at least clearly declared, policies concerning the question of human control (PAX 2019, pp.5–8). On top of that, ban-resisting actors have simultaneously resorted to what might be termed ‘counter-stigmatization’ (Sauer and Reveraert 2018, pp.446–447). Often backed by knowledge-based expertise, they have tried to shift attention to the positive value of AWS (Country Statement 2018a, 2018b, 2019). Robert Work, former US Deputy Secretary of Defense, once argued: ‘AI will make weapons more discriminant and better, less likely to violate the laws of war, less likely to kill civilians, less likely to cause collateral damage’ (Fryer-Biggs 2019). One indication of such expert backing is, for example, that the Berlin Statement was not endorsed unanimously at the experts’ workshop in Berlin (ICRAC, n.d.). Referring to roboticist Ronald C. Arkin, professor George R. Lucas has likewise suggested that autonomous robotic technology might ‘render war itself, and the conduct of armed hostilities, less destructive, risky, and indiscriminate’ (Lucas 2014).
As demonstrated, sheer politico-economic power is not everything: gaps in this realm can to some extent be overcome through other structures of power (discursive tools, normative pressure, moral authority, etc.). Through the latter, norm entrepreneurs are able to generate normative ‘hooks’ in the configuration of power, and such ‘hooks’ are emerging to frame AWS. Manifestations of self-restraint on the part of ban-resisting actors testify to this. However, despite the broad reach and thriving appeal of the pro-ban movement, its discourses and practices cannot curb ban-resisting actors at this moment. Politically and economically dominant states still possess credible levers with which to mobilize opposition to the stigmatization and prohibition of AWS.
7 Institutional power: diplomatic impasse and absence of legal encapsulation
In this part, it is shown how the aforementioned dynamics eventually play out in the production of legal commitments. This section serves to demonstrate that institutional power is largely, yet not exclusively, a product of other power relations. Conceptually, we offer insight into the mediatory power of involved institutions and the nature of indirect interactions among the key stakeholders. Decisional rules, lines of involvement and responsibility, as well as configurations of path-dependence are taken into consideration. In practical terms, the focus is on how pro-ban actors can influence or control other, especially ban-resisting, actors via institutional loci.
The CCW has firmly established itself as a humanitarian law instrument and an international forum for addressing the regulation of weapons, including AWS (UNODA 2014). The practice of successfully adopted protocols on BLW, explosive remnants of war, non-detectable fragments and other devices has set the procedural and path-dependence characteristics of this negotiation format. Such prohibitive practices have a standard-setting function going far beyond a particular treaty (Rappert 2008; Acheson and Fihn 2013). Having broadened their scope to address AWS, CCW discussions were originally held in the format of informal expert meetings but gained a ‘formal status’ by moving to the GGE format (Rosert and Sauer 2020, p.17). Over a hundred nations are state parties to the CCW, including all five permanent members of the UNSC (CSKR 2018b). The UN's CCW framework has brought together its state parties and other states, UN agencies, (regional) IOs, the ICRC and other registered (I)NGOs, including the Campaign to Stop Killer Robots, to address AWS (UNODA 2014; CSKR 2018b). Campaigners acknowledge that meaningful (I)NGO participation in such sessions has been practised for twenty years (CSKR, n.d.). The history of the ICRC and the UN working together is even more deeply rooted (Fortin 2012). To clarify the ICRC's position: it has underlined the importance of human responsibility over decisions to kill, thereby granting diplomatic support to the pro-ban movement within the UN (ICRC 2016). Informal meetings between civil society and diplomats have additionally been arranged under UN auspices (CSKR, n.d.). A close relationship between the UNHRC and the CCW is acknowledged in view of the linkage between IHRL and IHL (Bieri and Dickow 2014, p.3; CCW 2015, p.19). Experts, academics, researchers and analysts have also gained a firm seat in the CCW negotiation format, as recent reports testify (e.g. CCW 2019a).
Scientists, academics and political, military, legal, economic and technical specialists have also appeared among governmental representatives along with ministers, secretaries, ambassadors, counsellors and attachés (e.g. CCW 2019b).
Deeply embedded within this multi-voice institutional composition, the issue of AWS has nonetheless not become institutionalized via a working international security regime. Progress in the CCW has largely been hampered by its tradition of consensus voting and lowest-common-denominator outcomes. Besides that, the GGE's mandate has generally been weak: its principal goal has always been to discuss the matter and report to the CCW. The GGE has never been authorized to arrange formal negotiations for a new CCW protocol, even though it was recently tasked to consider ‘aspects’ of a prospective legal framework on AWS in 2020 and 2021 (Rosert and Sauer 2020, p.17). The CCW format also lacks enforcement mechanisms (Abramson 2017).
As a result, there is no legal instrument to govern the emerging field of AWS (as of 5 July 2020). The current GGE process might be characterized as ‘going slow and aiming low’ (Rosert and Sauer 2020, p.17). In line with the cases of APL and CM (Hynek 2018b), it is mainly dominant powers that have blocked the ban proposal (CSKR 2019a). As R&D and governmental spending accelerate in the field of AWS (Haner and Garcia 2019), little progress and even a degree of regress can be observed in the GGE. The diplomatic language of a few countries, including the US, Russia and Israel, is firmly fixed on the ‘benefits’ of AWS (Acheson 2018, p.1). Some states criticized the GGE process for its lack of ambition and urgency as early as 2017 (Acheson 2017, p.4). The US, Russia and Israel have even tried to remove explicit references to ‘human control’, the central issue in the AWS debate since 2014, from the GGE's latest reports (Acheson 2018, pp.7–8; Acheson and Pytlak 2019, pp.5–6). While the phrase still appears repeatedly in the 2018 report, there are barely any references to it in the 2019 report. On balance, the most recent report introduces a new guiding principle: ‘[h]uman-machine interaction, which may take various forms and be implemented at various stages of the life cycle of a weapon’ (CCW 2018; CCW 2019a). This provision may implicitly create space for the lawful use of AWS: the key would be to ensure that the system effectuates human intent when using force (Footnote 4). Further flexibility stems from the lack of a precise definition of AWS and of a clear understanding of the relevant concepts and characteristics after six years of CCW discussions (CCW 2018; CCW 2019a).
Increasingly frustrated with the CCW format, pro-ban actors have already begun thinking about moving the process to another venue (Bieri and Dickow 2014, p.3). The UNGA or an alternative ad hoc forum seem to be ‘the only legitimate spaces where progress is possible’ (Acheson and Pytlak 2019, p.2). Other successful treaties have demonstrated that UNSC resistance is not necessarily decisive (Johnson, n.d.). For example, the Arms Trade Treaty and the Nuclear Weapon Ban Treaty clearly testify to the utility of the UNGA in building up international regimes. The UNGA might also be an option for AWS, especially now that regular issue-oriented deliberations at the UNGA First Committee on Disarmament and International Security have complemented the CCW procedure (UNGA 2018). Recalling the Ottawa (APL) and Oslo (CM) processes, there is also a real possibility for lesser powers, led by patrons, typically middle powers, to bypass conventional arms control for an ad hoc regime (Hynek 2018b). However, neither of these options is favourable in the case of AWS, at least as of now. The UNGA upholds the principle of equality, and, on top of that, the five permanent member-states of the UNSC maintain the right to veto decisions concerning international peace and security (Ruhlman 2015, pp.37–38). With only China inclined towards, at most, a limited ban, all of the five oppose an all-embracing ban on the use, development and production of AWS (CSKR 2019a). Moreover, the pro-ban movement lacks not only influential state members but state members in general, which makes it much harder to generate any meaningful legal outcomes at the UNGA or an alternative ad hoc forum. Its coalition also lacks active middle powers whose leading role proved significant in previous norm-setting processes: Sweden and France on BLW, Canada on APL, or Norway on CM (Rosert and Sauer 2020, pp.17–18).
To sum up, pro-ban actors remain, for now, institutionally powerless in the face of competing (mainly great powers') interests. Pursuing the process via an alternative ad hoc forum is also highly demanding, especially with little support from diplomatically influential governments. Benvenisti and Downs (2007, pp.595–597) would interpret the case as part of the post-Cold War trend of institutions working along ‘functionalist lines’, making it more convenient for powerful states, and more challenging for weaker ones, to bargain.
8 Discussion and a way forward
The article sheds light on the complex workings of power around the security issue of AWS. We drew attention to the legal backdrop and political advancements that jointly point towards international security regulation of AWS, and found that the legal underbrush applicable to managing AWS remains limited: it principally features the general international laws of war, in particular humanitarian principles, and the piecemeal emerging framework regulating cyberweapons. We then demonstrated how, given these limitations, the issue-oriented prohibitionary agenda has deeply penetrated the political scene, cutting across global, regional and local levels. The opening section concludes that, despite the success of ‘severe politicization’, AWS have not become truly ‘securitized’: the issue has neither moved beyond standard political procedures nor culminated in the adoption of issue-oriented, extraordinary measures. The power-analytical approach is then utilized and refined, with the nexus between ethics and law interpreted, to address the case. It serves to reflect on the dynamics and conditions influencing the prospects of outlawing AWS, as well as the ethical and legal intensity of the emerging regulatory framework. Attempts at establishing a global prohibition regime on AWS and the existing resistance to this are brought into sharp focus, and the main findings are graphically synthesized (Fig. 2).
This analysis begins with the notion of productive power and dissects discourses and counter-discourses related to the stigmatization of AWS. It shows how both deontological and consequentialist arguments are used to frame AWS as immoral, unethical, illegal and inhumane military instruments, and even as a threat to humankind. The imagery of Killer Robots is additionally utilized to build a horrifying and graphic representation of AWS. The standard of civilization is also recalled by pro-ban actors to assist in this framing. Not only does it help them to portray AWS as uncivilized, but political subjects are also differentiated on this basis into the ingroup of civilized actors and the outgroup of barbarians. We eventually argue that a nascent stigma is emerging in association with AWS, albeit one attached to what is, at the moment, more a ‘politicized’ than an actual category of weapons. Building upon broader ethical forces and humanitarian dispositifs, this stigma targets the removal of human control from the use of force. The tradition of legal grafting (progressively embedding ethical codes in more and more weapons treaties) is seen as spilling over to the studied case with an incentivizing and facilitatory effect. However, we conclude that the prohibition discourse has not become a dominant regime of truth, at least as of now. Powerful oppositional narratives exist in no small part due to the lack of a meaningful material basis under the stigmatization of Killer Robots. They feature the positive strategic-military implications of AWS and reveal the flaws of the prohibition discourse. Since ethical forces and humanitarian dispositifs are an important general condition for issue-specific discourses, competing narratives maintain a positive ethical and legal profile of AWS. Ethical artefacts that emerge in such debates form the basis for subsequent legal arguments and, prospectively, new legal principles and norms.
Acceleration and increasing investments in AWS R&D, embedded in the global structure of arms production and transfers, set the whole process in motion.
Marginally touched upon in the previous section, structural power is our next analytical focus. Through its lens, we demonstrate how the structural position of pro-ban actors is disadvantaged in the prevailing systemic distribution of power and associated interests. Still, to emphasize the very high calibre of their composition, the heterarchies of power underpinning the pro-ban movement are depicted. It becomes clear, however, that both pro-ban and ban-resisting actors enjoy state and public support, facilitation by (elements of) IOs, expert blessing and commercial sector endorsement. The dynamics of arms production and trade are found to have been particularly influential in shaping the profile of ban resistance. They have affected political decision-making on the part of weapons-dependent governments, including dominant producers and suppliers as well as arms recipients, and spotlighted the role of the commercial sector. It is highlighted that economically dominant actors largely correspond to politically dominant actors in the UN system of governance (the P5). The key finding of this analysis is that a divided social stance in relation to AWS has not favoured the pro-ban movement. While the latter commands knowledge production on par with ban-resisting actors, it cedes control over the spheres of security, technological know-how, finance and material production to them. The intensity of the gradient shade, which is even for the terrain of knowledge and uneven for the others, graphically nuances this (Fig. 2).
Next, attention is directed towards the workings of compulsory power. Here, we analyse how ban-resisting actors maintain their structural dominance and their freedom to continue investing in AWS, and how ban advocates may steer their behaviour in the desired direction. Having analysed how normative arguments are translated into normative strategies, we found that structurally disadvantaged actors can harness their moral authority and, to some extent, overcome gaps in politico-economic power. The key mechanisms of direct influence utilized by the pro-ban movement include the tabooization of AWS, naming and shaming, as well as persuasion via normative and peer pressure on multiple fronts. We draw a clear connection between their hybrid moral thrust and the emergence of normative ‘hooks’ working to constrain the freedom of AWS R&D. While civil society engagement and increasing public awareness are positive developments, these efforts have so far proved insufficient to change the present course of affairs. Their discursive and practical strategies still concede to resistant great powers' counter-strategies. The leverage the UNSC maintains in the global system of governance is unparalleled among intergovernmental architectures. What reinforces the position of its members is political support by other, typically smaller and less technologically developed, states characterized by certain degrees of politico-economic dependence and followership. The commercial sector may also not have much freedom to manoeuvre, especially where governmental agencies work in close coordination with private companies. The obvious example is China, where there is hardly any room for dissent against the central government. To warrant and sustain their controversial stance on AWS, ban-resisting actors address the emerging stigma as inapplicable to them but often resort to strategic ambiguity to maintain flexibility.
Backed by knowledge-based expertise, they also engage in counter-stigmatization to keep the door open—just slightly—for lawful uses of AWS.
What follows is an inquiry into the mechanisms of institutional power, which reveals a diplomatic impasse at the CCW. We recognize that ban advocates have become deeply embedded in the UN system of governance, including the CCW. Non-state actors—including (I)NGOs, representatives of the Campaign, the ICRC, scientists, academics, as well as political, military, legal, economic and technical specialists—may access and influence institutional processes. What reinforces their institutional position is support by a group of (like-minded) governments, as well as endorsement by certain elements of IOs such as the UNHRC, the UNSG, the EP and the OSCEPA. However, the consensus rule within the CCW format, combined with the P5's implicit power to influence outcomes, renders pro-ban actors virtually powerless in the face of competing, mainly great powers', interests. There are alternative options for pursuing the process: as other weapons treaties have shown, success might be reached at the UNGA or an ad hoc forum. However, the UNGA principle of equality and blocking moves by UNSC veto-bearing members may be detrimental to further progress within the UN. The lack of influential governments and of a sufficient mass of aligned states on the side of the pro-ban movement keeps conditions unfavourable for bypassing conventional arms control fora at this moment. Restrictions adopted despite the opposition of dominant institutional players—largely corresponding to dominant producers of AWS—may still prove ineffective. This is also due to the lack of direct leverage potentially available to influence or control such non-signatories outside of institutional loci. Certain degrees of (informal) norm adherence by ban-resisting actors might further delay progress on legal commitments, and especially a ban, in the issue area of AWS. So might the inability to agree upon a clear definition of AWS for regulatory purposes.
However, any attempt at regulating or banning weapons still hinges upon a prevailing interpretation of strategic interests. Strategic interests and political positions are not immutable; they can evolve, at times bringing about outcomes previously thought inconceivable. An example is the Chemical Weapons Convention, signed, including by the P5, after a long history of chemical weapons use in armed conflicts (World War I, the Vietnam War, the Iran-Iraq War, etc.). So we cannot dismiss the possibility of change in the current state of affairs on AWS, especially as a preemptive ban is just one among several options being examined. Meanwhile, the CCW can arguably continue to postpone any committal decision or give states a mandate to negotiate in earnest some sort of regulation, not necessarily a ban, on AWS. Nor should we rule out at this juncture that, if CCW discussions do not produce effective results in a timely manner and a deadlock is reached by 2021, other forums or solutions may be more seriously considered to remedy the diplomatic impasse (Footnote 5). Some still insist the CCW should be given the resources to perform its originally intended function, and propose to reform its mode of decision-making (Rosert and Sauer 2020, p.20). Though a ban on AWS is unlikely to be enacted or effective (Crootof 2015, pp.1883–1891), an international legally binding instrument remains the most actively pursued option (Acheson 2017, p.2). Such an international ‘regime’ may be complemented by additional protocols, domestic laws, as well as other informal or non-binding mechanisms (Crootof 2015, pp.1897–1903). Alternatively, it may be set in motion by ‘interim’ steps, for instance a political declaration, before legally binding measures are embraced (Acheson 2017, p.2). Since there are many pathways, the dividing lines drawn in this article should not be seen as static.
They are dynamic and contingent on changes in world politics, strategic interests and technological developments, as well as on the ability and willingness to make trade-offs.
Along with its empirical findings, this article considers how different forms of power inter-relate and cross-pollinate, and how they differ in explanatory value. The goal is to properly grasp how the nexus between ethics and law, embedded in a complex socio-political reality, plays out in the production of new legal principles and norms. We showed how everything starts with productive power, in particular with the workings of ethical forces. We also recognized how the workings of productive power might be triggered by motions in structural power and reinforced by legal conditions already embedded in institutional power. With agents portrayed as particularly crucial for the (re)production of ethical and legal arguments, this article captures the complex mutual relationships between productive power and all the other power configurations. It also demonstrates how enduring power asymmetries ingrained in the operation of structural power get translated into episodic manifestations of compulsory power. However, we also illustrate how gaps in structural power might be overcome by means of compulsory power, in particular certain forms of authority translated into strategies of direct influence. The paper culminates in showing how all these forces eventually play out to shape the contours of institutional power and the production of new legal commitments. Still, the proposed trajectory connecting all types of power from ethics to legal principles and norms is not the only formula for weapons treaties. We hypothesize that, if lower strategic importance is originally attached to a weapon and a stronger stigma arises, the transposition from the structure of ethics to the legal regulatory realm may directly follow the trajectory of legal grafting (Fig. 2). An example is the ban on BLW, whose tactical advantage was almost immediately overshadowed by its unequivocal bodily harm.
Since there is still more to be explored, we propose it as an avenue for further research.
Besides providing further insights into complex power interplays and ways of studying ethics and law, this study testifies to the usefulness of the power-analytical approach in general. In a departure from ontological selectiveness and agent-based explanation common in the existing literature, this approach allows for the disaggregation of a socio-political reality into a set of ontological multiplicities. At the same time, we show how the analytical findings can subsequently be synthesized into a coherent interpretation of multiple, co-existing realities and the ways in which they get intertwined and interact. Its practical application is expected to go far beyond the selected case, and the importance of future research tailoring it to other cases in the area of global governance is two-fold. First, this approach might provide valuable insights into other weapons regimes, other humanitarian regimes (e.g. refugee regimes) or security regimes (e.g. counter-terrorism regimes) outside of the realm of arms control, as well as monetary, financial and trade regimes. Second, such investigations might help to further test the practical relevance of this approach and potentially sharpen its contours.
9 Concluding remarks
AWS feature a complex technological solution cultivated in civil-military synergies, and their very definition is mired in terminological hurdles. For these reasons, AWS can hardly be equated with a clearly identifiable weapon or device. The sci-fi imagery of Killer Robots, so beloved by the Campaign, further distances AWS from reality and dulls the sense of urgency. It is also hard to isolate their negative effects for the purposes of regulation, especially as competing narratives praise the military utility and strategic importance of AWS. There have still been enormous leaps forward on the way to stigmatizing the removal of human control from the use of force. Normative ‘hooks’ are emerging to frame autonomy in (lethal) weapon systems, including the current dynamics of R&D, but a taboo on AWS is hardly conceivable. As we demonstrated, ethical forces are contingent on and prone to manifold cognitive and affective manipulation. The case of AWS makes clear that they may, ironically, serve to simultaneously underpin both the prohibition discourse and competing narratives. Despite the relative advances in norm-setting, the chances for the adoption of a ban or severe legal restrictions are miserably low for other reasons as well. The worldwide acceleration of, and increasing investment in, AWS R&D not only fuels normative debates but also mobilizes ban resistance. Efforts to curtail such projects via conventional arms control fora are likely to be hampered by the unfavourable institutional standing of their advocates. The conditions for establishing an ad hoc regime are unfavourable and render it ineffective in advance, as long as the world's largest producers of AWS remain unbound by its provisions. The ever-increasing ubiquity of AI in all fields of knowledge and civil application will undoubtedly create a major verification challenge. All this reveals a still significant gap between the ethical and legal intensity of the emerging regulatory framework.
In fact, AWS do not fall outside of legal commitments altogether, as the international laws of war and a fragmented set of rules concerning cyberweapons are in place. However, these are disjointed and questionable in terms of their applicability to AWS. Be that as it may, it is difficult and premature to draw a definite conclusion. There remains room for major military powers to reinterpret their strategic interests, for trade-offs leading to some sort of regulatory, possibly still prohibitory, treaty, and for interim measures.
The authors wish to thank one of the anonymous reviewers for highlighting the significance of this discussion.
The authors wish to thank one of the anonymous reviewers for pointing this out.
The authors wish to thank one of the anonymous reviewers for making this clear.
The authors wish to thank one of the anonymous reviewers for drawing our attention to this aspect.
The authors wish to thank one of the anonymous reviewers for clarifying these issues.
Abramson J (2017) Convention on Certain Conventional Weapons at a Glance. Arms Control Association, September. https://www.armscontrol.org/factsheets/CCW.
Acheson R (ed.) (2017) CCW Report 5(6). Reaching Critical Will, New York. https://www.reachingcriticalwill.org/images/documents/Disarmament-fora/ccw/2017/gge/reports/CCWR5.6.pdf.
Acheson R (ed.) (2018) CCW Report 6(11). Reaching Critical Will, New York. https://reachingcriticalwill.org/images/documents/Disarmament-fora/ccw/2018/gge/reports/CCWR6.11.pdf.
Acheson R, Fihn B (2013) Preventing Collapse: The NPT and a Ban on Nuclear Weapons. Reaching Critical Will, New York. https://www.reachingcriticalwill.org/images/documents/Publications/npt-ban.pdf.
Acheson R, Pytlak A (eds.) (2019) CCW Report 7(6). Reaching Critical Will, New York. https://reachingcriticalwill.org/images/documents/Disarmament-fora/ccw/2019/gge/reports/CCWR7.6.pdf.
Altmann J (2009) Preventive Arms Control for Uninhabited Military Vehicles. In: Capurro R, Nagenborg M (eds) Ethics and Robotics. IOS Press.
Altmann J, Sauer F (2017) Autonomous Weapon Systems and Strategic Stability. Survival 59(5):117–142
Amoroso D, Sauer F, Sharkey N, Suchman L, Tamburrini G (2018) Autonomy in Weapon Systems: The Military Application of Artificial Intelligence as a Litmus Test for Germany’s New Foreign and Security Policy. Heinrich Böll Stiftung Publication Series on Democracy 49. ARNOLD Group, Großbeeren.
Anderson K, Waxman M (2012) Law and Ethics for Robot Soldiers. Policy Review 176:35–49
Anderson K, Waxman M (2013) Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can. American University Washington College of Law Research Paper 2013–11; Columbia Public Law Research Paper 13–351.
Arkin R (2010) The Case for Ethical Autonomy in Unmanned Systems. Journal of Military Ethics 9(4):332–341
Arkin R (2017) A Roboticist’s Perspective on Lethal Autonomous Weapon Systems. UNODA Occasional Papers (30) Perspectives on Lethal Autonomous Weapon Systems. United Nations Publication, New York, pp 35–48
Arkin R (2018) Lethal Autonomous Systems and the Plight of the Non-combatant. In: Kiggins R (ed) The Political Economy of Robots. Palgrave Macmillan, Switzerland, pp 317–326
Asaro P (2008) How Just could a Robot War Be? In: Briggle A, Waelbers K, Brey P (eds) Current Issues in Computing And Philosophy. IOS Press, Amsterdam, pp 50–64
Asaro P (2012) On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making. International Review of the Red Cross 94(886):687–709
Barnett MN, Duvall R (2005) Power in International Politics. International Organization 59(1):39–75
Basu R (2004) The United Nations: Structure and Functions of an International Organisation. Sterling Publishers Private Limited, New Delhi
BBC (2019) US Seeks to Allay Fears Over Killer Robots. March 11. https://www.bbc.com/news/technology-47524768.
Benvenisti E, Downs GW (2007) The Empire’s New Clothes: Political Economy and the Fragmentation of International Law. Stanford Law Review 60:595–631
Bieri M, Dickow M (2014) Lethal Autonomous Weapons Systems: Future Challenges. In: Christian Nünlist (ed) CSS Analyses in Security Policy (164). Center for Security Studies, Zurich.
Birnbacher D (2016) Are Autonomous Weapons Systems a Threat to Human Dignity? In: Bhuta N, et al. (eds) Autonomous Weapons Systems: Law, Ethics, Policy. Cambridge University Press, Cambridge, pp 105–121
Bode I, Huelss H (2018) Autonomous Weapons Systems and Changing Norms in International Relations. Review of International Studies 44(3):393–413
Bourne M (2012) Guns Don't Kill People, Cyborgs Do: A Latourian Provocation for Transformatory Arms Control and Disarmament. Global Change, Peace and Security 24(1):141–163
Bourne M (2018) Powers of the Gun Process and Possibility in Global Small Arms Control. International Politics 55(3–4):441–461
Bourne M (2019) Powers of the Gun: Process and Possibility in Global Small Arms Control. In: Hynek N, Ditrych O, Stritecky V (eds) Regulating Global Security: Insights from Conventional and Unconventional Regimes. Palgrave Macmillan, Cham, pp 143–168
Brehm M (2017) Defending the Boundary: Constraints and Requirements on the Use of Autonomous Weapons Systems under International Humanitarian and Human Rights Law. Academy Briefing (9). Geneva Academy, Geneva.
Carpenter C (2013) How Do Americans Feel About Fully Autonomous Weapons? Duck of Minerva, June 10. https://duckofminerva.com/2013/06/how-do-americans-feel-about-fully-autonomous-weapons.html.
Carpenter C (2016) Rethinking the Political/-Science-/Fiction Nexus: Global Policy Making and the Campaign to Stop Killer Robots. American Political Science Association 14(1):53–69
CCW [The Convention on Certain Conventional Weapons] (2015) Report of the 2015 Informal Meeting of Experts on Lethal Autonomous Weapons Systems. CCW/MSP/2015/3. https://undocs.org/pdf?symbol=en/ccw/msp/2015/3.
CCW (2018) Report of the 2018 Session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems. CCW/GGE.1/2018/3. https://undocs.org/en/CCW/GGE.1/2018/3.
CCW (2019a) Report of the 2019 Session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems. CCW/GGE.1/2019/3. https://undocs.org/en/CCW/GGE.1/2019/3.
CCW (2019b) Provisional List of Participants Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems. CCW/GGE.1/2019/MISC.1. https://undocs.org/ccw/gge.1/2019/misc.1.
Chrétien J (1995) Speech by Prime Minister Jean Chrétien to the National Forum on Canada's International Relations. Toronto, September 11.
CoE [Council of Europe] (2001) Convention on Cybercrime. Signed November 23, in Budapest, Hungary. https://www.europarl.europa.eu/meetdocs/2014_2019/documents/libe/dv/7_conv_budapest_/7_conv_budapest_en.pdf.
Coeckelbergh M (2011) From Killer Machines to Doctrines and Swarms, or Why Ethics of Military Robotics Is not (Necessarily) About Robots. Philosophy and Technology, Special Issue
Conn A (2018) AI Companies, Researchers, Engineers, Scientists, Entrepreneurs, and Others Sign Pledge Promising Not to Develop Lethal Autonomous Weapons. Future of Life Institute, July 18. https://futureoflife.org/2018/07/18/ai-companies-researchers-engineers-scientists-entrepreneurs-and-others-sign-pledge-promising-not-to-develop-lethal-autonomous-weapons/.
Cooper A, Higgott R, Nossal K (1991) Bound to Follow? Leadership and Followership in the Gulf Conflict. Political Science Quarterly 106(3):391–410
Country Statement (2018a) Humanitarian Benefits of Emerging Technologies in the Area of Lethal Autonomous Weapon Systems. United States of America, CCW/GGE.1/2018/WP.4. https://www.unog.ch/80256EDD006B8954/(httpAssets)/7C177AE5BC10B588C125825F004B06BE/$file/CCW_GGE.1_2018_WP.4.pdf.
Country Statement (2018b) Human Machine Touchpoints: The United Kingdom’s Perspective on Human Control Over Weapon Development and Targeting Cycles. United Kingdom, CCW/GGE.2/2018/WP.1. https://www.unog.ch/80256EDD006B8954/(httpAssets)/050CF806D90934F5C12582E5002EB800/%24file/2018_GGE+LAWS_August_Working+Paper_UK.pdf.
Country Statement (2019) Потенциальные возможности и ограничения военного применения смертоносных автономных систем вооружений [Potential Opportunities and Limitations of Military Uses of Lethal Autonomous Weapons Systems]. Russian Federation, CCW/GGE.1/2019/WP.1. https://www.unog.ch/80256EDD006B8954/(httpAssets)/B7C992A51A9FC8BFC12583BB00637BB9/$file/CCW.GGE.1.2019.WP.1_R+E.pdf.
Cross MKD (2013) Rethinking Epistemic Communities Twenty Years Later. Review of International Studies 39:137–160
Crootof R (2015) The Killer Robots are Here: Legal and Policy Implications. Cardozo Law Rev 36:1837–1915
CSKR [Campaign to Stop Killer Robots] (2013) ICRC on New Technologies and Warfare. August 6. https://www.stopkillerrobots.org/2013/08/icrc-on-new-technologies-and-warfare/.
CSKR (2016) Fully Autonomous Weapons and the CCW Review Conference. Side Event Briefing, October 18. https://www.stopkillerrobots.org/wp-content/uploads/2013/03/KRC_UNGAFlyer_18Oct2016–1.pdf.
CSKR (2018a) Convention on Conventional Weapons Group of Governmental Experts Meeting on Lethal Autonomous Weapons Systems and Meeting of High Contracting Parties United Nations Geneva November 2017. Report on Activities, February 26. https://www.stopkillerrobots.org/wp-content/uploads/2018/02/CCW_Report_Nov2017_posted.pdf.
CSKR (2018b) Five Years of Campaigning, CCW Continues. March 18. https://www.stopkillerrobots.org/2018/03/fiveyears/.
CSKR (2019a) Country Views on Killer Robots. August 21. https://www.stopkillerrobots.org/wp-content/uploads/2019/08/KRC_CountryViews21Aug2019.pdf.
CSKR (2019b) Minority of States Delay Effort to Ban Killer Robots. March 29. https://www.stopkillerrobots.org/2019/03/minority-of-states-delay-effort-to-ban-killer-robots/.
CSKR (n.d.) A Growing Global Coalition. Accessed October 1, 2019. https://www.stopkillerrobots.org/about/.
Dehuri S, Cho SB, Ghosh S (2011) Swarm Intelligence and Neural Networks. In: Cho SB et al. (eds) Integration of Swarm Intelligence and Artificial Neural Network. World Scientific, Singapore.
Diakogiannis A (2019) The Future of Manufacturing Technology. Forbes, August 6. https://www.forbes.com/sites/columbiabusinessschool/2019/08/06/the-future-of-manufacturing-technology/#400b25e8774c.
Din AM (ed) (1987) Arms and Artificial Intelligence: Weapon and Arms Control Applications of Advanced Computing. Oxford University Press, Oxford
DoD [Department of Defense] (2012/2017) United States Department of Defense Directive 3000.09. Adopted November 21, 2012; Modified May 8, 2017. https://fas.org/irp/doddir/dod/d3000_09.pdf.
Dunlap C (2014) The Hyper-Personalization of War: Cyber, Big Data, and the Changing Face of Conflict. Georgetown Journal of International Affairs 15:108–118
EC [European Commission] (2018) Coordinated Plan on Artificial Intelligence. COM(2018) 795 final. Announced December 7. https://ec.europa.eu/digital-single-market/en/news/coordinated-plan-artificial-intelligence.
EEAS (2019) EU Statement to the Group of Governmental Experts on Lethal Autonomous Weapons Systems Convention on Certain Conventional Weapons Geneva, 20–21 August 2019. https://eeas.europa.eu/headquarters/headquarters-homepage/66584/group-governmental-experts-lethal-autonomous-weapons-systems-convention-certain-conventional_en.
Emmers R (2013) Securitization. In: Collins A (ed) Contemporary Security Studies, 3rd edn. Oxford University Press, Oxford, pp 131–145
EP [European Parliament] (2018) European Parliament Resolution on Autonomous Weapon Systems. 2018/2752(RSP). https://www.europarl.europa.eu/doceo/document/TA-8–2018–0341_EN.pdf?redirect.
EP (2019) Common Security and Defence Policy. Fact Sheets on the European Union. https://www.europarl.europa.eu/ftu/pdf/en/FTU_5.1.2.pdf.
FLI [Future of Life Institute] (2015) Autonomous Weapons: An Open Letter from AI and Robotics Researchers. July 28. https://futureoflife.org/open-letter-autonomous-weapons/.
FLI (2017) Open Letter to the United Nations Convention on Certain Conventional Weapons. August 21. https://futureoflife.org/autonomous-weapons-open-letter-2017/.
FLI (n.d.) Lethal Autonomous Weapons Pledge. Accessed October 1, 2019. https://futureoflife.org/lethal-autonomous-weapons-%2520pledge/.
Flora P (ed) (1986) Growth to Limits: The Western European Welfare States Since World War II. Vol. 1: Sweden, Norway, Finland, Denmark. Walter de Gruyter, Berlin
Forsythe DP (2005) The Humanitarians: The International Committee of the Red Cross. Cambridge University Press, Cambridge
Fortin K (2012) Complementarity Between the ICRC and the United Nations and International Humanitarian Law and International Human Rights Law, 1948–1968. International Review of the Red Cross 94(888):1433–1454
Foucault M (1980) Power/Knowledge: Selected Interviews and Other Writings 1972–1977. Harvester Press, Brighton
Fryer-Biggs Z (2019) Coming Soon to the Battlefield: Robots that Can Kill. September 3. The Center for Public Integrity. https://publicintegrity.org/national-security/future-of-warfare/scary-fast/ai-warfare/.
Garcia D (2015) Killer Robots: Why the US should Lead the Ban. Global Policy 6(1):57–63
Garcia D (2018) Lethal Artificial Intelligence and Change: The Future of International Peace and Security. International Studies Review 20(2):334–341
Gibbs S (2018) Google’s AI is Being Used by US Military Drone Programme. The Guardian, March 7. https://www.theguardian.com/technology/2018/mar/07/google-ai-us-department-of-defense-military-drone-project-maven-tensorflow.
Gidron B, Kramer RM, Salamon LM (1992) Government and the Third Sector in Comparative Perspective: Allies or Adversaries? In: Gidron B, Kramer RM, Salamon LM (eds) Government and the Third Sector: Emerging Relationship in Welfare States. Jossey-Bass Publishers, San Francisco, pp 1–30
Google (2018) AI at Google: Our Principles. June 7. https://blog.google/technology/ai/ai-principles/.
Gubrud M (2014) Stopping Killer Robots. Bulletin of the Atomic Scientists 70(1):32–42
Guterres A (2018) Remarks at ‘Web Summit’ by United Nations Secretary-General. Lisbon, November 5. https://www.un.org/sg/en/content/sg/speeches/2018–11–05/remarks-web-summit.
Hall RB (1997) Moral Authority as a Power Resource. International Organization 51(4):591–622
Haner J, Garcia D (2019) The Artificial Intelligence Arms Race: Trends and World Leaders in Autonomous Weapons Development. Global Policy 10(3):331–337
Heyns C (2013) Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions. A/HRC/23/47. https://www.ohchr.org/Documents/HRBodies/HRCouncil/RegularSession/Session23/A-HRC-23–47_en.pdf.
Heyns C (2016) Autonomous Weapons Systems: Living a Dignified Life and Dying a Dignified Death. In: Bhuta N, et al. (eds) Autonomous Weapons Systems: Law, Ethics, Policy. Cambridge University Press, Cambridge, pp 3–20
Horowitz MC (2016a) Why Words Matter: The Real World Consequences of Defining Autonomous Weapons Systems. Temple International and Comparative Law Journal 30(1):85–98
Horowitz MC (2016b) Public Opinion and the Politics of the Killer Robots Debate. Research and Politics (January–March):1–8
HRW [Human Rights Watch] (2012) Losing Humanity: The Case Against Killer Robots. November 19. https://www.hrw.org/sites/default/files/reports/arms1112ForUpload_0_0.pdf.
HRW (2016) Making the Case: The Dangers of Killer Robots and the Need for a Preemptive Ban. December 9. https://www.hrw.org/sites/default/files/report_pdf/arms1216_web.pdf.
HRW (2018) Heed the Call: A Moral and Legal Imperative to Ban Killer Robots. August 21. https://www.hrw.org/report/2018/08/21/heed-call/moral-and-legal-imperative-ban-killer-robots.
HRW and IHRC [International Human Rights Clinic] (2015) Precedent for Preemption: The Ban on Blinding Lasers as a Model for a Killer Robots Prohibition. Memorandum to Convention on Conventional Weapons Delegates. November. https://www.hrw.org/sites/default/files/supporting_resources/robots_and_lasers_final.pdf.
Hynek N (2017) Regime Theory as IR Theory: Reflection on Three Waves of ‘Isms’. Central European Journal of International and Security Studies 11(1):11–30
Hynek N (2018a) Theorizing International Security Regimes: A Power-Analytical Approach. International Politics 55(3–4):352–368
Hynek N (2018b) Re-visioning Morality and Progress in the Security Domain: Insights from Humanitarian Prohibition Politics. International Politics 55(3–4):421–440
ICRAC [International Committee for Robot Arms Control] (n.d.) Statements: Mission Statement, Berlin Statement. Accessed October 15, 2019. https://icrac.net/statements/.
ICRC (2011) Building Respect for the Law. May 1. https://www.icrc.org/en/doc/what-we-do/building-respect-ihl/overview-building-respect-ihl.htm.
ICRC (2016) Autonomous Weapons: Decisions to Kill and Destroy are a Human Responsibility. April 11. https://www.icrc.org/en/document/statement-icrc-lethal-autonomous-weapons-systems.
ICRC (2018b) Ethical Principles Guiding the ICRC's Partnerships with the Private Sector. March 1. https://www.icrc.org/en/document/ethical-principles-guiding-icrc-partnerships-private-sector.
IPSOS [Institut de Publique Sondage d'Opinion Secteur] (2019) Six in Ten (61%) Respondents Across 26 Countries Oppose the Use of Lethal Autonomous Weapons Systems. January 22. https://www.ipsos.com/sites/default/files/ct/news/documents/2019–01/human-rights-watch-autonomous-weapons-pr-01–22–2019_0.pdf.
Jervis R (1982) Security Regimes. International Organization 36(2):357–378
Johnson AM, Axinn S (2013) The Morality of Autonomous Robots. Journal of Military Ethics 12(2):129–141
Johnson R (n.d.) The United Nations and Disarmament Treaties. UN Chronicle. Accessed November 11, 2019. https://www.un.org/en/chronicle/article/united-nations-and-disarmament-treaties.
Kania EB (2018) China’s Strategic Ambiguity and Shifting Approach to Lethal Autonomous Weapons Systems. Lawfare. April 17. https://www.lawfareblog.com/chinas-strategic-ambiguity-and-shifting-approach-lethal-autonomous-weapons-systems.
Karppi T, Böhlen M, Granata Y (2016) Killer Robots as Cultural Techniques. International Journal of Cultural Studies 21(2):107–123
Kastan B (2013) Autonomous Weapons Systems: A Coming Legal ‘Singularity’? University of Illinois Journal of Law, Technology and Policy 1:45–82
Keck ME, Sikkink K (1998) Activists beyond Borders: Advocacy Networks in International Politics. Cornell University Press, Ithaca
Kerr I, Bengio Y, Hinton G, Sutton R, Precup D (2017) Open Letter to the Prime Minister of Canada by Canadian AI Research Community. Announced November 2. https://techlaw.uottawa.ca/bankillerai.
Kinsella D (1998) Arms Transfer Dependence and Foreign Policy Conflict. Journal of Peace Research 35(1):7–23
Klincewicz M (2015) Autonomous Weapons Systems, the Frame Problem and Computer Security. Journal of Military Ethics 14(2):162–176
Korać ST (2018) Depersonalisation of Killing: Towards a 21st Century Use of Force 'Beyond Good and Evil?'. Philosophy and Society 29(1):49–64
Koppelman B (2019) How Would Future Autonomous Weapon Systems Challenge Current Governance Norms? The RUSI Journal 164(5–6):98–109
Krause K (1992) Arms and the State: Patterns of Military Production and Trade. Cambridge University Press, Cambridge
Krause K (2002) Multilateral Diplomacy, Norm Building, and UN Conferences: The Case of Small Arms and Light Weapons. Global Governance 8(2):247–263
Krishnan A (2009) Killer Robots: Legality and Ethicality of Autonomous Weapons. Ashgate Publishing, Farnham
Latour B (1999) Pandora’s Hope: Essays on the Reality of Science Studies. Harvard University Press, Cambridge, MA
Lin P, Bekey G, Abney K (2008) Autonomous Military Robotics: Risk, Ethics, and Design. California Polytechnic State University, San Luis Obispo
Liu HY (2012) Categorization and Legality of Autonomous and Remote Weapons Systems. International Review of the Red Cross 94(886):627–652
Lucas GR (2014) Automated Warfare. Stanford Law and Policy Review 25:317–339
Maas H (2018) Speech by Foreign Minister of Germany at the General Debate of the 73rd General Assembly of the United Nations. September 28. https://new-york-un.diplo.de/un-de/20180928-maas-general-assembly/2142290.
Maas M (2018) Two Lessons from Nuclear Arms Control for the Responsible Governance of Military Artificial Intelligence. In: Coeckelbergh M, Loh J, Funk M (eds) Envisioning Robots in Society – Power, Politics, and Public Space. IOS Press, Amsterdam, pp 347–356
Marks P (2008) Anti-Landmine Campaigners Turn Sights on War Robots. New Scientist, March 28. https://www.newscientist.com/article/dn13550-anti-landmine-campaigners-turn-sights-on-war-robots/.
Mathews RJ (2001) The 1980 Convention on Certain Conventional Weapons: A Useful Framework Despite Earlier Disappointments. International Review of the Red Cross 83(844):991–1012
Mathur R (2011) Humanitarian Practices of Arms Control and Disarmament. Contemporary Security Policy 32(1):176–192
Mathur R (2017) Red Cross Interventions in Weapons Control. Lexington Books, Lanham
McFarland T (2015) Factors Shaping the Legal Implications of Increasingly Autonomous Military Systems. International Review of the Red Cross 97(900):1313–1339
Noone GP, Noone DC (2015) Debate Over Autonomous Weapons Systems. Case Western Reserve Journal of International Law 47(1):25–35
NWI [Nobel Women’s Initiative] (2014) Nobel Peace Laureates Call for Preemptive Ban on Autonomous Weapons. May 12. https://nobelwomensinitiative.org/nobel-peace-laureates-call-for-preemptive-ban-on-killer-robots/?ref=204.
O’Connell M (2014) 21st Century Arms Control Challenges: Drones, Cyber Weapons, Killer Robots, and WMDs. Washington University Global Studies Law Review 13(3):515–533
Open Letter (2017a) Belgian Scientists Letter on Autonomous Weapons by Belgian AI and Robotics Research Community. Announced December. https://docs.google.com/document/u/1/d/e/2PACX-1vQU8W-mpdjBqLHlA4Xgbe1BhKI4scm2UyQg3cPpylpjnOVF81OmPSE7QmzaXNDfqBeLGrNFS4ozRL8-/pub.
Open Letter (2017b) Open Letter to the Prime Minister of Australia by Australian AI Research Community. Announced November 2. https://www.dropbox.com/sh/ujslcvq7224c1gw/AADADLoJV_NCbwcOsfI9n6wba?dl=0&preview=7+Nov+AI+Letter.pdf.
OSCE [Organization for Security and Co-operation in Europe] (n.d.) Who We Are. Accessed October 20, 2019. https://www.osce.org/whatistheosce.
OSCEPA [Organization for Security and Co-operation in Europe Parliamentary Assembly] (2019) Luxembourg Declaration. Adopted July 4–8, 2019 in Luxembourg. https://www.oscepa.org/documents/annual-sessions/2019-luxembourg/3882-luxembourg-declaration-eng/file.
PAX [PAX for Peace] (2014a) Interfaith Declaration. February 1. https://www.paxforpeace.nl/stay-informed/news/interfaith-declaration.
PAX (2014b) Religious Leaders Call for a Ban on Killer Robots. November 12. https://www.paxforpeace.nl/stay-informed/news/religious-leaders-call-for-a-ban-on-killer-robots.
PAX (2019) Slippery Slope: The Arms Industry and Increasingly Autonomous Weapons. November. https://www.paxforpeace.nl/publications/all-publications/slippery-slope.
Payne K (2018) Strategy, Evolution, and War: Apes to Artificial Intelligence. Georgetown University Press, Washington, DC
Peters W (2019) Letter to Mary Wareham, Coordinator of Campaign to Stop Killer Robots. May 1. https://www.stopkillerrobots.org/wp-content/uploads/2019/05/NZ-Peters-Response.pdf.
President of Russia [Presidential Executive Office] (2019) Excerpts from Transcript of Meeting on the Development of Artificial Intelligence Technologies. May 30. https://en.kremlin.ru/events/president/news/60630.
Price R (1995) A Genealogy of the Chemical Weapons Taboo. International Organization 49(1):73–103
Price R (1998) Reversing the Gun Sights: Transnational Civil Society Targets Land Mines. International Organization 52(3):613–644
Rappert B (2008) A Convention Beyond the Convention: Stigma, Humanitarian Standards and the Oslo Process. Land-mine Action, London. https://brianrappert.net/images/downloads/publications/Rappert2008-A_convention_beyond.pdf.
Rosert E, Sauer F (2019) Prohibiting Autonomous Weapons: Put Human Dignity First. Global Policy 10(3):370–375
Rosert E, Sauer F (2020) How (Not) to Stop the Killer Robots: A Comparative Analysis of Humanitarian Disarmament Campaign Strategies. Contemporary Security Policy. https://doi.org/10.1080/13523260.2020.1771508
Ruhlman MA (2015) Who Participates in Global Governance? States, Bureaucracies, and NGOs in the United Nations. Routledge, New York
Salter MB (2008) Securitization and Desecuritization: A Dramaturgical Analysis of the Canadian Air Transport Security Authority. Journal of International Relations and Development 11:321–349
Sauer F (2016) Stopping ‘Killer Robots’: Why Now Is the Time to Ban Autonomous Weapons Systems. Arms Control Association, October. https://www.armscontrol.org/act/2016–09/features/stopping-‘killer-robots’-why-now-time-ban-autonomous-weapons-systems.
Sauer T, Reveraert M (2018) The Potential Stigmatizing Effect of the Treaty on the Prohibition of Nuclear Weapons. The Nonproliferation Review 25(5–6):437–455
Schmitt MN (2013) Autonomous Weapon Systems and International Humanitarian Law: A Reply to the Critics. Harvard National Security Journal Features
Schneiker A (2016) Humanitarian NGOs, (In)Security and Identity: Epistemic Communities and Security Governance. Routledge, New York
Schulzke M (2017) The Morality of Drone Warfare and the Politics of Regulation: New Security Challenges. Palgrave Macmillan, London
Sharkey NE (2010) Saying ‘No!’ to Lethal Autonomous Targeting. Journal of Military Ethics 9(4):369–383
Sharkey NE (2012) The Evitability of Autonomous Robot Warfare. International Review of the Red Cross 94(886):787–799
Sharkey NE (2017) Why Robots Should not be Delegated with the Decision to Kill. Connection Science 29(2):177–186
Singer PW (2009) Wired for War: The Robotics Revolution and Conflict in the Twenty-first Century. Penguin Books, New York
Smith R, Udis B (2003) New Challenges to Arms Export Control. In: Levine P, Smith R (eds) The Arms Trade, Security and Conflict. Routledge, London, pp 94–110
Sparrow R (2007) Killer Robots. Journal of Applied Philosophy 24(1):62–77
Stavrianakis A (2010) Taking Aim at the Arms Trade: NGOs, Global Civil Society and the World Military Order. Zed Books, New York
Stevens T (2019) Global Code: Power and the Weak Regulation of Cyberweapons. In: Hynek N, Stritecky V, Ditrych O (eds) Regulating Global Security: Insights from Conventional and Unconventional Regimes. Palgrave Macmillan, Cham, pp 271–296
Strange S (1994) States and Markets, 2nd edn. Pinter, London
Tannenwald N (2016) Normative Strategies for Disarmament. In: Hynek N, Smetana M (eds) Global Nuclear Disarmament: Strategic, Political, and Regional Perspectives. Routledge, New York, pp 107–121
TASS [Russian News Agency] (2018) Russia’s Okhotnik Attack Drone to Become Prototype of Sixth Generation Fighter. July 20. https://tass.com/defense/1014154.
Turner J (2019) Robot Rules: Regulating Artificial Intelligence. Palgrave Macmillan, London
Um JS (2019) Drones as Cyber-Physical Systems: Concepts and Applications for the Fourth Industrial Revolution. Springer, Singapore
UNGA [United Nations General Assembly] (2018) First Committee Weighs Potential Risks of New Technologies as Members Exchange Views on How to Control Lethal Autonomous Weapons, Cyberattacks. GA/DIS/3611. https://www.un.org/press/en/2018/gadis3611.doc.htm.
UNODA [United Nations Office for Disarmament Affairs] (2014) Convention on Certain Conventional Weapons. UNODA, Geneva. https://unoda-web.s3-accelerate.amazonaws.com/wp-content/uploads/assets/publications/more/ccw/ccw-booklet.pdf.
Verbruggen M (2019) The Role of Civilian Innovation in the Development of Lethal Autonomous Weapon Systems. Global Policy 10(3):338–342
Vultee F (2011) Securitization as a Media Frame: What Happens When the Media ‘Speak Security’. In: Balzacq T (ed) Understanding Securitisation Theory: How Security Problems Emerge and Dissolve. Routledge, New York, pp 77–93
WA [Wassenaar Arrangement] (2018) Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies. Compiled December. https://www.wassenaar.org/app/uploads/2019/consolidated/WA-DOC-18-PUB-001-Public-Docs-Vol-II-2018-List-of-DU-Goods-and-Technologies-and-Munitions-List-Dec-18.pdf.
Wagner M (2014) The Dehumanization of International Humanitarian Law: Legal, Ethical, and Political Implications of Autonomous Weapon Systems. Vanderbilt Journal of Transnational Law 47(5):1371–1424
Wakabayashi D, Shane S (2018) Google Will Not Renew Pentagon Contract That Upset Employees. The New York Times, June 1. https://www.nytimes.com/2018/06/01/technology/google-pentagon-project-maven.html.
Walker J (2019) Machine Learning in Manufacturing – Present and Future Use-Cases. Emerj, October 23. https://emerj.com/ai-sector-overviews/machine-learning-in-manufacturing/.
Wallach W, Allen C (2013) Framing Robot Arms Control. Ethics and Information Technology 15:125–135
Walsh JI (2015) Political Accountability and Autonomous Weapons. Research and Politics 2(4):1–6
Wareham M (2018a) Letter to S. Brin, President of Alphabet Inc. and S. Pichai, Chief Executive Officer of Google. March 13. https://www.stopkillerrobots.org/wp-content/uploads/2018/04/KRC_LtrGoogle_12March2018.pdf.
Wareham M (2018b) Letter to Florence Parly, Minister for the Armed Forces of France. April 3. https://www.stopkillerrobots.org/wp-content/uploads/2018/04/KRC_LtrFrance_3April2018.pdf.
Woo-Cumings M (ed) (1999) The Developmental State. Cornell University Press, Ithaca
Zawieska K (2017) An Ethical Perspective on Autonomous Weapon Systems. UNODA Occasional Papers (30) Perspectives on Lethal Autonomous Weapon Systems. United Nations Publication, New York, pp 49–56
This work was supported by Charles University in Prague under Grant UNCE/HUM/037 ‘Human-Machine Nexus and Its Implications for International Order’. We are especially grateful to the journal editor and anonymous reviewers for their valuable comments and suggestions for improving the paper. Any remaining omissions and errors are our own.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Hynek, N., Solovyeva, A. Operations of power in autonomous weapon systems: ethical conditions and socio-political prospects. AI & Soc 36, 79–99 (2021). https://doi.org/10.1007/s00146-020-01048-1
- Artificial intelligence
- Autonomous weapon systems
- Campaign to Stop Killer Robots
- International security regimes
- Power analysis