Introduction

The development of so-called autonomous weapon systems (AWS) has been the subject of intense discussion for years. Numerous political, academic and legal institutions and actors are debating the consequences and risks that may arise with these technologies, in particular their ethical, social and political implications, and many have called for strict regulation or even a global ban [1,2,3].

In these public debates, the attribute “lethal” is sometimes added to the term AWS, underlining the potential severity of the consequences this technology entails. Surprisingly, and despite the urgent need to deal with “Lethal Autonomous Weapon Systems” (LAWS), it is often unclear which technologies the term (L)AWS primarily refers to. The associated definitions describe a range of phenomena, from landmines to combat drones, from close-in weapon systems (CIWS) to humanoid robot soldiers or purely virtual cyber weapons. Besides this terminological ambiguity, it is inherently unclear in what sense or to what degree these systems can be characterised as “autonomous” at all. Even though the development of automatic or semi-autonomous capabilities is generally advancing, fully autonomous weapons that are completely beyond human control—which is the reason why they are feared by many—largely represent a conceptual possibility at present rather than an actual military reality (“Technical definitions of autonomy and autonomous weapon systems” section).

While the current debate around the possibility and functionality of AWS is certainly not a novel phenomenon but one that has also been highly influenced by fictional works of the past [4], it has regained prominence in recent decades with technological advancements in artificial intelligence (AI), especially with the accelerating data-processing capabilities of machine learning (ML). Civil society initiatives [5, 6], scientists [7, 8] and political bodies have raised political concerns about emerging “intelligent” and “autonomous” weapon systems with lethal capabilities that go beyond human control. As much as the debate has been guided by the agendas of different stakeholders pursuing (de-)regulation, the discourse around AWS has developed alongside other genres such as doomsday stories in journalism, Hollywood cinema or science-fiction literature, which exploit the idea of looming “killer robots”. Besides promoting a certain idea of what AWS are and what they are capable of, these genres also intensify the political debate by adding a high degree of urgency.

As will be argued, the conflicting interpretations of AWS are largely the result of diverse meanings that are constructed in political discourses. These discourses convert a specific understanding of AI into strategic assets and, as a political consequence, hinder the establishment of common international ethical standards and legal regulation. Hence, the perspective we present not only reveals AWS to be powerful signifiers of political culture but also shows how they are instruments employed to foster political legitimacy or to spark deliberate confusion and deterrence between rival states.

In particular, this article looks at the publicly available military AI strategies and position papers of China and the USA and, informed by the concept of sociotechnical imaginaries [9, 10], analyses how this technology is politicised to serve particular national roles and interests. The ways these two nations showcase their AI-driven military prowess send out unmistakable messages about national dominance and a desired geopolitical order. The ways in which nation states portray themselves as part of a global AI race, competing over economic, military and political advantages, become obvious. This especially holds true for China and the USA, since they are regarded, and regard themselves, not only as international hegemons but also as antagonists, promoting competing self-conceptions that are apparent in their histories, political doctrines and identities.

In turn, the analytical focus on these hegemonic powers will inform European debates on AWS, which are far from representing one unified stance. Identifying the similarities and differences between China and the USA makes it possible to recognise prototypical patterns, which at the same time puts the multitude of different AWS positions among European nations into a larger global perspective. The analysis explicitly focuses on military strategy documents in an effort to complete the picture of national AI aspirations and more general public discourses. Specifically, this subdomain of AWS imaginaries was chosen because it brings to the fore the deliberate meanings voiced by military actors in order to utilise them as part of political communication.

The article first dissects the current academic debate regarding a definition of AWS that would be sufficiently unambiguous for regulatory or military contexts; key issues in this debate have been concepts such as “autonomy”, “degree of human control” or a “functional understanding of AWS” (“The challenges of defining autonomous weapon systems” section). It is the meaning of these AWS-related concepts that, among other dimensions, constitutes the reference point in the geopolitical arena between the USA and China. They not only provide information about technical details but can be utilised to fulfil specific functions in asserting national interests. In order to be able to approach and analyse AWS from this realpolitik perspective, we introduce the concept of the “sociotechnical imaginary” (SI) as the theoretical frame (“Approaching autonomous weapons embedded in sociotechnical imaginaries” section). The “Methodology” section follows, where we showcase the empirical material, consisting of position papers taken from the debate at the United Nations (UN) Convention on Certain Conventional Weapons (CCW) and standpoint papers published by the executive ministries of both nations. The analysis sections portray AWS as geopolitical signifiers and approach the strategies as a form of political communication that is pursued as part of military AI imaginaries (“Military doctrines, autonomous weapons and AI imaginaries” section). AWS are a central element of the goals both nations pursue in the realm of geopolitical communication. Differing definitions and normative understandings of AWS are deliberately employed to serve national interests and, consequently, make it more difficult to reach a UN regulatory consensus (“Technological definitions and normative understandings of AWS” section).

The challenges of defining autonomous weapon systems

The different approaches to defining AWS constitute an arena of competing interpretations of what the technology is capable of and, above all, which reference points to consider in order to regulate specific capabilities. While the current debates on autonomous weapon systems mainly focus on regulatory questions, military simulation games or political and tactical scenarios, the power of interpretation over what AWS are and what capabilities they comprise remains contested. These questions are neither simply a problem of engineering nor of a purely conceptual nature; they also borrow from the realm of fiction. It is essential to acknowledge that the prerogative of shaping the meaning of the technology creates both semantic and political dominance—and states take advantage of this opportunity.

In order to arrive at a comprehensible understanding, three different approaches can be roughly distinguished: The first focuses on the attribute “autonomous”, which evokes a wide array of traditional associations with the concept of autonomy; the second approach takes into account different degrees of human control over the automated processes and in doing so addresses questions of human/machine interaction. While it is obvious that both definitional approaches are directly interwoven—in a complementary fashion even, since the more autonomous the machines are, the less human control can be exercised—they still refer to distinct conceptual meanings and traditions. The third and most recent strategy promotes a primarily functional understanding of AWS that focuses on actual capabilities and seeks to transcend essentialist definitions that are more concerned with the innate conceptual qualities of the technology.

Technical definitions of autonomy and autonomous weapon systems

One possible way of defining the concept of autonomy is to look at it as a technically determining and distinguishing feature; indeed, this already seems self-evident from the attribute “autonomous” alone. In this sense, an “autonomous” weapon system is one that, “based on conclusions derived from gathered information and preprogrammed constraints, is capable of independently selecting and engaging targets” [11]. While automated systems are merely “triggered”, autonomous systems, in this understanding, can independently “select” and “engage” different targets based on case-specific information.

The concept of autonomy is widely used in philosophy, psychology, human cognition and other disciplines and carries (often contested and contradictory) meanings that range from anthropocentric understandings to political contexts or aesthetics [12,13,14]. It has become a quite commonplace term in AI discourses, where it commonly evokes clear associations with characteristics such as independence, intelligence, self-governance, the ability to learn and adapt (e.g. orientation in unknown, unstructured and dynamic environments) or the execution of self-determined decisions. Its ubiquitous use, however, which also shapes non-expert debates on AI, has contributed to the erosion of its semantic qualities.

Even when one narrows down the concept to a more specific technical sense, ambiguities persist. Bradshaw et al. emphasise that there are two different understandings of autonomy in the context of machines: “In the first sense, it denotes self-sufficiency—the capability of an entity to take care of itself. The second sense refers to the quality of self-directedness or freedom from outside control. [...] It should be evident that independence from outside control does not entail the self-sufficiency of an autonomous machine. Nor do a machine’s autonomous capabilities guarantee that it will be allowed to operate in a self-directed manner. In fact, human-machine systems involve a dynamic balance of self-sufficiency and self-directedness”. At the same time, since no entity can be seen as completely independent of its environment, the term autonomous system would in a strict sense even count as a “misnomer” [15].

Furthermore, the different interpretations of machine autonomy in the context of AWS are usually embedded in either optimistic or dystopian discourses, which in turn firmly shape the understandings of autonomy as well, in particular the sense of “what autonomous machines can and cannot do” [16]. It is exactly this interpretative openness that makes AWS an important reference point in the politico-strategic interactions of rivalling states, which are continuously struggling for a clear definition. A consensus on what can be regarded as an autonomous weapon is seen as a first step towards the legally binding regulation of these technologies.

The discussions on these semantic issues are held at the regular (annual or biannual) meetings that take place between participating state parties on the protocols of the CCW, which was adopted in 1980 (cf. “Methodology” section) [17]. Politically, the terminological ambivalence and polysemy open the door for disagreement at the CCW on how to define “autonomy” (cf. “Technological definitions and normative understandings of AWS” section). This, as a direct consequence, has also led to the failure to regulate autonomous weapons [18]. Paradoxically, even a common terminology can make the discourse on AWS more complicated, “when the terms involved lack consistent interpretations”. The often metaphorical use of “autonomy” and its ambiguities create uncertainty when military robots are treated as black boxes. Only when the human decision-making processes in the design, production and programming of autonomous machines are understood can questions of agency and responsibility be intelligibly discussed [19].

This is why solely looking for ways to define AWS in terms of the concept of autonomy cannot be sufficient, as the label “autonomous” evokes a whole spectrum of meanings that nonetheless does not present us with finite categorical distinctions. Even the more precise term of so-called technical autonomy refers to a continuum, a point that becomes obvious from the necessity of employing auxiliary vocabulary such as “semi-autonomous”. In short, the term “autonomous” alone—even when defined technologically and hence relatively unequivocally as the “capabilities” of AWS—is not enough to grasp its complexity, since the weapon must necessarily also be understood in the ways it presents itself in manifold contexts.

Definitions focusing on the degree of human control over supposedly autonomous systems

Another approach to defining AWS involves determining the degree of human control over a weapon system that remains unaffected despite a higher degree of automation. In particular, it was the notion of in, on and out of the loop—emphatically used not in the sense of an inherent technical property, but in relation to human agency—that gained prominence in the debate. “In-the-loop” refers to control directly executed by humans (an action must be initiated), “on-the-loop” refers to systems whose actions can be prevented or aborted by human intervention, and, finally, “out-of-the-loop” is the term commonly used for systems that no longer require human control but whose processes are, most of the time, nonetheless still monitored by human agents.
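To make this taxonomy concrete, the three modes can be thought of as different authorisation gates placed between a system’s proposed action and its execution. The following minimal Python sketch is purely illustrative—`ControlMode` and `may_engage` are hypothetical names invented here for exposition, not part of any real weapon architecture or of the documents analysed below—and captures only where human agency enters each loop.

```python
from enum import Enum, auto

class ControlMode(Enum):
    IN_THE_LOOP = auto()      # a human must actively initiate the action
    ON_THE_LOOP = auto()      # the system acts unless a human vetoes in time
    OUT_OF_THE_LOOP = auto()  # the system acts; humans at most monitor

def may_engage(mode: ControlMode, approved: bool, vetoed: bool) -> bool:
    """Decide whether a proposed action proceeds under the given mode."""
    if mode is ControlMode.IN_THE_LOOP:
        # Default is inaction: nothing happens without explicit approval.
        return approved
    if mode is ControlMode.ON_THE_LOOP:
        # Default is action: the human can only interrupt or abort it.
        return not vetoed
    # OUT_OF_THE_LOOP: no human input is required at this point.
    return True
```

The sketch makes the underlying asymmetry visible: “in the loop” defaults to inaction, “on the loop” defaults to action, and “out of the loop” removes the gate altogether—which is precisely why this taxonomy classifies possibilities of intervention rather than any property of the system itself.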

According to this approach, weapon systems are to be called autonomous if they reduce the possibility of human intervention to a minimum, up to the point where they no longer require or even allow human control at all. It reflects a relational understanding of autonomous weapons in terms of the possibility of human intervention and agency and hence can be seen as part of a broader model conceptualising human/machine relationships.

In practice though, the focus on a relational understanding of agency and automation still comes with terminological challenges. One of these challenges refers to the vague distinction between automation and autonomy. As Sauer notes: “After all, automatic systems, targeting humans at borders or automatically firing back at the source of incoming munitions, already raise questions relevant to the autonomy debate” [20]. Similarly, defining the degree of human control as a continuum is at best a measurement metric, as the complex interactions cannot always be clearly attributed to either the human or the machine [21]. Further complicating this approach, the distinction says little about the “autonomy” of the system itself, but at best classifies the possibilities for curtailing it [11]. In other words, even a weapon system that could be called autonomous in a technical sense (cf. “Technical definitions of autonomy and autonomous weapon systems” section) can easily fall short of these expectations and functional properties if it is deliberately limited and curtailed in a context that is controlled by humans (see “Technological definitions and normative understandings of AWS” section for a detailed analysis of the terminology used in US national strategy papers regarding AWS). The question remains whether it makes sense to regard such a system as “autonomous” and, indeed, whether the attribute conveys a useful meaning at all. As Ekelhof comments, “any consensus among states, academia, NGOs, and other commentators involved in diplomatic efforts under the auspices of the CCW ... seems to be grounded in the idea that all weapons should be subject to “meaningful human control” (or a similar standard). This intuitively appealing concept immediately gained traction, although at a familiar legal-political cost: nobody knows what the concept actually means in practice” [22] (see also “Technological definition: United States of America” section).

Functional approaches to what “autonomous weapon systems” can and cannot do

The terminological vagueness partly explains more recent endeavours to find a functional definition of AWS. As we will see, however, these task-specific approaches rearrange and combine the above-discussed conceptual and relational understandings and engender their own problems, even though they try to break these understandings down into actual functionalities in practical settings.

The most common route to a functional understanding of autonomous weapons at present is a task-based focus on “selecting” and “engaging” a target, which reframes the above definitions but puts stronger emphasis on what these functions comprise and entail in specific practical settings. The US Department of Defense (DoD) has defined an AWS as a “weapon system that, once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation” [US.PosP1] (see “AWS as geopolitical signifiers: strategies in political communication in China and the USA” section for a detailed analysis). This approach is gaining traction and political acceptance. The International Committee of the Red Cross (ICRC) defines AWS as “any weapon system with autonomy in the critical functions of target selection and target engagement”—that is, a weapon system that can select (i.e. detect and identify) and attack (i.e. use force against, neutralise, damage or destroy) targets without human intervention [23]. Commentators have emphasised that the “adoption of the ICRC’s definition—or one like it—” was “strongly advisable”, paired with a call for a “concerted response by the international community” to the continued development of these kinds of weapons [24].
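Read functionally, these definitions locate autonomy not in the system as a whole but in two specific steps of a generic targeting sequence. The minimal sketch below—again with hypothetical names and deliberately trivial logic, bearing no relation to any actual system—illustrates what “autonomy in the critical functions” amounts to: once activated, selection and engagement run without any further human input.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A sensed object, standing in for whatever a sensor reports."""
    position: tuple[float, float]
    label: str  # e.g. "clutter" or "object-of-interest"

def select(tracks: list[Track]) -> list[Track]:
    # "Selection" in the ICRC sense (detect and identify), reduced here
    # to a trivial filter standing in for any classification process.
    return [t for t in tracks if t.label == "object-of-interest"]

def engage(target: Track) -> None:
    # "Engagement" (use force against a target), reduced to a log line.
    print(f"engaging track at {target.position}")

def run_after_activation(tracks: list[Track]) -> None:
    # The definitional point: after activation, both critical functions
    # execute with no human decision anywhere in this call chain.
    for target in select(tracks):
        engage(target)
```

Note that nothing in this sketch is “intelligent”; the filter could just as well be a decades-old radar threshold. This is exactly the problem discussed below (“Technological definition: United States of America” section): a definition anchored solely in these two automated functions cannot distinguish rudimentary automation from the advanced systems the debate is actually concerned about.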

Ekelhof notes that the “main focus within this definition lies on the so-called critical functions of target selection and attack and the absence or lack of human intervention in relation to the system’s autonomy” [25]. Both target selection (sometimes meaning the mere distinction between combatants and non-combatants, sometimes referring to larger planning processes) and attack (raising the questions of what constitutes an individual attack or when exactly it starts and ends), in the end, bear their own ambiguities, albeit in a less obvious manner [26].

Even efforts to define AWS by focusing on specific tasks fail to establish a common ground that would clearly distinguish them from previous weapon systems while at the same time meeting the expectation of unambiguously pinpointing their functionality. Both “autonomy” and “meaningful human control” are volatile signifiers. The same, however, applies to automated tasks that are interpreted as constitutive of autonomous weapons, since these tasks are embedded in military practices, infrastructures and concrete situations that eventually determine the effects and degrees of autonomy. In other words, the contexts produce the conditions under which the agency of an autonomous weapon is determined.

Hopes that a functional, task-oriented definition of AWS (specifically singling out target selection and engagement) would neatly solve the ambiguity problem are bound to be disappointed. Even the more precise terminology is subject to political discourses, in which different actors deliberately utilise diverging meanings, interpretations and definitions to pursue particular political and geostrategic interests. This picture is complicated even further by voices from outside the political realm, which claim that the current AWS technologies are not sophisticated enough to reasonably draw conclusions regarding their practical, legal or ethical consequences [27].

Both the conceptual and the task-centric approaches lead into a semantic recursion, as in all cases—irrespective of the level of theoretical abstraction—the necessity to agree on a static meaning of the terms cannot be met. One important issue usually neglected in these debates is the challenge of translating these terms back and forth between languages that are situated in vastly differing terminological and conceptual traditions (Bächle TC, Champion SC: Autonomous weapon systems. Journalistic discourses in China, forthcoming). These cultural differences manifest themselves in larger imaginaries, promoting specific expectations, hopes and fears around new technologies. They are promoted by fictional texts but also by public discourses. For AWS, the attribute “lethal” is a case in point here. With the addition of the L in LAWS, the term comes to emphasise that these technologies are in line with expectations associated with so-called killer robots, evoking specific cultural images. These images foreground the potential harm that is associated with autonomous weapons outside of human control, extending to fears of the looming destruction of all humanity. The following section addresses the role of larger sociotechnical imaginaries that shape and determine the ways AWS become meaningful technologies.

Approaching autonomous weapons embedded in sociotechnical imaginaries

Continuously re-semanticising AWS or bluntly denying the mere possibility of a reasonable discourse on them and their effects are two strategies used to drag out the efforts to find effective regulation. At the same time, AWS are only one of the many fields that shape the AI race between state actors and are rhetorically embedded in larger sociotechnical imaginations that are actively politicised. This becomes especially apparent when we look at the two self-proclaimed superpowers, China and the USA, both of which are striving for global dominance. In both instances, the national discourses around AWS act as signifiers that reveal projections of social, cultural and institutional imaginations. Arguably, these discourses not only function as meaningful narratives but also as effective instruments of geopolitical power (e.g. with the intention of deterrence) to enforce specific interests grounded in realpolitik.

The contradictory and contested meanings that are associated with and at the same time constitutive of AWS are embedded in larger narrative structures that in this article are regarded as an expression of vivid “sociotechnical imaginaries” [10]. In a well-known and influential understanding, Jasanoff defines sociotechnical imaginaries as “collectively held, institutionally stabilised, and publicly performed visions of desirable futures, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology” [28]. In the continuation of this definition, the “desired futures” are juxtaposed with the “shared fears of harms that might be incurred through invention and innovation”; these imaginings between utopia and dystopia perfectly align with the discursive positions guiding the debates on AWS.

A vast body of research in the wake of Jasanoff’s initial coining of the concept has shown that imaginaries powerfully set boundaries to our futures, “shaping terrains of choices, and thereby actions” [29]. The diversification in approaches and research objects associated with the concept shows that SIs must always be understood as an open, contested and dynamic field influenced by a multitude of discursive arenas and players [10, 29, 30]. For example, AWS imaginaries are often influenced by popular culture, fiction or images used in journalism and inspired by more general assumptions about AI (Bächle TC, Bareis J, Ernst C (eds): The realities of autonomous weapons, forthcoming). The utopian and dystopian frames of reference for AI portray it as a kind of superintelligence with the potential to exceed (human) biology and unleash beneficial effects [31] (e.g. see the Chinese employment of “evolution” in “Technological definitions and normative understandings of AWS” section in the context of AWS), while the rise of technological agency poses grave ethical challenges [32]. AI can be seen as “a key sociotechnical institution of the twenty-first century” with state actors playing a pivotal role in shaping the images in which it is portrayed [33]. AI is strongly associated with specific meanings—and myths—about technological futures [34].

Sociotechnical imaginaries (SIs) mediate between the contested realms of fact and fiction and “allow actors to move beyond inherited thought patterns and categories and into an as if-world different from the present reality” [35]. This also applies to AWS and the foregrounding of science-fiction-inspired technologies such as robots, which are promoted on the basis that they will play a vital part in future warfare [36, 37]. Today’s “military-entertainment complex” [38] is increasingly blurring the lines between the realities of war and its representation in popular culture (such as war games, which include tactics or threat scenarios). Drones, for example, have become emblematic of a specific type of warfare that has become mediated, remote, networked, decentred and de-personalised. The particular “aesthetics” of drone images is represented in the arts, literature and film, and in this form, they also enter the public discourse, reifying a particular visual aesthetics of war [39]. This is a continuation of a type of consumable war that is televised, providing live images to the home viewer [40], a type of mediated war whose most recent iterations focus on cyberwars or the “weaponisation of social media” [41].

Paradoxically, it is exactly in this context of uncertainty—in which reality, imagination, possibility and fiction are conflated—that AWS become highly momentous, in particular when political or military decision-making comes to be based on potential or virtual scenarios [42, 43]. The debates around autonomous weapons usually focus on their legal, political or ethical ramifications, and the foundation of these works is (at least in part) also based on those potential or virtual scenarios [44]. This ethical problematisation contributes to constructing, disseminating and maintaining a specific understanding of “(lethal) autonomous weapons” in popular culture, politics, journalism or research [45, 46]. Ethical debates are a major arena for imagining AWS, controversially situated between positions arguing that warfare could even become more “humane” (by more effectively adhering to international law and respecting human rights) when the actual acts of war are left to machines [3, 5], and the voices of AI and robotics researchers warning of dire consequences [7].

When AWS are approached as part of the AI imaginations that are deliberately promoted by nation states, it becomes obvious how countries actively portray themselves as part of a global technology race, competing over economic, military and geopolitical advantages. These AWS meanings are part of larger narratives of national identity, interwoven with specific ideologies, ideas of military self-assurance and pride, which in turn are utilised with the communicative goal of deterring political adversaries.

Comparing the USA and China in this regard is particularly fruitful and demonstrative, as they not only locate themselves in the geopolitical arena as rivals with their own interests, but also fundamentally oppose each other in their self-portrayal. This spans from guiding principles in state doctrine, political systems or general canons of values to the origin myths of these nations, representing competing self-conceptions that are apparent in their diverging histories and political identities.

Schematically, the USA’s hunger for greatness, exceptionalism and aspiration to take the role of a global hegemon contrasts with China’s confidently proclaimed ideal of a harmonised and stable society. AI is in both cases regarded as a means to realise these socio-political ideals, with supremacy achieved by technological prowess being a shared theme for both. The conceptual ambiguity of autonomous weapon systems makes their representation and interpretations a flexible tool in political communication. AWS can be seen as a proxy for the respective understanding of the world by China and the USA, a form of national self-assurance through technology.

Methodology

In this paper, we focus on the AWS strategies of China and the USA. Obviously, this selection of countries is not exhaustive, but as discussed above, it captures the overtly competing, even antagonistic stances rooted in the ideological, institutional and historical narratives of the two nations. These differences become particularly apparent in the military guidelines for reaching their respective ambitions. Both China and the USA position themselves as global leaders that articulate their geopolitical interests in the AI race, be it in the form of “hard” or “soft” power. Notwithstanding the position of these two states, the striving for military advantage and the global regulation of AWS involve many other nations, especially Russia, Israel, South Korea, the UK, Australia, Germany and France. These countries also harbour companies that are leaders in robotic military innovation, and their governments actively engage in or are confronted with geopolitical tensions and conflicts.

As discussed above (“Approaching autonomous weapons embedded in sociotechnical imaginaries” section), sociotechnical imaginaries encompass broad concepts such as social order and nationhood. For this reason, the empirical material we refer to in the analysis necessarily reflects only a fraction of the multitude of cultural texts that fuel particular meanings of AWS. In this context, our objective is to focus specifically on those imaginations around AWS promoted in state military contexts, and hence we draw on two main discursive arenas: Firstly, the negotiation process at the CCW represents the international regulatory forum of the UN, with talks taking place in Geneva since April 2013 [47]. Here, the USA and China have issued multiple position papers via the Group of Governmental Experts (GGE) on LAWS regarding the ongoing negotiations. They state their positions on definitional issues, the role of technical features and human intervention with a view to reaching a final, unanimously agreed UN protocol. The negotiations are still ongoing in 2022 and have been characterised by tedious definitional struggles and gridlocks in the past. In a joint effort, Germany and France have proposed to conclude the CCW negotiations with a legally non-binding declaration [48], trying to mediate between two groups of countries that either strictly oppose a ban or call for effective and binding regulation [49]. With the recommendation of the 2019 GGE on LAWS, eleven guiding principles were adopted by the 2019 Meeting of the High Contracting Parties to the CCW. In 2021–2022, the CCW is aiming to convert these voluntary principles into a “normative and operational framework” [50], but given that CCW decision-making requires consensus, it is estimated that “the probability of this forum producing a framework with unanimous agreement is very low” [51].

Secondly, we refer to position papers, directives, guidelines or decrees addressing AWS published by ministries, executives, senior officials or party assemblies of both nations that are publicly accessible. National standpoints on tech policy are not limited to one condensed official document or even one type of medium alone. Documents that receive the status of a strategy paper vary in medium and form of presentation, being themselves subject to differing political cultures. Clearly, China and the USA have different institutional traditions in announcing political agendas, due to opposing governmental systems and doctrines, e.g. CCP party rule in China vs. executive presidency in the USA. Further, these tech policy documents are not set in stone but are subject to substantive updates, adjustments or even radical dismissals and reorientations in light of new states of affairs in global politics, changes of ruling governments or the implementation of new doctrines. In sum, the empirical body (Table 1) comprises all relevant CCW standpoint papers of the USA and China that have been published since the start of the negotiations in 2013 and incorporates governmental documents addressing AWS (or, synonymously, the military use of AI) since the year 2011, when the USA, as the first government, published a comprehensive DoD directive on autonomy in weapon systems (introduced in “Functional approaches to what “autonomous weapon systems” can and cannot do” section).

Table 1 Overview of CCW standpoint papers and governmental documents concerning LAWS published by the USA and China, 2011–2022

As a typology, the position papers offer various levels of analysis. First and foremost, the documents stemming from these two discursive arenas provide technical and definitional details on LAWS, showing many similarities to the academic debate (“The challenges of defining autonomous weapon systems” section). But beyond that, these position papers contain additional modes and layers of political communication. On the one hand, they act as self-assurances in the assessment of the current national security situation in the world and of each nation’s own position in it. On the other hand, these documents can be instrumentalised to serve realpolitik interests. They set orientation points and geopolitical goals, identify threats and forge counter-strategies. Both countries are well aware of the signalling power of these documents for past, existing or emerging partners and adversaries. Further, apparently technical documents can offer strategic opportunities to escape definite LAWS regulation, or they can be used to deliberately provide a breeding ground for ongoing confusion in agreeing upon the regulatory object (see also “Technological definitions and normative understandings of AWS” section below).

AWS as geopolitical signifiers: strategies in political communication in China and the USA

China and the USA employ different strategies to put their AI-driven military dominance on display. Matter-of-fact tech policies and national strategies alternate with messages of national superiority. This section focuses on this particular realm of political communication and employs a comparative analysis of both countries, dissecting how LAWS are employed as AI imaginaries and geopolitical signifiers of national particularities. It analyses them in terms of the military doctrines and AI imaginaries they promote (“Military doctrines, autonomous weapons and AI imaginaries” section) and the definitions of autonomous weapons they establish (“Technological definitions and normative understandings of AWS” section), both of which cater to certain goals in political communication.

Military doctrines, autonomous weapons and AI imaginaries

Foreign geopolitics is embedded in military doctrines, which serve as signalling landmarks for military forces, the reallocation of strategic resources and technological developments. The empirical material at hand offers layers of analysis hinting at national SIs that place AWS in broader frameworks. These frameworks inform the populace, allies and adversaries about national aspirations, while presenting military self-assurance as a tool for looking into a nationally desired future (see “Approaching autonomous weapons embedded in sociotechnical imaginaries” section). Here, AWS act as an empty and hence flexible signifier, a proxy onto which different national idealisations of social life, statehood and geopolitical order are projected.

Military doctrine: The United States of America

In January 2015, the Pentagon published its Third Offset Strategy [US.PosP2]. Here, the current capabilities and operational readiness of the US armed forces are evaluated in order to defend the position of the USA as a hegemon in a multipolar world order. The claimed military “technological overmatch” [ibid.], on which the USA’s clout and pioneering role since the Second World War is based, is perceived as eroding. The Pentagon warns in a worrisome tone: “our perceived inability to achieve a power projection over-match (...) clearly undermine [sic], we think, our ability to deter potential adversaries. And we simply cannot allow that to happen” [ibid.].

The more recently published “Department of Defense Artificial Intelligence Strategy” [US.PosP5] specifies this concern with AI as a reference point. Specific claims are already made in the subtitle of the paper: “Harnessing AI to Advance Our Security and Prosperity”. AI should act as “smart software” [US.PosP5, p 5] within autonomous physical systems and take over tasks that normally require human intelligence. US research policy in particular targets spending on autonomy in weapon systems, which is regarded as the most promising area for advancements in attack and defence capabilities, enabling new trajectories in operational areas and tactical options. This is substantiated with reference to current advancements in ML: “ML is a rapidly growing field within AI that has massive potential to advance unmanned systems in a variety of areas, including C2 [command and control], navigation, perception (sensor intelligence and sensor fusion), obstacle detection and avoidance, swarm behavior and tactics, and human interaction”.

Given that such ML processes depend on large amounts of training data, the DoD announced its Data Strategy [US.PosP11], framed within a claim of geopolitical superiority: “As DoD shifts to managing its data as a critical part of its overall mission, it gains distinct, strategic advantages over competitors and adversaries alike” (p 8). In the same vein, and under the perceived threat of being outrivalled, the “DoD Digital Modernization Strategy” [US.PosP7] lets any potential adversaries know: “Innovation is a key element of future readiness. It is essential to preserving and expanding the US military competitive advantage in the face of near-peer competition and asymmetric threats” [US.PosP7, p 14]. Here, autonomous systems act as a promise of technological salvation, which is supposed to secure the geopolitical needs of the USA.

With regard to LAWS, the US Congress made clear: “Contrary to a number of news reports, U.S. policy does not prohibit the development or employment of LAWS. Although the USA does not currently have LAWS in its inventory, some senior military and defense leaders have stated that the USA may be compelled to develop LAWS in the future if potential US adversaries choose to do so” [US.PosP12, p 1].

Remarkably, the USA republished the very same Congress paper in November 2021, with just one minor but decisive alteration: it changed “potential U.S. adversaries” into “U.S. competitors” [US.PosP14]. While it remains unmentioned (and presumably deliberately so) who is meant by both the “senior military and defense leaders” and the so-named “U.S. competitors”, this minor change hints at a subtle but carefully orchestrated strategic tightening of rhetoric, sending out the message that the USA acknowledges a worsening of the geopolitical situation with regard to AWS development. In reaction, the USA continues to weaken its own standards for operator control over AWS in the most recent 2022 Congress paper (as of May 2022), reframing human judgement: “Human judgement [sic!] over the use of force does not require manual human “control” of the weapon system, as is often reported, but instead requires broader human involvement in decisions about how, when, where and why the weapon will be employed” [US.PosP16]. Certainly, this rhetorical “broadening” of the US position lowers the threshold for employing AWS in combat, ever further distancing the operator from the machine.

This stands in stark contrast to the US position in earlier rounds of the CCW process; there, the USA not only claims that advancements in military AI are a geopolitical necessity but also portrays LAWS as desirable from a civilian standpoint, identifying humanitarian benefits: “The potential for these technologies to save lives in armed conflict warrants close consideration” [US.CCW3, p 1]. The USA lists prospective benefits in reducing civilian casualties, such as increasing commanders’ awareness of civilians and civilian objects, striking military objectives more accurately and with less risk of collateral damage, or providing greater standoff distance from enemy formations [US.CCW3]. Bluntly, the USA tries to portray LAWS as being not only in accordance with but beneficial to International Humanitarian Law and its principles of proportionality, distinction and indiscriminate effect (see also “Technological definition: United States of America” section). While such assertions are highly debatable and have been rejected by many [1, 5, 7, 8], they do shed a very positive light on military technological progress, equating it with humanitarian progress.

In a Congress paper on AWS published in December 2021, these humanitarian benefits are mentioned once more, but only very briefly, while a sharpening of the rhetoric is clearly noticeable. The paper also summarises the CCW positions of Russia and China, implicitly clarifying who is meant by “U.S. competitors” (see above). China, even though only indirectly, stands accused when the paper invokes that “some analysts have argued that China is maintaining “strategic ambiguity” about its position on LAWS” [US.PosP15, p 2]. This is the first time the USA overtly expresses in a position paper that it understands the AWS negotiations as a political power play rather than as serving the aim of finding a unanimously agreed regulatory framework.

In sum, the USA claims a prerogative as the dominant and legitimate geopolitical player in a multipolar world order, one that is under external threat. The ability to defend military supremacy against lurking rivals is portrayed as directly dependent on the level of technological development of the armed forces, exemplified by LAWS. The US claim to hegemonic leadership, it follows, can only be secured by maintaining technological superiority.

Military doctrine: China

The doctrinal situation in China is more complex and ambivalent. In 2003, the Chinese Communist Party (CCP) and the People’s Liberation Army (PLA) announced the concept of the “Three Warfares”, a military guideline for enforcing Chinese geopolitical interests that has been systematically embedded in the PLA’s military doctrine in recent years [52]. This concept promotes the objective of framing key strategic arenas of foreign policy in one’s favour, so that kinetic (physical military) interventions appear irrational to opponents. This framing, also known as “information warfare” [53], insinuates that international conflicts are decided less by the armies carrying off the victory than by the media narratives that have the upper hand in interpreting the events.

The concept of the “Three Warfares”, which has been discussed by numerous authors [52,53,54,55,56], encompasses the following dimensions: so-called psychological warfare aims to influence or disrupt an opponent’s ability to make decisions. This includes practices that deter, shock or demoralise competitors. Media warfare, on the other hand, aims at influencing and manipulating national and international public opinion in order to generate support for China’s military interventions. This entails constant and insistent media exposure, which aims to influence the perception and attitudes of the domestic or enemy population. The third dimension is the legal one (“lawfare”): creative distortions and omissions, conceptual vagueness and loopholes in regulations and international legal conventions serve the purpose of expanding one’s own operational possibilities while simultaneously thwarting opponents in their scope of action. This instrumentalisation of the legal framework should be understood as a means of “rule by law not rule of law” [54].

The strategic orientation of the “Three Warfares” also reflects a concession to the current military and geopolitical supremacy of the USA. While the USA claims its global leadership with rhetorical boldness, China sketches the military SI of an “underdog”, focussing on tactics of asymmetric warfare. This enables it to avoid direct military confrontation on all fronts and to deploy a policy of “shashoujian” (杀手锏), best translated as a “trump-card” approach [57,58,59]. Instead of competing with the USA in all strategic arenas, this doctrine targets a selective approach, fostering military technology that “the enemy is most fearful of”, including the call that “this is what we should be developing” [60].

However, in recent strategy papers, China has presented itself more confidently. As with the USA, AI now plays a crucial role as a “cutting-edge” technology in China’s foreign policy aspirations [61,62,63,64,65].

The AlphaGo win over professional Go player Lee Sedol in 2016, which received a lot of media attention in China (280 million live viewers), was dubbed by some authors a Chinese “Sputnik moment” [66, 67]—a wake-up call that may well have contributed to the massive increase in spending on the tech industry and research. Certainly, with the 2017 “New Generation Artificial Intelligence Development Plan”, the CCP also embraces these bold AI ambitions rhetorically by emphasising the need to “grasp firmly the strategic initiative of international competition during the new stage of artificial intelligence development [and] create new competitive advantage” [CH.PosP4, p 2]. The CCP decisively calls for a technological superiority “to build China’s first-mover advantage in the development of AI” [CH.PosP4, p 1].

Such new confidence and ambitions are paired with a multilateralist appeasement and peacekeeping positioning [CH.PosP9]. China claims full sovereignty and strict non-interference in questions of national interest and security. This relates to, among other things, the one-China unification principle (e.g. directed at Taiwan: “China must be and will be reunited”) or territorial claims (e.g. “safeguard China’s maritime rights and interests”). Beyond this sphere of national interest, the CCP pictures the military SI of a global hegemon without expansive aggressions (“Never Seeking Hegemony, Expansion or Spheres of Influence”). Sources of instability are located elsewhere, namely in local “separatism” and foreign aspirations, with “order [...] undermined by growing hegemonism, power politics, unilateralism and constant regional conflicts and wars”. At the same time, the USA is blamed directly for posing a threat to “global strategic stability” [CH.PosP9].

In sum, China’s military SI depicts a global player that has caught up with its rivals at the military level. The CCP adjusts its doctrines and strategies pragmatically, from an underdog position to that of an assertive hegemon, clearly articulating geopolitical claims and the means to realise them. Military doctrines are, as in the USA, clearly linked to modernist narratives of technological progress, incorporating intelligent weaponry such as AWS as a means to outrival competitors. The technological race for supremacy in this key strategic technology is perceived as open, with China claiming legitimate ambitions.

Technological definitions and normative understandings of AWS

The USA and China have published national strategy papers as well as position papers at the CCW that are of a technical nature, aiming to define AWS. These documents have to be read against the backdrop of the larger SIs introduced above (“Approaching autonomous weapons embedded in sociotechnical imaginaries” section), which motivate and legitimate each state’s strategic interpretative flexibility in creating and promoting AWS definitions. Hence, these documents not only indicate which understanding—and technological variation—of autonomous weapon systems is to be prioritised, but also raise the question of what greater ends these specific interpretations serve. For example, in much the same way as the US definitions of AWS, the Chinese “lawfare objectives” keep a backdoor open for developing automated weapons that escape the narrow attributions of autonomy found in the AWS documents, leaving many military applications legally and politically unaffected. A closer look at the national AWS definitions in the following sections will illuminate this issue.

Technological definition: United States of America

The DoD Directive 2012/2017 [US.PosP1, emphasis added] provides seemingly unequivocal definitions:

“Autonomous weapon system. A weapon system that, once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.”

(...)

“Semi-autonomous weapon system. A weapon system that, once activated, is intended to only engage individual targets or specific target groups that have been selected by a human operator.”

A first problem with the US definition arises with the role of the human operator as a defining criterion for autonomy. As discussed in the “Definitions focusing on the degree of human control over supposedly autonomous systems” section, the USA conceptually advocates a relational approach to autonomy, linking it to human presence. But the essential question of what an autonomous system comprises cannot simply be addressed by determining whether a human is in the loop or not. The degree of human intervention may tell us something about how such weaponry is used, but it does not help much in defining what it is. As Crootof clarifies: “If a weapon system has the capacity to independently select and engage targets, whether there is a human supervisor or whether it is operated in a semi-autonomous mode is a question of usage—and thus regulation—and not of autonomy” [11]. Very powerful weapons can be controlled by an operator and restrained such that their fire power (e.g. operational speed, fire range or power of devastation) is rarely fully in use. But from this observation, we can hardly deduce that we have arrived at the very essence of what the weaponry actually is and what it is capable of. The role of human intervention in AWS is ethically and politically a much-needed debate—though not one without pitfalls, as various authors have discussed with regard to “meaningful human control” [24, 68,69,70,71,72]—but it creates further confusion when regarded as an appropriate characteristic for defining AWS.

More problematically, making the definition of AWS dependent on human intervention creates new loopholes for escaping effective legal regulation. The fundamental problem with the DoD definition stems from the fact that its standards for autonomy are simply very low—in fact, it does not do justice to the term autonomy at all. The definition does not engage with the complexity of the term or clarify what is really meant by autonomy. Should autonomy be understood as self-sufficiency, or as self-directedness and hence as independence from outside control [73] (see “Technical definitions of autonomy and autonomous weapon systems” section)? Also, as problematised above, operation under pure autonomy, as the DoD document suggests, is a myth, since any technical device is influenced by external factors such as technical infrastructure, terrain, etc.

In essence, the DoD reduces the term autonomy to a process of automation: any (non-)trivial system—whether mechanical or algorithm-based—that, once activated, automatically processes tasks (hence, without further human intervention) and interacts with an environment would meet this criterion. Following the US reasoning, it is extremely hard to differentiate between advanced and very rudimentary mechanical or algorithmic systems, as literally any of them can be reduced to processes of automation. Thus, reducing autonomy to a process of automation introduces the notion of a continuum, making a clear differentiation among weaponry ubiquitously labelled “intelligent” impossible and the distinction between full and semi-autonomy ever more complicated (cf. “Definitions focusing on the degree of human control over supposedly autonomous systems” section).

Take, for example, the case of radar detection systems, which have been in use for decades and which are capable of identifying, selecting and targeting enemy objects without the necessity of human intervention. The only difference between such systems and AWS would be the capability of automatically engaging these targets. But weapon systems that fulfil this additional criterion have existed for years already, perhaps the best example being the Phalanx system [74], which has been in use since the 1980s and hardly raised any regulatory concern back then [75]—especially not from the US side.

Problematically, the DoD definition cannot account for military advancements in fire power or complex machine behaviour, such as the adaptation enabled by new data processing capabilities in machine learning—advancements leading to a myriad of new problems such as the unpredictability [76, 77] or opacity [78, 79] of machine behaviour, which are connected to the safety, incomprehensibility and accountability issues well known from the civil AI regulatory debate. These phenomena in turn raise the fundamental question of whether deploying LAWS violates the Geneva Conventions and IHL. If machine behaviour becomes ever more unpredictable, opaque and complex, it is debatable whether the IHL principles of distinction, proportionality and accountability regarding those hors de combat can be met at all [80,81,82].

The USA has never claimed to refrain from developing LAWS; in fact, it has even cherished their advantages (see “Military doctrine: The United States of America” section [US.CCW3]) and, as discussed above, threatens adversaries that it may “develop LAWS in the future if US competitors choose to do so” [US.PosP15]. This statement is, if one takes the DoD definition as a reference, strictly speaking false. As discussed in relation to the Phalanx system, the USA has used LAWS in the past already and still does so today [US.PosP12] [83, 84].

Conclusively, the DoD definition has the problematic effect of levelling so many weapon systems down into one category that the critical advancements in weapon capabilities now underway cannot be accounted for (making compliance with the Geneva principles more challenging). With such a vague and all-encompassing definition, effective legal regulation becomes ever more complicated, ensuring that national advances in the development of LAWS are not impeded.

Technological definition: China

China’s contributions to the discussions at the CCW are rather limited, but they serve well to illustrate China’s ambivalent stance on AWS, echoing its international normative positioning (as introduced in the “Military doctrine: China” section). Their ambiguity helps to keep a strategic backdoor for optionality open. In the 2017 CCW negotiations, China adopted a positive stance on international regulation, favouring preventive arms control: “The international community should follow the concept of universal security on the basis of existing international law, carry out preventive diplomacy, check the trend of an arms race in the high-tech field and maintain international peace and stability” (12th December 2017, p 5). This is in accordance with the multilateralist stance voiced in the general AI policy trajectory of the country (“Actively participate in global governance of AI (...), Deepen international cooperation in AI laws and regulations, international rules (...) and jointly cope with global challenges” [CH.PosP4, p 25] [85]).

Such a preventive regulatory stance was presented more critically in 2018. Here, China states that “(...) the impact of emerging technologies deserve objective, impartial and full discussion. Until such discussions have been done, there should not be any pre-set premises or prejudged outcome, which may impede the development of AI technology” [CH.CCW2, p 2]. This rather innovation- and military-friendly policy reveals clear reservations against a precautionary principle that would regulate LAWS restrictively and prevent an AI arms race. The ambivalence seems even more striking when looking at the Chinese LAWS definition presented at the CCW:

Definition [CH.CCW2, p 1; enumeration added by the authors for clarity]

According to the Chinese view, “LAWS should include but not be limited to the following 5 basic characteristics”: (1) Lethality, “which means sufficient pay load (charge) and for means to be lethal”; (2) Autonomy, “which means absence of human intervention and control during the entire process of executing a task”; (3) Impossibility for termination, “meaning that once started there is no way to terminate the device”; (4) Indiscriminate effect, “meaning that the device will execute the task of killing and aiming regardless of conditions, scenarios and targets”; (5) Evolution, “meaning that through interaction with the environment the device can learn autonomously, expand its functions and capabilities in a way exceeding human expectations”.

Conceptually, these LAWS criteria display a pick-and-mix approach: the first states the obvious; the second shows a strong similarity to the US definition (with the pitfalls discussed above); the fourth signals compliance with the Geneva principles of IHL; and the fifth hyperbolises, picking the fancy term “evolution” (borrowing imagery from the biological domain and perhaps even evoking fantasies of an organic, autopoietic and self-reproducing machinery that creates awe by exceeding human capabilities) to label adaptation in machine learning processes.

The real crux lies in the third of these criteria, which hypothesises that once started, there is no way to terminate a device. In essence, this scenario describes a universally destructive and ultimately absurd idea. Machines are not perpetual motion devices; they rely heavily on infrastructure, supervision, context and so on, so machine self-sufficiency is clearly a myth (see “Technical definitions of autonomy and autonomous weapons systems” section). Strictly speaking, these criteria depict sensationalist doomsday fiction, once more demonstrating the hybridity of the entire AWS discourse, in which realpolitik, imagination, possibility and fiction are conflated [86]Footnote 16 (“Approaching autonomous weapons embedded in sociotechnical imaginaries” section).

It is exactly these unrealistic criteria for autonomous weapons that sustain the idea of promoting seemingly less dangerous, merely “automatic”, weapon systems, undermining national and international legislative efforts. Where the US definition sets the benchmark for AWS too low, the Chinese definition sets it too high, rendering the existence of such systems a near science-fiction scenario. Hence, demands to ban AWS following these criteria can largely be understood as a political gesture of purely symbolic value. Implicitly, the development of autonomous and semi-autonomous weapon systems is not only tolerated but by definition appears as a legitimate course of action. This perfectly reflects the objectives laid out in so-called asymmetric lawfare (see “Military doctrines, autonomous weapons and AI imaginaries” section): the legally vague, even bland criteria applied in the description and definition of LAWS have the intended effect of not curtailing one’s own political scope of action.

In conclusion, both countries oppose a complete ban on AWS, and with the definitions they promote at the CCW, they clearly leave a backdoor open for further development and use.

Conclusion

This paper reveals the ways in which (lethal) autonomous weapon systems (AWS) are used as flexible reference objects in political communication. It shows how the USA and China embed AWS in their military doctrines and uncovers idealisations of geopolitical orders. The analysis navigates between different theoretical disciplines in order to deconstruct these national quests, which are interpreted as competing sociotechnical imaginaries (SIs). Both nations employ semantic manoeuvres in the realm of LAWS to advance their military interests. The chosen approach, which considers AWS as geopolitical signifiers of national particularities, reveals both similarities and differences. This is hardly a surprise, since SIs are strategically deployed as part of political communication: only by making the motifs mutually decipherable while at the same time stressing differences can both sides ensure an intelligible communicative back and forth.

The main objective shared by both sides is to make AWS serve specific goals in political communication. In particular, the two nations use the term AWS as a semantic means of deterrence in hybrid warfare. More recent political developments illustrate an escalating rhetoric that also points to the function of military technology as a semantic vessel. On the US side, subtle terminological changes (such as substituting “potential U.S. adversaries” for “U.S. competitors”) have been accompanied by an increasingly transparent and conscious unmasking of the CCW negotiations as an arena of rhetorical contest. The worsening international security situation has motivated the USA to lower its standards of human control over AWS, which makes their employment more likely. Such endeavours undermine international humanitarian efforts to establish binding and supranational rules to regulate AWS. On the Chinese side, the doctrine of overt lawfare and media warfare has been apparent since the PLA’s announcement in 2003. Recently, this self-portrayal has painted the picture of a transformation from an “AI underdog” to an assertive hegemon by means of AI superiority.

In another conspicuous similarity, the military doctrines of both countries are clearly linked to narratives of technological progress, with the USA and China emphasising that intelligent weaponry can be used to safeguard their respective geopolitical goals (especially regarding disputed territories and spheres of influence). AI technologies are tied to overt efforts to establish legitimacy for military technology advancements and aggressive military strivings. Technological superiority is elevated to a sublime status and portrayed as indispensable for securing national orders in a perceived arena of fierce international competition (the AI weapons race). The emphasis on national resilience, whether to defend military hegemony (USA) or to catch up and take pole position (China), brings to the fore larger national imaginaries that articulate idealisations of world orders and their respective value foundations. AWS, embedded in SIs and especially in the broader context of AI, articulate visions of national pride that are sought in technological advancement and achievement, even if they are at times hidden behind the smokescreen of international collaboration.

Major differences are apparent in the linguistic manoeuvres by which the USA and China pursue their goals. The US military definitions of AWS, which also serve as a conceptual blueprint for many other institutions and organisations, operate on a conceptual continuum, mainly reducing autonomous qualities to processes of automation. Taken together with the relational understanding of autonomous systems (which always necessarily involves human agency), this effectively creates a hybrid understanding of automatic and/or autonomous (weapon) systems. This blurring makes it all the more challenging to find legal parameters for the regulation of AWS. As an effect of this indeterminacy, national ambitions regarding the development of novel weapon technologies remain unaffected: the lack of clarity allows for a historical perspective, focused on functions such as target selection and engagement, which draws a continuous line from CIWS to today’s elaborate systems. Innovative technological features, which include machine learning operations and therefore enable unprecedented adaptive qualities and unpredictable behaviour, remain largely unaccounted for in the US definition of AWS.

The understanding of AWS promoted by China at the CCW has intentionally fostered a definitional ambiguity that helps to keep the strategic backdoor for the development of “intelligent” weapons open, despite publicly displayed efforts to curtail their development and use. This is achieved on the one hand by taking an ambivalent stance towards preventive measures against novel technologies and on the other by promoting a wildly contradictory and bizarrely unrealistic understanding of AWS. It is the latter in particular that helps to legitimise the use of automatic weapons, which are indirectly portrayed as the much less worrisome technology.

On an international level, the semantic ambiguities of both states, which employ value-laden concepts such as machine autonomy and (human) control in the context of AWS, are deliberately exploited in order to subvert efforts towards their effective regulation. Effectively, both nations are undermining global efforts to prevent an AI weapons race, even as they simultaneously promote a rhetoric of appeasement and collaboration. If autonomy is understood as a relational quality that is always interwoven with external factors, the difference between autonomous and “merely automatic” systems is blurred. Novel military technologies then seem fully legitimate, as they are presented as a mere continuation of the weapon systems of the past, which sparked little controversy at the time. If, on the other hand, autonomy and autonomous systems are defined as operating completely independently of external factors such as infrastructure, energy supply, human oversight or decisions, the portrayal of AWS crosses the boundary into the realm of the conceptually impossible. Regulating AWS then becomes a futile endeavour, since such technologies do not exist. It is exactly this paradoxical double bind that undermines much-needed international regulation and ensures that states can continue the development of highly automatic and destructive weaponry.

European actors have not contributed to an effective regulation of LAWS either. Neither Germany nor France, both powerful EU nations, is listed by the Campaign to Stop Killer Robots among the countries calling for a prohibition on fully autonomous weapons, even though both are active in the CCW process [87]. Their efforts towards a voluntary regulatory framework may appear less affirmative than the positions of countries that strictly oppose a ban on LAWS, but this seems to be just another manoeuvre to circumvent tight regulation. The USA has happily exploited the German and French initiative as a model for “alternative approaches to manage LAWS” and is now advertising its own “nonbinding Code of Conduct” to “help States promote responsible behaviour and compliance with international law” [US.PosP15]. Effectively, these declarations should be understood as a fig-leaf strategy that mobilises humane rhetoric while seeking legitimacy for a soft approach to LAWS regulation.

From a theoretical and analytical standpoint, a multidisciplinary lens is pivotal to making sense of the complex interdependence of conceptual frameworks, technological applications and performative rhetoric. This lens also significantly sharpens our understanding of how these elements contribute to the present and future development of weapons technologies and the meanings attributed to them. It has the potential to inspire much-needed research on the different political, legal and cultural (semio)spheres in order to further illuminate the functions and effects of AWS embedded in SIs.

When such momentous technologies are at issue, it is of paramount importance to defend the valence of concepts such as autonomy, accountability and responsibility. It is imperative to prevent these concepts from being watered down as a consequence of power plays in the political arena.