Introduction

Negotiations over a ban on lethal AWS have been ongoing at the Convention on Certain Conventional Weapons (CCW) in Geneva since 2014. In parallel, the problematic impact of AWS, as well as of tele-operated drones, has been critically debated in academia (Gregory 2011; Bhuta et al. 2016; Suchman and Weber 2016; Weber 2016) as well as by non-governmental organizations and investigative journalists, such as the Campaign to Stop Killer Robots, Code Pink, the Bureau of Investigative Journalism, and the International Committee for Robot Arms Control (ICRAC). These actors have documented and discussed massive violations of human rights by tele-operated drones and outlined the potentially severe humanitarian consequences of deploying AWS.

Nevertheless, the efforts of human rights and arms control advocates have had little effect in the public and political arena so far. Despite all warnings of the devastating consequences of these systems for civilians, and ignoring the fact that they serve as a trigger for a new global arms race, the military, the defense industry, and politicians stage drones as ‘precise and clean’ and as a remedy to save the lives of one’s own soldiers. Against this background, AWS have at the same time been staged for many years as a problem that will only become reality in the far future, or as systems that will always be controllable through a human on the loop or through ‘responsible AI’ (Scharre 2018).

Interest in AWS is rising in many nations as loitering munitions (so-called kamikaze drones) are increasingly discussed in the media as the decisive game changer for military advantage, first in the Nagorno-Karabakh war (Deutsche Welle 2021) and today especially in the Russia-Ukraine war (Hambling 2023). And again, there is little debate about the tremendous consequences of these weapons for civilians as AWS turn into a permanent threat from above: ‘Some next-generation military drones rely on artificial intelligence to circle over an area, pick out enemy units and destroy them. In the coming years, drone technology will improve, and the cost of drones will decline. As they do, the frightening truth is that troops and civilians in future conflicts will find fewer and fewer places to hide from the gaze of both man and machine’ (Kingsbury 2022). The recently started Israel-Gaza war is a terrifying example of this development (Davies et al. 2023).

Against this background, several legal, social science and humanities scholars, as well as journalists and activists, claim that a ‘new human right to protect the freedom to live without physical or psychological threat from above’ (Grief et al. 2018) is required. This proposed new human right has been developed in the Airspace Tribunal hearings held in London, Sydney, Toronto and Berlin in recent years.

Contested imaginaries of AI

According to some AI experts, an enormous obstacle to a realistic debate about the potential of AWS is the sociotechnical and pop-cultural imaginary of AI shaped by Hollywood blockbuster films such as the Terminator series, ‘Ex Machina’, or ‘I, Robot’. These films often stage autonomous AI dramatically as a conscious, evil super-intelligence striving for the erasure of the human race. While the power of the technology is massively overstated, it is at the same time neutralized through this overstatement: the popular discourse revolves around the question of whether AI can gain consciousness, while the concrete effects of applied AI, such as the loss of meaningful human control (Sharkey 2016) and the reconfiguration of human–machine relations (Suchman and Weber 2016), have been mostly overlooked. ‘We have witnessed high-level defense officials dismissing the risk on the grounds that their “experts” do not believe that the “Skynet thing” is likely to happen. Skynet, of course, is the fictional command and control system in the Terminator movies that turns against humanity. The risk of the “Skynet thing” occurring is completely unconnected to the risk of humans using autonomous weapons as WMDs or to any of the other risks […]. If even senior defence officials with responsibility for autonomous weapons programs fail to understand the core issues, then we cannot expect the general public and their elected representatives to make appropriate decisions’ (Russell et al. 2018).

‘Enough to kill half a city’

Arms control advocates, from peace researchers to computer scientists, aim to foster a more realistic imaginary of AWS and to illustrate their deadly potential. The Future of Life Institute and the well-known AI expert Stuart Russell, together with a professional film team, developed short YouTube videos to make the consequences explicit for a broader audience. The first video, called ‘Slaughterbots’, went viral after its release in 2017, receiving more than two million views within a few days, even though it was not a science fiction trailer but a science communication video. The video starts with a typical CEO presentation in which the protagonist demonstrates the capabilities of emergent drone swarms, released in hundreds or thousands from an aeroplane, which, according to the CEO, allow an ‘airstrike of surgical precision … A 25-million-dollar order now buys this … Enough to kill half a city, the bad half’, because it ‘allows you to separate the good guys from the bad’ (Slaughterbots 2017). The drones are equipped with face recognition software to follow and kill selected targets, identified via their social media profiles, for example. With this new weapons system, the CEO claims, ‘nuclear is obsolete’ (ibid.). The rest of the video develops two main scenarios in which critical members of parliament and hundreds of politically engaged students are lethally attacked by drone swarms. Stuart Russell warns of the problems and effects of autonomous weapons: ‘What we were trying to show was the property of autonomous weapons to turn into weapons of mass destruction automatically because you can launch as many as you want’ (ibid.). The video impressively sketches the potential of AWS for mass destruction, also in the civilian context. Russell makes clear that the dangerous capabilities of AWS shown in the film are not decades away (as often claimed by some countries at the CCW talks in Geneva) but the ‘results of integrating and militarizing technologies that we already have’ (ibid.), and that this development needs to be stopped: ‘Allowing machines to choose to kill humans will be devastating to our security and freedom. We have an opportunity to prevent the future you just saw, but the window to act is closing fast’ (ibid.).

In the meantime, the first autonomous drones appear to be in operation. In 2020, a UN report stated that the first autonomous drones had been used in Libya (United Nations, Security Council Report on Second Libyan Civil War 2020, 17/548), but this statement was never confirmed. In 2023, the Ukrainian drone company Saker proudly announced the use of autonomous drones (Hambling 2023).

Autonomy

One of the key issues in understanding AWS is the complex and multiple meanings of autonomy. In the humanities and social sciences, as well as in everyday life, autonomy is associated with a free and self-aware subject that acts self-determinedly and consciously. Even though this Kantian concept has been challenged by well-known theorists from Karl Marx to Judith Butler, it still predominates in realms such as ethics, law, economics and everyday life. The concept of autonomy used in AI and robotics has a very different meaning: it follows a cybernetic concept of purposeful behavior in the sense of a pragmatic, physiological, automated mechanism, like the target-seeking mechanism of a torpedo. Today’s control mechanisms in AI systems are much more sophisticated than traditional servomechanisms, but AWS nevertheless neither follow their ‘own’ rules nor are they capable of decision-making in a wider, (self-)reflective sense. They are determined by norms, values and categories which were programmed into the software by computer scientists, and although the complexity of software layers might lead to unpredictable effects, these are not intentional (Suchman and Weber 2016). The Slaughterbots drones can find and follow targets, for example via the social media profiles of the people to be killed, but this behavior is preprogrammed. It is this sophisticated entanglement of autonomous and preprogrammed behavior in autonomous systems that makes it so difficult to understand the challenges they pose.
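To make this distinction concrete, the following minimal sketch illustrates what ‘autonomy’ in this cybernetic sense amounts to. It is a purely schematic, hypothetical illustration, not the logic of any real system; the profile fields, the criteria, and all names are invented for this example. The point is structural: once activated, the loop runs without further human interaction, yet every ‘decision’ traces back to a rule fixed in advance by programmers.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    """Hypothetical record assembled from pre-collected data."""
    name: str
    group_membership: str     # a category defined by the programmers
    face_match_score: float   # output of an upstream recognition model

def matches_criteria(profile: Profile) -> bool:
    # The 'decision rule' is nothing the system chose for itself: it is
    # a predicate written in advance, encoding human norms, values and
    # categories. The category and threshold are arbitrary examples.
    return (profile.group_membership == "flagged_group"
            and profile.face_match_score > 0.9)

def select_targets(profiles: list[Profile]) -> list[Profile]:
    # 'Autonomous' here means only: once activated, the selection runs
    # without further human interaction. There is no reflection and no
    # goal-setting, just the mechanical application of a fixed rule.
    return [p for p in profiles if matches_criteria(p)]
```

However sophisticated the perception and control layers become, the structure stays the same: the criteria are given to the system, not chosen by it.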

The arms control imaginary: WMDs

In the Slaughterbots video, it becomes obvious that autonomous drone swarms are not self-determined and self-conscious intelligent ‘organisms’. The ‘Slaughterbots’ are programmed to select their targets via data analytics according to pre-given criteria: for example, identifying, searching for and targeting leftist students engaged in an anti-corruption NGO via their social media profiles. The bots seek out their targets using facial recognition and kill them with explosives. The Slaughterbots may show coordinated, flexible behavior in performing their tasks (avoiding obstacles, following humans, etc.), but these swarms are neither conscious nor capable of setting their own agendas and developing their own goals. The arms control imaginary strives to show the decisive difference between a Hollywood imaginary of self-conscious, intelligent, autonomous AI and a more realistic Slaughterbots imaginary of AI as a collection of smart software programs.

The imaginary of the Slaughterbots video emphasizes that today’s AI makes it possible to automate sophisticated and sensitive tasks that are normally performed by humans. These software programs are not intelligent in themselves. Nevertheless, adaptive, coordinated drones as well as drone swarms can easily be turned into WMDs.

The arms control advocates’ Slaughterbots video is, in my view, an important step toward the development of a new AI imaginary that is not built on the old trope of the evil, almighty wrongdoer, but which makes visible the pressing questions of arms control for AI-based systems and the dimension of lethal autonomous weapon systems as weapons of mass destruction. Today, autonomous drones are already in operation, even as many activists are working towards a ban on such systems. This ban is desperately needed as an important contribution to ending the fear of being tracked and targeted from above, as demanded by the Airspace Tribunal.

Notes

  1. This essay partially relies on an earlier paper: ‘Artificial Intelligence and the Sociotechnical Imaginary: On Skynet, Self-Healing Swarms and Slaughterbots.’ In: Kathrin Maurer and Andreas Immanuel Graae (eds.): Drone Imaginaries and the Power of Vision. Manchester: Manchester University Press 2021.

  2. Autonomous weapons are defined as systems ‘that, once activated can track, identify, and attack targets with violent force without further human interaction’ (Sharkey 2016, 3).

  3. For the concept of the imaginary, see Jasanoff and Kim (2009, 2015), Mager and Katzenbach (2021), and McNeil et al. (2017).

  4. Slaughterbots, directed by S. Sugg, written by M. Wood, YouTube (2017), last accessed 10.10.2022, www.youtube.com.