There are various ways in which AWS could be used to create deceptions of the kind illustrated by the examples of military strategy described earlier. AWS are autonomous once launched, but the military can still be involved in decisions about when and where to deploy them. This is the case even if the weapons are set up to launch automatically when incoming missiles are detected—a decision has still been made by humans to set them up in this way. AWS could be launched in an area as a feint: mounting an attack to distract an adversary from an attack being prepared elsewhere. They could be launched as a demonstration, or show of force, attacking buildings or locations in order to create the impression of technological superiority. Of course, terrorists, non-state actors, and insurgents could also make use of AWS in a similar manner.
The more serious humanitarian risk is that deceptive strategies could be used against AWS to disrupt the behaviour of the machines. Humans are endlessly inventive and creative, and there is little reason to expect that the human targets of AWS will passively wait to be killed. Terrorists, insurgents, non-uniformed combatants and non-state actors are going to invent ways of deceiving and derailing AWS. Johnson [39] details the many adaptations and innovations of the Taliban during the asymmetric warfare conducted in Afghanistan. Al Qaeda are known to make use of denial and deception strategies [40], and in 2013 Al Qaeda counter-drone manuals were discovered in Mali, detailing 22 steps for avoiding drone attacks [41]. Bolton [42] states ‘People are too messy, unpredictable, clever and tricky to meet the assumptions programmed into military technology’. He gives as examples the ways that Vietnamese communist soldiers spoofed the electronic detectors dropped from US warplanes onto their paths through the jungle: ‘they sent animals down the trail, placed bags of urine next to so-called “people sniffers”, and played tapes of vehicle noises next to microphones—prompting computerized bombers to unload explosives onto phantom guerrillas’.
It is easy to underestimate the technological ingenuity of low-tech actors. A good example was the US military’s capture of Shia militants whose laptops contained many hours of video footage taken from US drones. They had used Skygrabber, a piece of software intended for downloading music and video and available on the internet for $26 [43].
Hezbollah carried out similar operations against Israeli forces as far back as 1996, when they used photographic evidence of an Israeli attack taken from an Israeli drone. Hezbollah also claimed to have used analyses of Israeli drone footage to plan ambushes, such as the “Shayetet catastrophe” in which 12 Israeli commandos were killed (the method used to hack the drone signals remains unknown) [44].
An important motivation for those likely to be subject to attack from AWS would be to cause the weapons to select targets in ways that reduced the risk of harm to combatants or civilians. For instance, if AWS were deceived into attacking dummy buildings instead of military installations, expensive firepower could be drawn and exhausted. Similarly, it would be advantageous to find ways of camouflaging military targets from sensors so that they were shielded from attack. A deception that caused AWS to target neutral or protected installations, such as hospitals, could create an effective public relations coup for a terrorist group (although it is not clear who should or would be held responsible in such a case). Of course, some forms of deception could have unwanted humanitarian consequences. For instance, if it were known that AWS were programmed to attack vehicles with the heat signature of tanks, efforts could be made to modify the tanks’ heat signatures to resemble those of buses or lorries. But the unwanted consequence of this could be a subsequent modification to the AWS sensors so that they targeted vehicles with the signature of buses or lorries, leading to wider devastation. This would be an example of the ‘monkey’s paw’ effect discussed in military accounts of deception, whereby seemingly effective deceptions result in unintended harmful side effects.
How could AWS be subject to deception? Once launched, they are dependent on their sensors and image recognition systems to detect the targets they have been programmed to attack. These are unlike human sensing systems and can be disrupted in ways that humans cannot even sense, such as by high-frequency sounds, bright lights, 2D images, or even small dots that are entirely meaningless to us (see e.g. [45]).
There is growing awareness of the limitations of image recognition systems [46] and the risks that they could be unintentionally deceived or mistaken. For instance, problems with the sensors and image recognition systems of autonomous cars have resulted in several Tesla crashes. Known objects in unexpected positions, such as a motorcycle lying on the ground, may not be recognised [47]. Self-driving cars and their sensors and software are known to have difficulties with rainy and snowy conditions [48]. In 1983, the sensors on Soviet satellites detected sunlight glinting on clouds, and the connected computer system misclassified the sensor input as the engines of intercontinental ballistic missiles. It warned Lieutenant Colonel Petrov of an incoming nuclear attack—an unintentional deception [12].
Existing limitations are likely to be magnified by intentional efforts to mislead and confuse those sensors and image detection programs. The seemingly sophisticated sensors of AWS might be able to penetrate camouflage designed to fool human senses. But, at the same time, available knowledge about the properties and limitations of the sensors used in computer control and classification could make it easy to hide from and misdirect AWS in ways that a human would not even notice.
There is a great deal of interest in the development of adversarial images. These are images that confuse image recognition systems trained using machine learning. For example, an image that to the human eye looks like a turtle can be perturbed with carefully crafted visual noise so that an image classification system recognises it as a rifle [49]. Adversarial images have been discussed in the context of autonomous cars—in one example, stickers added to a ‘Stop’ sign led to it being classified as a 45 mph speed limit sign [50]. There is research into ways of making classification systems resilient to adversarial images, for instance by training them to recognise such inputs as adversarial, but it is not clear how successful this would be, and constant retraining would be needed to respond to new adversarial developments.
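To make the underlying mechanism concrete, the sketch below shows one simple and widely studied way of generating such perturbations, the fast gradient sign method, in which each pixel is nudged by a small amount in the direction that most increases the classifier’s error. It is a minimal illustration rather than the method used in the studies cited above, and the names model, image, true_label and epsilon are placeholders for whatever classifier and input are being attacked.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    # model:      any differentiable image classifier (e.g. a convolutional network)
    # image:      tensor of shape (1, C, H, W) with pixel values in [0, 1]
    # true_label: tensor holding the correct class index
    # epsilon:    maximum per-pixel change; small values keep the perturbation
    #             imperceptible to a human observer
    image = image.clone().detach().requires_grad_(True)

    # Compute the classification loss with respect to the correct label.
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()

    # Step every pixel slightly in the direction that increases the loss,
    # then keep the result within the valid image range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

The perturbation added in the final step is bounded by epsilon and is typically invisible to a human observer, yet it can be enough to change the classifier’s output entirely; this is precisely the asymmetry between human and machine perception that a deceptive adversary could exploit.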
Another form of deception that could direct AWS to the wrong targets is ‘spoofing’: sending a powerful false GPS signal from a ground base. This could cause AWS to mislocate themselves and be guided to crash into buildings. Image classification systems could also be misled through the use of perfidious markers—such as placing a red cross on a military vehicle to prevent it from being targeted. In the light of adversarial images, it might be possible to mark such vehicles in a way that was picked up by an AWS yet was undetectable by the human eye. Other forms of perfidy are possible: for instance, if it were known that AWS would not target funeral processions, military manoeuvres could be disguised as funeral processions. Or, if they were programmed to avoid targeting children, combatants could walk on their knees.
Of course, human eyes might also be deceived by such disguises, but AWS are dependent on their programming, on the limitations of their sensors, and on their programmers having anticipated a particular deception out of the infinite number possible in conflict. Humans can be susceptible to deception, but they also have an understanding of human social situations and would be able to interpret a social gathering in a way that a weapon could not. Also, in the fog of war, there is the possibility that humans might intuit that something was amiss. When Lieutenant Colonel Stanislav Petrov was on duty in 1983 at the Soviet nuclear early-warning centre, he decided not to trust the repeated warnings from the computer about an incoming nuclear attack from the US. He did not initiate a retaliatory nuclear strike and correctly reported the warnings as a false alarm [11, 12].
This lack of human intuition and understanding in AWS, robots, and computer systems stems from the same technological (or metaphysical) shortcomings that result in their inability to adhere to the principles of discrimination and proportionality. It is another manifestation of the limitations of programmed or trained algorithms. There is a risk that AWS will be misled by the inputs they receive, and that they will have already attacked before any human has had the opportunity to sense that something is not right and that a mistake is about to be made. The speed at which such weapons are likely to operate (an argument sometimes used in their favour) exacerbates this risk and means that even when humans see that something is going wrong, they will be powerless to stop them.
Another problem with AWS, mentioned earlier, is the danger of unpredictable interactions between different algorithms. As explained by Sharkey [11, 35], the algorithms controlling AWS will be kept secret from the enemy. That means that it is impossible to know what will happen when two or more top-secret algorithms from opposing forces meet each other. Apart from the unknown interactions, the algorithms could be programmed for deceptive strategies such as feinting or sensor disruption.
Not only are there reasons to fear the unpredictable effects of different algorithms interacting; there is also the problem of unexpected interactions between the programming of the AWS and unanticipated environmental situations. In software testing, it is well known that bugs and errors will remain in code. As programmed entities, AWS cannot be tested for their behaviour in all of the unanticipated circumstances that can arise in conflict. And it is impossible to ensure that their behaviour will not be catastrophic in an environment of deceptive strategies.