Introduction

While AI solutions are becoming an essential tool driving change and development in many private and public sector domains, progress in the use of AI for military purposes has been hindered by a number of important ethical questions for which answers have been lacking. These questions primarily concern autonomous military platforms and typically center on the use of lethal autonomous weapon systems (LAWS) and the potential risk of nuclear escalation. A recent literature review on data science and AI in military decision-making found that most studies examining these topics originate in the social sciences. As a result, the debate about the use of AI for military purposes, although of high strategic importance, appears to be limited in scope and perspective. Additionally, the use of data science at the operational and strategic levels seems to be largely under-examined in the current literature (Meerveld & Lindelauf, 2022).

In this paper, we argue that the ethical discussion on the use of AI in military operations should shift its focus away from so-called ‘killer robots’ and fully autonomous AI applications towards solutions that remain subject to (meaningful) human control. As argued by various researchers [e.g., Tóth et al. (2022)], the use of LAWS is generally considered illegal and immoral, despite potentially decreasing risks to military personnel. There is also consensus among policy makers that AI cannot fully replace human decision-making. However, it is necessary to examine both the opportunities and risks of military AI in a broader context and to explore how AI technology can be controlled, supervised and potentially assimilated into force structure and doctrine (Johnson, 2020a, b), either strengthening or complicating deterrence (Johnson, 2019, 2020a, b). In line with a consequentialist approach to the ethics of military AI, we argue that any discussion of the responsibility of AI-based decision support techniques should take into account military effectiveness and the entire decision-making chain in military operations. For example, certain types of military AI robots subject to human control and judgment may be permissible for self-defense purposes, human-AI teaming could lead to faster and more appropriate decision-making under pressure and uncertainty, and AI systems could be broadly used for adaptive training of military personnel, thereby helping to mitigate decision-making biases [e.g., by detecting drowsiness or fatigue from neurometric signals in the brain (Weelden et al., 2022)].

In Fig. 1 we visualize the current debate on responsible AI in a military context and its focal points (i.e., the lower right quadrant, Machine Weakness (MW), and the endpoint of the MDMP). In what follows, we first elaborate on the military decision-making process (MDMP) that in large part precedes lethal target engagement on the battlefield. Next, we present examples of the potential use of AI solutions in the MDMP together with their benefits, and then turn to the issue of the (ir)responsibility of military AI.

Fig. 1 Characterization of the debate on responsible AI in a military context. The red dashed lines indicate the focus of current literature, while the ideal scope of the debate is represented by the blue dashed lines. (Color figure online)

AI in support of the military decision-making process (MDMP)

Military decision-making consists of an iterative logical planning method to select the best course of action for a given battlefield situation. It can be conducted at levels ranging from tactical to strategic. Each step in this process lends itself to automation. This holds not only for the MDMP but also for related processes such as the intelligence cycle and the targeting cycle. As argued in Ekelhof (2018), instead of focusing on target engagement as an endpoint, the process should be examined in its entirety. To illustrate this point, we visualized the preferred scope with the blue circle in Fig. 1. Below, we first briefly describe the MDMP. Subsequently, we explore the potential advantages of AI in decision-making and provide some examples of how AI can specifically support the MDMP at several different (sub-)steps.
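To make concrete what “lends itself to automation” can mean at the level of a single sub-step, the sketch below mimics a weighted decision matrix of the kind used when comparing courses of action. It is a minimal Python illustration; the criteria, weights, scores, and COA names are hypothetical placeholders and do not reflect doctrinal values or any fielded system.

```python
# Minimal sketch of an automatable COA-comparison sub-step: score candidate
# courses of action against weighted evaluation criteria.
# All criteria, weights, and scores below are illustrative placeholders only.

from dataclasses import dataclass

@dataclass
class CourseOfAction:
    name: str
    scores: dict[str, float]  # criterion -> staff estimate on a 1-5 scale

# Hypothetical evaluation criteria with relative weights set by the commander.
WEIGHTS = {"speed": 0.4, "force_protection": 0.35, "simplicity": 0.25}

def weighted_score(coa: CourseOfAction, weights: dict[str, float]) -> float:
    """Aggregate a COA's criterion scores into a single weighted value."""
    return sum(weights[c] * coa.scores[c] for c in weights)

def rank_coas(coas: list[CourseOfAction], weights: dict[str, float]) -> list[tuple[str, float]]:
    """Return COAs ordered from highest to lowest weighted score."""
    return sorted(((c.name, weighted_score(c, weights)) for c in coas),
                  key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    candidates = [
        CourseOfAction("COA-A (envelopment)", {"speed": 4, "force_protection": 3, "simplicity": 2}),
        CourseOfAction("COA-B (frontal attack)", {"speed": 3, "force_protection": 2, "simplicity": 5}),
    ]
    for name, score in rank_coas(candidates, WEIGHTS):
        print(f"{name}: {score:.2f}")
```

The automation here is trivial by design; the point is that such structured scoring steps recur throughout the MDMP and can be delegated to software while the weighting and approval remain with the commander.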

The MDMP and its challenges

The US Army defines seven steps in the MDMP: (1) receipt of mission, (2) mission analysis, (3) course of action (COA) development, (4) COA analysis, (5) COA comparison, (6) COA approval, and (7) orders production, dissemination, and transition (Reese, 2015). The level of detail of the MDMP depends on the available time and resources, as well as other factors. Each step in the MDMP has numerous sub-steps that generate intermediate products. Examples include intelligence products developed during the intelligence preparation of the battlefield (IPB) that are used to indicate COAs and decision points for commanders, or geospatial products from terrain analyses that can include recommendations on battle positions and optimal avenues of approach. The intelligence cycle, per NATO standard consisting of four steps (Direct, Collect, Process, and Disseminate) (Davies & Gustafson, 2013), is a separate but related sub-process by which these intelligence products are created. Other examples of sub-processes in the MDMP are the targeting cycle, as explained by Ekelhof (2018), and the continuous lessons-learned process that incorporates best practices and lessons learned into military doctrine (Weber & Aha, 2003) and ultimately forms important input for, for example, the COA development phase.

The MDMP and its related processes entail many labor-intensive, handcrafted products. This has two important consequences. First, due to the complexity of the information space, the MDMP is highly susceptible to cognitive biases. These can be both conscious and unconscious and may result in suboptimal performance. An example of a cognitive bias is groupthink, a problem typically encountered during the analysis and assessment phase of the intelligence cycle (Parker, 2020). Another example is anchoring bias, where decisions are based on initial evidence (the anchor) (Heuer, 1999), as in a scenario where a group of aviators must determine optimal battle positions after having received an initial list of good locations during helicopter mission planning. Even though intuitive decision-making in the MDMP may be effective, it is well known that both intuition and uncertainty can lead to erroneous decision outcomes (Van Den Bosch & Bronkhorst, 2018). Because human cognitive mechanisms are ill-equipped to convert a high volume of data into valuable knowledge (Cotton, 2005), susceptibility to cognitive biases increases with the exponential growth of data volume (Heuer, 1999). The challenge of information overload is expected to grow, since modern military operations increasingly rely on open-source data (Ekelhof, 2018).

Second, labor-intensive processes tend to be time-consuming. The contemporary digitized environment results in a proliferation of data sources in different formats (i.e., numerical, text, sound, and image), and intelligence production requires their fusion and interpretation (Van Den Bosch & Bronkhorst, 2018). In most military situations, it is of high importance to design efficient and streamlined planning processes, avoiding labor-intensive sub-steps where possible, to ensure that no time is lost (Hanska, 2020). After all, the aim is to outpace the opponent’s OODA-loop (i.e., Observe, Orient, Decide, Act) (Osinga, 2007), and AI-based automation can be an important driver of such efficiency gains. In addition, time pressure can further increase the chance of cognitive bias [e.g., Roskes et al. (2011); Eidelman & Crandall (2012)]. In sum, human decision-making mechanisms appear to be deficient in many military circumstances, given a limited capacity to process all potentially relevant data and a limited amount of time. The value of AI lies in its capacity to support human decision-making and thereby optimize the overall outcome (Lindelauf et al., 2022). In the next section, we address the opportunities offered by AI in more detail by presenting examples of automation of (sub-)elements of the MDMP.
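Returning to the battle-position example above, the following sketch illustrates how an automated, exhaustive screen of candidate positions side-steps the anchoring effect of an initial shortlist. The scoring function and candidate data are invented for illustration; a realistic terrain-analysis model would be far richer.

```python
# Sketch: anchoring bias vs. exhaustive automated screening of battle positions.
# A planner who starts from an initial shortlist (the "anchor") may never
# evaluate positions outside it; an automated pass can score every candidate.
# The scoring function and candidate data below are illustrative assumptions.

import random

def position_score(pos: dict) -> float:
    """Toy scoring: weigh fields of fire against exposure (illustrative only)."""
    return 0.6 * pos["fields_of_fire"] - 0.4 * pos["exposure"]

random.seed(1)
candidates = [
    {"id": i, "fields_of_fire": random.uniform(0, 1), "exposure": random.uniform(0, 1)}
    for i in range(500)
]

anchor_shortlist = candidates[:5]                    # initial list handed to planners
best_anchored = max(anchor_shortlist, key=position_score)
best_overall = max(candidates, key=position_score)   # exhaustive automated screen

print("best within anchor shortlist:", best_anchored["id"], round(position_score(best_anchored), 3))
print("best over all candidates:    ", best_overall["id"], round(position_score(best_overall), 3))
```

The exhaustive pass does not remove human judgment from the choice of scoring criteria; it only ensures that the option space presented to the decision-maker is not truncated by the first evidence received.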

The added value of AI for military decision-making

Given the limitations of human decision-making, the advantage of (partial) automation with AI can be found both in the temporal dimension and in decision quality. A NATO Research Task Group, for instance, examined the need for automation in every step of the intelligence cycle (NATO Science & Technology Organization, 2020) and found that AI helps to automate manual tasks, identify patterns in complex datasets, and accelerate the decision-making process in general. Since the collection of more information and perspectives results in less biased intelligence products (Richey, 2015), using computing power to increase the amount of data that can be processed and analyzed may reduce cognitive bias. Confirmation bias, for instance, can be avoided through the automated analysis of competing hypotheses (Dhami et al., 2019). Other advantages of machines over humans are that they allow for scalable simulations, conduct logical reasoning, have transferable knowledge, and have expandable memory space (Suresh & Guttag, 2021; Silver et al., 2016).

An important aspect of the current debate about the use of AI for decision-making concerns the potential dangers of providing AI systems with too much autonomy, leading to unforeseen consequences. Part of the solution is to provide leadership with sufficient information about how AI systems have been designed, what their decisions are based on (explainability), which tasks are suitable for automation, and how to deal with technical errors (Lever & Schneider, 2021). Tasks not suitable for automation, e.g., those in which humans outperform machines, are typically tasks of high complexity (Blair et al., 2021). The debate on responsible AI should therefore also take human strengths (HS quadrant) into account. In practice, AI systems cannot work in isolation but need to team up with human decision-makers. Besides acknowledging bounded rationality in humans and ‘human weakness’ (viz. the lower left quadrant in Fig. 1; HW), it is also important to consider that AI cannot be completely free of bias, for two reasons. First, all AI systems based on machine learning have a so-called inductive bias, comprising the set of implicit or explicit assumptions required to make predictions about unseen data. Second, the output of machine learning systems is based on past data collected in human decision-making events (machine weakness, MW, viz. the lower right quadrant in Fig. 1). Uncovering the second type of bias may lead to insights regarding past human performance and may ultimately improve the overall process.
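As a minimal illustration of the automated analysis of competing hypotheses mentioned above, the sketch below ranks hypotheses by the amount of evidence that is inconsistent with them, which is the core idea behind ACH. The hypotheses, evidence items, and consistency ratings are hypothetical placeholders, not an operational example.

```python
# Minimal sketch of an automated Analysis of Competing Hypotheses (ACH) step.
# ACH ranks hypotheses by how much evidence is *inconsistent* with them,
# countering the tendency to look only for confirming evidence.
# Hypotheses, evidence items, and ratings below are hypothetical placeholders.

CONSISTENT, INCONSISTENT, NEUTRAL = 1, -1, 0

# evidence item -> {hypothesis: rating}
matrix = {
    "increased logistics traffic": {"H1: imminent attack": CONSISTENT,   "H2: routine exercise": CONSISTENT},
    "leave cancelled for units":   {"H1: imminent attack": CONSISTENT,   "H2: routine exercise": INCONSISTENT},
    "no bridging equipment moved": {"H1: imminent attack": INCONSISTENT, "H2: routine exercise": NEUTRAL},
}

def inconsistency_count(hypothesis: str) -> int:
    """Count how many evidence items contradict the given hypothesis."""
    return sum(1 for ratings in matrix.values() if ratings[hypothesis] == INCONSISTENT)

hypotheses = sorted({h for ratings in matrix.values() for h in ratings})
for h in sorted(hypotheses, key=inconsistency_count):
    print(f"{h}: {inconsistency_count(h)} inconsistent item(s)")
```

The value of automating this bookkeeping is not the arithmetic itself but that disconfirming evidence is recorded and weighed systematically and the resulting matrix remains auditable by the analyst.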

Examples of AI in the MDMP

It is important to examine the risks of AI and strategies for their mitigation. This mitigation, however, is of little use without examining the corresponding opportunities at the same time (MS quadrant in Fig. 1). In this section, therefore, we present some examples of AI applications in the MDMP. In doing so, we provide an impetus for expanding the debate on responsible AI by taking every quadrant in Fig. 1 into account.

An example of machine strength is the use of AI to aid the intelligence analyst in the generation of geospatial information products for tactical terrain analysis. This is an essential sub-step of the MDMP since military land operations depend heavily on terrain. AI-supported terrain analysis enables the optimization of possible COAs for a military commander and additionally allows for an optimized analysis of the most likely enemy course of action (De Reus et al., 2021). Another example is the use of autonomous technologies to aid in target system analysis (TSA), a process that normally takes months (Ekelhof, 2018). TSA consists of analyzing an enemy’s system to identify and prioritize specific targets (and their components), with the goal of optimizing the resources used to neutralize the opponent’s most vulnerable assets (Jux, 2021). Examples of AI use in TSA include automated entity recognition in satellite footage to improve the information position necessary to conduct TSA, and AI-supported prediction of enemy troop locations, buildup, and dynamics based upon information gathered from the imagery analysis phase. Ekelhof (2018) also provides examples of autonomous technologies currently in use for weaponeering (i.e., the assessment of which weapon should be used for the selected targets and related military objectives) and collateral damage estimation (CDE), both sub-steps of the targeting process. Another illustrative example of the added value of AI for the MDMP is in wargaming, an important part of the COA analysis phase. In wargames, for instance, AI can help participants understand the possible perspectives, perceptions, and calculations of adversaries (Davis & Bracken, 2021). Yet another example is the possibility of a 3D view of a certain COA, enabling swift examination of terrain characteristics (e.g., potential sightlines) to enhance decision-making (Kase et al., 2022). AI-enabled cognitive systems can also collect and assess information about the attentional state of human decision-makers, using sensor technologies and neuroimaging data to detect mind wandering or cognitive overload (Weelden et al., 2022).

Algorithms from other domains may also add value to the MDMP, such as the weather-routing optimization algorithm for ships (Lin et al., 2013), the team formation optimization tool used in sports (Beal et al., 2019), or the many applications of deep learning in natural language processing (NLP) (Otter et al., 2020); NLP applications that summarize texts (such as Quillbot and Wordtune) can decrease time to decision in the MDMP. Finally, digital twin technology (using AI) has already demonstrated its value in a military context and holds promise for future applications, e.g., enabling maintenance personnel to predict future engine failures on airplanes (Mendi et al., 2021). In the future, live monitoring of all physical assets relevant to military operations, such as (hostile) military facilities, platforms, and (national) critical infrastructure, might be possible.
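As a rough indication of what AI-supported terrain analysis might involve computationally, the sketch below frames the search for an optimal avenue of approach as a least-cost path over a mobility-cost grid. The grid values and the cost model are invented; an operational system would derive them from geospatial data (slope, soil, vegetation, obstacles) that this example does not attempt to model.

```python
# Sketch: terrain analysis as a least-cost path ("avenue of approach") search
# over a mobility-cost grid. Grid values are invented for illustration.

import heapq

def least_cost_path(grid, start, goal):
    """Dijkstra over a 2D grid; each cell value is the cost of entering that cell."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start, [start])]
    seen = set()
    while frontier:
        cost, (r, c), path = heapq.heappop(frontier)
        if (r, c) == goal:
            return cost, path
        if (r, c) in seen:
            continue
        seen.add((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen:
                heapq.heappush(frontier, (cost + grid[nr][nc], (nr, nc), path + [(nr, nc)]))
    return float("inf"), []

# Illustrative mobility costs: 1 = open terrain, 5 = restrictive, 9 = severely restrictive.
terrain = [
    [1, 1, 5, 9],
    [1, 5, 5, 9],
    [1, 1, 1, 1],
    [9, 9, 5, 1],
]
cost, route = least_cost_path(terrain, (0, 0), (3, 3))
print("movement cost:", cost)
print("route:", route)
```

Such a computation would only be one ingredient of the geospatial products mentioned above; the choice of cost factors, their weighting, and the interpretation of the resulting routes remain with the analyst and the commander.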

Conclusion

The debate on responsible AI in a military context should not focus predominantly on ethical issues regarding LAWS. By characterizing this debate along four quadrants, i.e., human versus machine and strength versus weakness, we argued that the use of AI in the entire decision-making chain in military operations is feasible and necessary. We described the MDMP and the challenges resulting from the labor-intensive, handcrafted products it involves. The susceptibility to cognitive biases and the time-consuming character of those labor-intensive processes present limitations to human decision-making. We conclude that the value of AI therefore lies in its capacity to support this decision-making and optimize its outcome. Ignoring the capabilities of AI to alleviate the limitations of human cognitive performance in military operations, thereby potentially increasing risks for military personnel and civilians, would be irresponsible and unethical.