The UK is superbly placed to take advantage of developments in AI, including in the area of national security. In recent years it has begun to chart a way ahead, notably by developing an AI strategy for defence and national security. Much work is underway within defence and more broadly in government, industry, and academia. The military is experimenting with new autonomous platforms and with the doctrines and concepts that might allow their effective employment. Aerial drone swarms, pilotless “loyal wingmen,” unmanned submersibles and tactical ground robots—all are part of the British military’s ongoing work. New partnerships with industry and academia have been developing. Autonomous systems are already at work in data processing and intelligence analysis. And across defence and in wider society, lively debates are underway about the ethical implications of using AI in national security, including in decisions about the employment of lethal force. Today, the pace of change is accelerating. New organisational structures are coming into being; new dedicated career streams are mooted; military education increasingly incorporates the study of AI; and, of course, new military systems, including weapon systems, are coming online.

But there are several challenges ahead. There is considerable uncertainty about the future development of AI. Equally, there are concerns about its application to defence, not only from an ethical standpoint but also in terms of its performance. How robust will AI systems be to adversarial countermeasures, including electronic and cyber warfare? How susceptible might they be to bias? How brittle might their performance be in the novel situations they might encounter in battle?

Such concerns are not unique to the British national security sector. But what makes them particularly challenging is Britain’s longstanding aspiration to retain full-spectrum military and intelligence capabilities, to operate at global reach, and to do so whilst undertaking a significant technological transformation. The British defence budget is large and growing, but so too are its aspirations and commitments. It will be a formidable challenge to develop new technologies, including AI (but also others, like new hypersonic missiles, satellites, sixth-generation fighter aircraft and a new generation of nuclear submarines) while maintaining broadly constituted armed forces.

More broadly there are wider concerns about the economic and political environment in which these changes will occur. The next few years will bring marked economic challenges from high inflation and slow growth, a combination not seen since the 1970s. Again, these are not unique difficulties, but in Britain they are exacerbated by uncertainties following Brexit, and by the UK’s low productivity. These headwinds impact the economy and society beyond immediate defence budgeting. Eventually, however, those broader issues will feed into defence via their impact on the UK’s research base, or its attractiveness to inward investment and high-skilled migration.

1 Thinking About Defence AI

The UK’s recent Integrated Defence and Security Review stressed the significance of AI for national security. The Review charged the government with establishing “a leading (global) edge in critical areas such as Artificial Intelligence” (HMG 2021a: 7). Responding to the Review’s top-level direction in a “Command Paper”, the Ministry of Defence (MoD) described AI as transformative and so “essential to Defence modernisation.” The paper stressed the need to move quickly on AI and envisaged applications “from the battlespace to the back office” (MoD 2021: 42).

A year later, in 2022, the Department published a detailed AI Strategy, produced following engagement with industry and academia (MoD 2022a). The new strategy added much more detail to the vision for defence AI outlined in the MoD’s 2019 Defence Technology Framework, where AI was bracketed alongside materials science, electronics, robotics, power storage and other rapidly developing technologies (MoD 2019: 18–20). In the years since, AI has expanded in importance, such that the current head of the Army referred to it in a 2021 speech as the “one ring to rule them all” (Sanders 2021). The new, expansive strategy paper sets out the Ministry of Defence’s bold ambition to become, in terms of AI, “the world’s most effective, efficient, trusted and influential Defence organisation for our size” (MoD 2022a). These are all somewhat subjective benchmarks; but rather than taking them literally, or dismissing them as corporate boilerplate, they are perhaps best seen as indicative of genuine ambition and an organisational drive to reform.

Among the many salient points raised in the Strategy were significant organisational changes, intended to boost the scale and pace of AI adoption across defence. AI would be jointly managed by a strategic-level Defence AI and Autonomy Unit (DAU) and a Defence AI Centre (DAIC). While the former sets the overall direction and policy framework, the latter oversees research and development and technical issues. In addition, the Strategy called for the upskilling of military and civil service personnel and the creation of new AI-focussed career pathways. There was emphasis on the need to build a wider and deeper collaborative network with other actors, signalling the department’s enthusiasm to invest in AI technologies. And there was an important distinction between what the MoD calls “AI Now”, technologies that are reaching maturity and can be instrumentalised as practical systems for Defence, and “AI Next”, cutting-edge research that might deliver utility in years ahead. On “AI Now”, the Ministry is eager to speed the process of experimentation, validation, and adoption of useful AI.

Lastly, of note is the Strategy’s emphasis on the continued role of human decision-makers amidst rapid technological change. It stresses the need to develop effective “human-machine teams,” and to assess and mitigate the risks of AI systems. There will, the authors note, always be human political control of the UK’s nuclear weapons.

1.1 The UK’s Definition of AI

Much discussion of AI in UK national security, as elsewhere, focusses on kit, especially weapon systems that can operate autonomously and demonstrate intelligent decision-making. That is understandable; equipment is visibly striking, as with swarming drones and crewless ships. More than that, attention often concentrates on weapon systems—the final part of the so-called “kill chain,” which delivers lethal force. That is reflected in the reams of analysis on the ethics of “killer robots.” There is, of course, some technology like that in service with the UK, and much more in the pipeline, some of which features below.

But AI in national security extends much further than this. AI is a general-purpose technology, or is becoming one. Some analysts compare its likely influence to electricity, or the internal combustion engine, but even these comparisons miss something of the quality of the technology involved. AI is better seen as a decision-making technology, or rather technologies. As such, it is applicable across a broad range of activities, many with national security implications. This makes it less visible from the outside, and so it can be challenging to analyse the extent and quality of any AI transformation. That is especially so where the information is classified, as it often is, and when change is happening at pace. Any reflections on defence AI in the UK, as elsewhere, are liable to miss important details.

In its new AI strategy, the MoD describes AI as something that can “supplement or replace human intelligence.” It borrows its overarching definition from the UK National AI Strategy, which defined AI as “machines that perform tasks normally requiring human intelligence, especially when the machines learn from data how to do those tasks” (HMG 2021b: 4). It is a broad definition, and this breadth is a mixed blessing. On the one hand, it is flexible enough to accommodate a range of underlying computer technologies, architectures, and systems, performing a wide range of tasks. On the other hand, this flexibility, and the wide range of possible defence-related activities the definition encompasses, could compromise the coherence and focus of reform efforts.
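The definition’s emphasis on learning from data marks the practical dividing line between older rule-based software and modern AI. A minimal sketch of that distinction, written in Python with scikit-learn and using a wholly invented task, features and data, might look as follows:

```python
# Hypothetical illustration: the same task, flagging suspicious radio
# contacts, solved two ways. The first encodes human judgement as a fixed
# rule; the second learns a decision boundary from labelled examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Invented features: [signal_strength, transmission_duration]
X = rng.normal(loc=[[0.3, 2.0]] * 200 + [[0.8, 9.0]] * 200, scale=0.2)
y = np.array([0] * 200 + [1] * 200)  # 0 = routine, 1 = suspicious

def hand_coded_rule(contact):
    # "Human intelligence" written down once, as fixed thresholds.
    return contact[0] > 0.6 and contact[1] > 6.0

# The machine learns the equivalent judgement from the data itself.
model = LogisticRegression().fit(X, y)

contact = np.array([0.75, 8.0])
print(hand_coded_rule(contact))                        # True
print(bool(model.predict(contact.reshape(1, -1))[0]))  # True, but learned
```

Either approach “performs a task normally requiring human intelligence”; only the second learns how to do so from data, which is where most current defence applications sit.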

Moreover, the UK definition might be critiqued as overly focused on human intelligence as a yardstick. AI can perform tasks that are far beyond human capability in some domains, as with lightning-fast exploitation of vulnerabilities in adversary computer systems. AI allows qualitatively different decision-making from the human variety: it is not so much that AI replaces or supplements human decision-making as that it does something entirely different.

1.2 AI Demands New Concepts, British Thinking Is Nascent

While much attention is paid to physical platforms, the story of AI in defence is far broader. UK doctrine distinguishes between the physical and conceptual components of “fighting power,” and the conceptual is likely to be every bit as important as the physical, with new possibilities for combined arms warfare. Work here is nascent, with little formal doctrine yet. There is existing conceptual writing on human-machine teaming, which stresses the continued importance of human judgment in military activities, a theme that emerges frequently in military discussions of defence AI (MoD 2018a). AI features in passing in other conceptual work too, as with the Royal Air Force’s (RAF) doctrine on “unmanned” aerial systems, dating from 2017, which states:

The UK does not possess armed autonomous aircraft systems and it has no intention to develop them. The UK Government’s policy is clear that the operation of UK weapons will always be under human control as an absolute guarantee of human oversight, authority, and accountability. Whilst weapon systems may operate in automatic modes there is always a person involved in setting appropriate parameters. (MoD 2017b: 14)

That remains the most explicit doctrinal statement on autonomous lethal weapons, but it is increasingly strained by technological advances over the last half decade, notably swarming. The MoD has, we will see, developed further views on AI ethics, but the essential problem of “meaningful” human control remains.

More broadly than ethics, the next generation of doctrine will need to wrestle with how best to employ AI, exploring the ways in which it might alter combined arms warfare. To inform this conceptual thinking, there is considerable experimental work underway in the armed services. Some of this is explored below. More broadly, the UK is home to a small number of specialists in industry, academia, and the wider armed forces, all engaged in thinking through the practical and conceptual dimensions of warfighting AI. Focal points include the UK’s Defence Academy and the Royal United Services Institute. Doctrine and concepts typically originate and evolve in the context of the UK’s close alliance relationships, especially with the US. There is a long history of shared intellectual endeavour, and AI thinking in both countries is developing along broadly similar lines. As yet, there is limited formal evidence of common approaches to AI, although in several areas parallels are emerging.

1.3 AI Ethics, the UK Debate

Ethical debate in the UK over AI weapons has thus far been largely limited to small groups of concerned specialists and activists, rather than the wider public. There is some evidence that this is changing. In 2021, the BBC’s high-profile annual lecture series, the Reith Lectures, featured a prominent British computer scientist, Stuart Russell, discussing some of the ethical challenges of AI, including in warfare (Russell 2021). Algorithms were implicated in the controversial awarding of school exam grades when examinations were impossible during the Covid-19 pandemic. There were concerns about personal health data collected by the National Health Service being accessed by technology companies. And stories about the surveillance capabilities of intelligence agencies periodically make the national press, as with the extensive coverage of the Snowden leaks from the US National Security Agency (NSA). At the moment, though, there is little evidence of widespread, deeply held or sustained public engagement with AI issues. It does not yet, for example, feature explicitly in polling of public concerns.

Ethical debate happens inside Defence too, including in defence legal circles. The Defence AI Strategy makes frequent mention of ethics and the need to develop AI in line with the UK’s democratic values. It notes that adversaries are likely to use AI in ways that the UK would consider unethical. And the Ministry published, in conjunction with its Strategy, a separate policy paper on the “ambitious, safe, responsible” use of AI. That paper insists:

there must be context-appropriate human involvement in weapons which identify, select and attack targets. This could mean some form of real-time human supervision, or control exercised through the setting of a system’s operational parameters. (MoD 2022b: 3)

Further, it outlines some key challenges, including AI bias and unpredictability, and it sets out some ethical principles, not least of which is “human-centricity.” The MoD has also convened an AI ethics advisory committee to offer informal input on its approach, and to act as a forum for engaging wider views. It is a serious effort to grasp some tricky issues, though the MoD certainly would not claim to have solved them. What, for example, is meant by “context-appropriate”?

As AI becomes more pervasive, perhaps it will become part of a larger public discourse in the UK. There will certainly be Parliamentary scrutiny, perhaps even a dedicated select committee. There may be scope for an AI commissioner, along the lines of the UK’s Information Commissioner. The challenge for the UK will be to maintain its current lead in AI, including in national security, while ensuring that the changes AI spurs are sympathetic to the broader norms of wider society. The MoD’s paper grasps that much, at least.

2 Developing Defence AI

AI research is currently advancing more rapidly than its adoption by defence. Earlier AI systems allowed rudimentary autonomy and were well suited to some military applications, defensive weapons like the Royal Navy’s shipboard Sea Viper anti-missile system, for example. For the last decade, research progress has been remarkable in machine learning, where computers improve at a task through repeated exposure to training data. It is this generation of AI, especially its “deep learning” subset, modelled loosely on networks of biological neurons, that is currently driving UK defence applications, whether autonomous aircraft, language translation, or automated intelligence analysis. All these applications are part of the UK approach to AI.
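To make the mechanism concrete, here is a minimal sketch of a deep-learning training loop, written in Python with PyTorch (an assumed toolchain, not one attributed to the MoD) and entirely synthetic data. The model, layers of simple artificial “neurons”, gets measurably better at its task simply by being shown the same data repeatedly:

```python
# Toy deep-learning loop on synthetic data, purely for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 8)                      # 512 made-up sensor readings
y = (X.sum(dim=1) > 0).float().unsqueeze(1)  # an invented rule to be learned

net = nn.Sequential(           # two layers of artificial "neurons"
    nn.Linear(8, 16), nn.ReLU(),
    nn.Linear(16, 1),
)
opt = torch.optim.SGD(net.parameters(), lr=0.1)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(200):       # repeated exposure to the training data
    opt.zero_grad()
    loss = loss_fn(net(X), y)  # how wrong is the network right now?
    loss.backward()            # work out how to nudge every weight
    opt.step()                 # apply the nudge; performance improves
    if epoch % 50 == 0:
        print(epoch, round(loss.item(), 3))  # loss falls as training proceeds
```

Nothing in the loop is specific to any one task; the same recipe, scaled up enormously, sits behind image recognition, translation and the other applications discussed here.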

AI now enables offensive weapon systems that can proactively identify and attack targets, and some are already in service. The UK’s Brimstone air-launched missile, from MBDA, is a good example of a weapon that can scour a search area as it flies, looking for a pre-set target-type. Newer weapon systems will be able to “loiter” over the battlefield before striking targets of opportunity selected from a pre-set list of target types. The UK has begun to experiment with such weapons, including the US-manufactured Switchblade, but it has not yet acquired these in significant numbers. Nor does it possess an offensive weapon system that can integrate reconnaissance and strike functions fully autonomously, like the Israeli Harpy. That is increasingly a matter of choice rather than necessity, though: the UK’s Protector drone, armed with Brimstones, would theoretically be able to do so, flying autonomously and parsing target information using autonomous image analysis.
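The decision logic at issue is, at its core, a comparison of machine perception against human-set parameters. The following deliberately simplified Python sketch, with invented names and thresholds bearing no relation to any fielded system, shows the shape of such logic, and of the “pre-set” human control that the RAF doctrine quoted earlier describes:

```python
# Deliberately simplified, hypothetical sketch; all names and numbers invented.
from dataclasses import dataclass

@dataclass
class Detection:
    target_type: str
    confidence: float  # detector's self-reported confidence, 0..1

# Parameters a human operator fixes before launch: which target types are
# authorised, and how confident the detector must be.
AUTHORISED_TYPES = {"self-propelled-artillery", "radar-vehicle"}
MIN_CONFIDENCE = 0.9

def engageable(d: Detection) -> bool:
    """Only detections matching the pre-set parameters may be prosecuted."""
    return d.target_type in AUTHORISED_TYPES and d.confidence >= MIN_CONFIDENCE

detections = [
    Detection("truck", 0.95),                     # high confidence, wrong type
    Detection("radar-vehicle", 0.70),             # right type, low confidence
    Detection("self-propelled-artillery", 0.93),  # meets both criteria
]
print([engageable(d) for d in detections])  # [False, False, True]
```

Everything of ethical consequence is buried in where those parameters are set and in how reliable the detector’s self-reported confidence actually is, which is one reason the debate over “meaningful” human control persists.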

Still, the UK’s autonomous combat capabilities remain, for now, somewhat rudimentary. Protector, for instance, is not designed to operate in highly contested environments. AI platforms capable of performing aerial attack and air superiority roles are still a little way off in the UK, as elsewhere. The UK is currently working on its sixth-generation fighter programme, Tempest. There will be plenty of AI involved, in parsing incoming information, for example, or in autonomously deploying defensive countermeasures to protect the aircraft. Perhaps there will be something more fundamental still: no human on board. Unclassified concept designs still conceive of Tempest as a crewed fighter with a cockpit. The MoD’s Combat Air Strategy calls for an aircraft “manned or unmanned (sic),” suggesting that Tempest may yet be crewless (MoD 2018b). In either case, the main platform may operate as part of a system alongside unmanned platforms, “loyal wingmen” of some variety yet to be determined. The UK did not have an entry in the Defense Advanced Research Projects Agency’s (DARPA) recent AI dogfighting contest and has no publicly known equivalent process underway to competitively refine AI fighter pilots.

Rather than air superiority or strike aircraft, AI systems are more immediately promising in intelligence, surveillance, and reconnaissance (ISR) roles, whether through unmanned platforms and sensors operating in space, in the air, at sea and on land, or through machine intelligence analysis of the data that they collect. Understandably, much of the detail about the autonomous capabilities of such systems is classified. Nonetheless it is reasonable to suppose that similar technologies are being employed in the UK as in the United States, not least because some of the same suppliers, platforms and systems are involved: General Atomics supplies the RAF’s Protector unmanned aerial vehicle (UAV), a variant of its Reaper platform, and Palantir, the main contractor on the US Department of Defense’s (DoD) “Project Maven,” also offers its AI-powered analytics to the MoD. With technologies like these, autonomous flight, real-time high-definition image capture, and AI labelling and processing of such imagery are eminently feasible, even if not yet operational.

Throughout British defence, a menagerie of robotic, unmanned systems is coming into being (Table 1). Some will be used for ISR, some for logistics activities, and some for strike, or for combat roles. Some will combine roles, like the strike-capable Protector. Increasingly systems will span the traditional domains of land, air, and sea, as with small unmanned helicopters like Anduril’s Ghost, used in tactical airspace for short-range ground reconnaissance, and perhaps attack. That raises some interesting organisational dilemmas: are such drones the purview of the Royal Air Force, or the Army? In field experiments, they have been used by the Royal Marines.

Table 1 UK military services experiment with different types of unmanned systems

Experimental kit is arriving so frequently that it is impossible to keep up. Newer models will undoubtedly feature in trials, even as you read this report. Nonetheless, some general observations can be made:

  • All three armed services are following a similar model—acquiring small numbers of platforms from a range of traditional and non-traditional suppliers. This includes suppliers from the UK and overseas; large, established providers; suppliers of military hardware and—perhaps more importantly for AI—of the code and systems underpinning it.

  • Alongside established defence corporations like BAE and MBDA are newer, software-focused arrivals like Adarga and Rebellion Defence, which offer AI and data-analytics services to the MoD.

  • There are giants, like Amazon, which furnish the UK MoD and intelligence agencies with the cloud computing services that securely house the data on which the AI works; mid-sized outfits, like Palantir, which provide ways of parsing that information; and comparatively tiny outfits, like Callen-Lenz, whose small UAVs have formed part of the RAF’s experimental swarming work.

There is a lot of work going on, only some of it visible to outsiders. But there is still a palpable sense of being in the foothills of more profound changes to come, driven by the national AI strategy. And whilst established customer relationships and organisational habits will continue to exert influence, there is a feeling of the kaleidoscope having been shaken.

2.1 From ‘AI Now’ to ‘AI Next’

The British AI capabilities being fielded now, even experimentally, are a long way behind the cutting edge of AI research. Some lag is inevitable: it takes time to develop and validate applications, and culture invariably intercedes to shape adoption. But the strength of the UK ecosystem is that it innovates in basic research, rather than merely attempting to instrumentalise approaches developed elsewhere. In this it has few peers beyond the US, or perhaps France or Israel. In the foreword to the Integrated Review, the Prime Minister called for the UK to remain “at least third in relevant performance measures for scientific research and innovation” (HMG 2021a: 7). The top two were not listed, but likely included China alongside the US, a somewhat debatable proposition.

UK defence AI activity is embedded within a wider context, involving the defence industry, academic research and wider, civil research and development. The UK has long been a leading actor in the development of AI, reaching back decades to the emergence of the discipline of computer science. The UK possesses world leading AI researchers, and in DeepMind (owned by Alphabet/Google) it has perhaps the outstanding AI innovation hub of the last decade. It also has, we will see, a large in-house research base, an established defence industrial sector, and several newly prominent AI companies offering services to the MoD.

The innovation that will emerge from this ecosystem is, of course, uncertain. But it is already apparent that some areas will be important. Among these, advances in unsupervised machine learning, where systems find structure in data without human-provided labels, and in learning from limited data are already underway. There will likely also be dramatic developments in computer architecture, notably in quantum computing. The British MoD recently acquired its first quantum computer, a technology that promises, for certain classes of problem, processing orders of magnitude faster than conventional, binary supercomputers (McMahon 2022). As it matures, quantum computing may lead to radical developments in AI, and also in decryption, posing a threat to network security. In its AI Strategy, the MoD suggests that the next stage of AI might help with military tasks including automated cyber defence and intelligence fusion. Further out, it identifies more challenging activities, like operational planning and “machine speed command and control” (MoD 2022a: 34–35). That will require further conceptual breakthroughs in AI, including perhaps the ability to reason conceptually, or to better model adversary intentions.
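For readers unfamiliar with the term, the “unsupervised” idea can be shown in a few lines. In this Python sketch, using scikit-learn and invented two-dimensional feature vectors standing in for, say, embedded intelligence reports, the algorithm groups the data by similarity alone; it is never told what the groups are, or even that they exist:

```python
# Unsupervised grouping of unlabelled data; all data synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Three hidden groupings, unknown to the algorithm.
reports = np.vstack([
    rng.normal([0, 0], 0.3, size=(50, 2)),
    rng.normal([3, 3], 0.3, size=(50, 2)),
    rng.normal([0, 4], 0.3, size=(50, 2)),
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(reports)
# Each block of 50 reports lands in its own cluster, with no labels supplied.
print(labels[:5], labels[50:55], labels[100:105])
```

The appeal for intelligence fusion is obvious: labelled training data is scarce and expensive, whereas unlabelled collection is abundant.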

Whilst the UK is well placed to innovate new approaches and technologies, it has some notable weaknesses. Today, the UK has no global technology giant along the lines of Baidu, Google, or Microsoft, all of which are currently leading funders of AI research. Brexit has created an economic headwind, affecting the UK’s attractiveness to inward investment and high-skill migration, especially from the EU, whilst sapping demand for UK output in both services and manufacturing. The UK’s university sector remains world-class but faces multiple challenges, whether competition from American high-technology research companies for talent, or access to EU research funding after Brexit. Another challenge is British productivity, which again lags behind peers, especially in the US and EU. The reasons are hotly debated and likely multifaceted, but the problem has unarguably proved resistant to change.

3 Organising Defence AI

New ideas are one thing—the arrival of AI has also spurred plenty of organisational change, and some degree of muddle. There is a palpable sense of being at the beginning of changes that may soon be more far reaching, especially as AI drives conceptual changes.

One challenge is that AI itself is often “domain agnostic”, capable of operating across all three traditional domains (land, maritime, air) as well as the two newer ones (space, cyber). This suggests the value of central organisation at departmental level, where we have already seen the creation of a new Defence AI Centre to champion AI, alongside the Defence AI and Autonomy Unit. Plenty of other actors within UK Defence are involved in developing approaches to AI, and the following survey is certainly not exhaustive.

One key player is Strategic Command, one of the UK’s four Front Line Commands, alongside Army, Navy, and Air Commands. Strategic Command is responsible for a range of joint capabilities and enablers. It already coordinates a range of AI-related activities and organisations, notable among which is its jHub unit, which promotes innovation by building relationships with technology suppliers, especially those with “dual use” civilian and military applications. Also in Strategic Command is Defence Digital, charged with overseeing military IT. Defence Digital has an interest in AI, for example via its work on digital twins and synthetic environments that simulate the real world. Developing secure approaches to processing huge volumes of data is an important aspect of the MoD’s aspirations for AI, but extends more broadly than AI to encompass all aspects of military information processing. Defence Digital is accordingly responsible for developing a digital “backbone”, the physical capacity to share data, while its Foundry works to support organisations in accessing and exploiting that data.

Elsewhere, organisational partnerships on AI are increasingly common. For example, the MoD has established a defence BattleLab to facilitate technological experimentation. The lab is a collaboration between the Navy, Army and two other units, the Defence Innovation Unit and the Defence Science and Technology Laboratory (Dstl), the government’s main in-house science and technology research agency.

Dstl is perhaps the key government player in defence AI research. Although much of its output is not publicly accessible, it publishes open-access guides, called “biscuit books,” on aspects of AI for wider audiences in Defence (DSTL 2021). The organisation partners with a range of other actors, some in government, as with the two Front Line Commands in the BattleLab, some outside. One important and deepening Dstl relationship is with the Defence and National Security theme at the UK’s Alan Turing Institute, with which it has recently created a Defence Centre for AI Research (DCAR) (DSTL 2022). The Turing Institute, established in 2015 by a partnership of leading universities, is the UK’s national institute for AI and data science, and engages in a wide range of basic and applied research. Clearly the goal of the DCAR is to foster connections between academic researchers in the Turing’s network and those within Dstl. Another Turing-Dstl project is joint work with the UK National Cyber Security Centre, an offshoot of the UK’s Government Communications Headquarters (GCHQ) electronic intelligence agency, to explore the employment of AI in automated cyber defence. Elsewhere in the UK’s defence apparatus, likely at the recently established interagency National Cyber Force, similar work is almost certainly underway exploring offensive autonomous cyber techniques.

Meanwhile, co-located with Dstl, but organisationally separate from it, is the MoD’s Defence and Security Accelerator (DASA). Founded in 2016, DASA works with private enterprises of all sizes in a bid to promote innovation—as with its funding of Flare Bright’s hand launched Snapshot tactical reconnaissance nanodrone, or with Marlin Submarine’s work in rapidly prototyping the Manta large submersible project (DASA 2021; Navy Lookout 2020).

4 Funding Defence AI

Broadly, there is substantial investment in British research, and it has clearly produced excellent outputs, including some cutting-edge defence equipment. The business sector accounts for a majority of overall R&D spending in the UK, much of which, of course, is not for defence. Private business funded some £20.7bn of R&D in 2019, around 54% of the total, comfortably outstripping the 27% spent by the public sector (Hutton 2021: 14–15). Defence R&D spending is also substantial, with government spending alone amounting to some £1.1bn in 2020.

Yet the UK is surprisingly weak when R&D is measured comparatively. While UK expenditure has risen steadily in nominal terms over several decades, as a percentage of GDP it has been broadly flat for many years. It is currently around 1.7% of GDP, a figure that compares unfavourably with peer countries in Europe (Germany 3.2%, France 2.2% in 2019) and North America (USA 3.1% in 2019). The incumbent government has plans to increase R&D spending to 2.4% of GDP by 2027, but even that would only be broadly in line with the OECD average.

There are also plans to increase defence R&D. In its Command Paper responding to the Integrated Review, the MoD announced its intention to rapidly expand its R&D budget over coming years. The headline figure of £6.6bn spent over four years would represent a significant increase over the £1bn or so currently spent annually. The fine-grained details of what gets spent where are currently lacking, but the Review’s emphasis on AI makes it clear what the Department’s priorities are. Again, this all sounds striking, and it is far from small potatoes. But a comparison with the United States is sobering. The current defence budget in the US projects a 9.5% annual increase in R&D spending, to some USD 130bn each year, roughly a hundred times the UK figure.

While Dstl is an obvious focus of research on AI for UK defence, there are plenty of others engaged in AI R&D as part of the national security ecosystem. One major government-funded actor is UK Research and Innovation (UKRI), a conglomeration of the UK’s funding councils that direct funding into academic research. In 2020, UKRI accounted for £6.1bn of investment, and while much of this would have little direct impact on defence, plenty would—either directly in the innovation of new technologies and applications, or indirectly in advances in basic research.

One final government initiative is noteworthy: the establishment in 2022 of a new funding body, ARIA, the Advanced Research and Invention Agency, explicitly, if loosely, modelled on DARPA, the US Defense Department’s research powerhouse. ARIA is supposed to inject UK funding with a dash of risk tolerance, the obvious implication being that other funders, notably UKRI, have been too conservative. Its explicit mission is to invest in projects with the potential for paradigm-shifting, transformative effects. This is certain to include considerable funds for AI research, though perhaps on basic research with applications some way downstream. With a projected £800m budget, ARIA will be a significant part of the innovation ecosystem; critically, though, unlike DARPA, it lacks a formal link to defence.

5 Fielding and Operating Defence AI

The individual Front Line Commands clearly have an interest in developing approaches to AI. By “pulling through” emerging technologies, these commands, perhaps at least as much as the centralised allocation of R&D budgets, will shape the eventual employment of AI systems. There is plenty of salient scholarship on cultural approaches to understanding defence, including some that reflects on the British military’s long relationship with technological innovation. A key takeaway is that ostensibly similar militaries, even allies, can employ similar technologies in rather different ways, with dramatic effects on fighting power.

5.1 The Army

The Army’s Futures Directorate is considering the implications of AI and unmanned systems for land warfare. The Directorate’s short paper on the Army’s Approach to Robotics and Autonomous Systems sketches some ideas for concept development, arguing that autonomous and remotely commanded systems will allow it to increase mass and dispersal, “whilst detecting and engaging the enemy in the most dangerous parts of the close and deep battle” (British Army 2021). There is not a huge amount of detail in the paper, and it is in the land domain, with its complex human and physical terrain, that the adoption of AI will perhaps prove most challenging. The Army is experimenting with small tactical robots, but AI’s immediate utility for the Army is likely to come in other areas: the integration of command, control and ISR activities, the domain-agnostic cyber contest for digital advantage, tactical airpower, and perhaps the longer-range coordination of indirect fires.

There is considerable debate in professional forums about the future structure of the Army, the sorts of equipment it should acquire, and how many personnel it needs. Of the services, the Army seems the most unsettled in terms of its vision for future warfare, a reflection not just of the arrival of more sophisticated AI, but of the muddled and unsatisfactory conclusion of longstanding deployments in Iraq and Afghanistan, of the rapidly evolving high-intensity conflict in Ukraine, and of the Army’s longstanding procurement difficulties with major combat systems, like the Ajax armoured fighting vehicle, Watchkeeper UAV and Warrior Infantry Fighting Vehicle (IFV). The Futures Directorate is responsible for shaping the intellectual way ahead, via its Project Wavell, which seeks a “theory of victory” fit for an era of increased autonomy and AI. And the Directorate has set the Army an ambitious goal of fielding a light brigade enhanced with robotic systems by 2025, only one year hence.

5.2 The Royal Marines

The Royal Marines (RM), meanwhile, have undertaken frequent small-scale field experimentation with advanced technology, including autonomous weapons, often as part of their Future Commando Force programme.

The RM have a clear vision of small, technologically sophisticated units operating in the maritime domain and littoral, and in so-called “grey zone” conflicts below the threshold of major combat operations. This concept has sharpened the focus of their field exercises. In one attention-grabbing exercise, a combined RM-US Special Operations Forces (SOF) unit with experimental technologies reportedly outperformed a larger United States Marine Corps (USMC) adversary force (Brown 2021). There is plenty of autonomous-capable equipment under test here, including Anduril’s small reconnaissance helicopters and loitering munitions. But it is the conceptual work as much as the kit that stands out, as when the Marines experiment with platforms operating across multiple domains simultaneously.

The other notable feature is how much of this work is being communicated publicly, including on YouTube (Royal Marines 2021). The RM is clearly keen to be seen to be at the cutting edge, perhaps because, in common with its American counterparts, also known for their conceptual agility, this small force faces continued threats to its independence and funding, and so seeks a distinctive identity.

5.3 The Royal Navy

Unsurprisingly, given its maritime focus, the Royal Navy (RN) has been working with the Royal Marines on the Future Commando Force concept. But the implications of AI systems for the Navy are likely to be broader and more profound than that. The RN’s forays into AI equipment are as yet relatively small-scale: small (relative to crewed vessels) non-nuclear-powered submersibles, and similarly small drones and surface vessels, notably autonomous minesweepers. Of course, it already employs autonomous systems in its missile defences and torpedoes. And the F-35 aircraft that fly from its two large aircraft carriers utilise AI in their information management systems.

Larger changes are inevitable. One example: DASA and Dstl are working with business and academic teams on an Intelligent Ship competition, which will explore the utility of human-machine teams across a range of maritime tasks, including engineering decisions and mission analysis (Lye 2021). The Navy itself has established several teams to work on technological innovation. In addition to its Chief Technology Officer, there is NavyX, described as an “autonomy accelerator,” and Project Nelson, which focuses on digital technologies (Royal Navy 2022). In common with the other services, there is a palpable sense of energy and enterprise, but the work remains relatively small-scale. In time, AI may challenge some more fundamental tenets of Britain’s approach to naval warfare, whether that is the focus on the carrier strike group, with crewed aviation; the role of the Navy’s nuclear submarines as the sole leg of the UK nuclear deterrent; or the way in which amphibious force is projected ashore. While there will inevitably be conceptual work underway on all these aspects and more, much remains outside the public sphere.

5.4 The Royal Air Force

The Royal Air Force might be expected to be at the forefront of efforts to innovate and instrumentalise AI. Certainly, many people’s visions of AI in national security are of an unmanned lethal drone. In reality, however, AI will make more of an immediate contribution to other aspects of air power, invariably as part of data-processing and decision-making systems that involve humans, not least because of ethical unease about full autonomy, but also because the technology remains immature.

Conceptually too, the RAF approach remains in its early stages. Extant air and space power doctrine dates to 2017 and includes no mention of AI, but there is plenty of discussion of the topic in professional air power forums and journals (MoD 2017a). The RAF’s Rapid Capabilities Office (RCO) is one in-house area of expertise, and is leading the Tempest future combat aircraft project. One of its other projects, Babelfish 7, neatly illustrates an area where AI can, and increasingly does, play an important role: integrating, filtering, and sharing all-source information, whether initially acquired from a satellite, an aircraft, or platforms on land or sea (RAF 2021). It was the RCO’s decision to abort work on Mosquito, the RAF’s initial stab at creating a viable “loyal wingman” drone to fly alongside its fifth- and sixth-generation fighters (Jennings 2022). Successor wingman projects are inevitable.

Another important RAF project is its work on experimental swarming, for which it stood up a new dedicated unit, No. 216 Test and Evaluation Squadron (Allison 2021). As with the Navy, the work on drones remains small-scale and low-key, with the squadron, the RCO and Dstl running a score of experimental exercises in the last few years. The two projects neatly encapsulate an unresolved tension for the RAF: which vision is appropriate, given the prospect of AI increasingly capable of flying aircraft in complex, contested environments? The current emphasis is on crewed aviation in exquisitely capable, incredibly expensive aircraft. An alternative is large numbers of less capable, perhaps disposable, swarming platforms that exploit mass and saturation to overwhelm air defences; a simple sketch of the underlying idea follows below. Still another vision is of a missile-centric future, with long-range, hypersonic missiles exploiting pure speed. That last vision seems the least prominent aspect of the RAF’s work, and that raises a further dilemma: how to balance limited resources against costly technologies that are relatively unproven.
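Part of what makes swarming attractive is that coordination need not be centrally commanded: each platform can follow a few local rules. The toy Python sketch below, which bears no relation to any fielded system, shows two such rules (stay with the flock, avoid collisions) producing coherent group behaviour:

```python
# Toy swarm: 30 agents, steered only by local cohesion and separation rules.
import numpy as np

rng = np.random.default_rng(2)
pos = rng.uniform(-10, 10, size=(30, 2))  # drone positions on a 2-D plane
vel = np.zeros_like(pos)

def step(pos, vel, dt=0.1):
    cohesion = pos.mean(axis=0) - pos         # pull towards the flock centre
    diff = pos[:, None, :] - pos[None, :, :]  # pairwise offsets
    dist = np.linalg.norm(diff, axis=-1) + 1e-9
    separation = (diff / dist[..., None] ** 2).sum(axis=1)  # push apart when close
    vel = 0.9 * vel + dt * (0.5 * cohesion + 2.0 * separation)
    return pos + dt * vel, vel

for _ in range(100):
    pos, vel = step(pos, vel)
# The flock settles into a loose, collision-averse cluster.
print("flock radius:", np.linalg.norm(pos - pos.mean(axis=0), axis=1).max())
```

Because no single agent is essential, losses degrade the swarm gracefully, which is precisely the mass-and-saturation logic set against the exquisite-platform alternative.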

5.5 The Intelligence Agencies

Less publicly visible than the Commands, but certainly part of the UK’s national security ecosystem, the UK’s intelligence agencies, notably GCHQ, have a longstanding interest in using AI techniques to identify useful information within the torrent of data they acquire. There is crossover with the work of the services, particularly the RAF, which has taken the lead in space power. There is overlap too with the work of Defence Intelligence, which leads on intelligence analysis. For his part, the Chief of the Secret Intelligence Service (colloquially, MI6) recently pointed to greater use of AI in his organisation. Beyond generalities, however, it is not possible to say much. Occasional insights can be gleaned from investigative journalism, like that of Barton Gellman following the Snowden leaks, which confirmed a good deal of UK-US cooperation on the collection (sometimes in bulk) and analysis of electronic information, and the use of sophisticated machine learning techniques to parse it (Gellman 2021). But the technologies such reporting details, while strikingly advanced, are already some years old. As with other areas of AI use in national security, there are important issues here of oversight, and a need to balance the priorities of the state with the rights of citizens.

5.6 Future Trends

This snapshot of defence AI activity doubtless omits many salient governmental organisations involved in developing AI technologies and concepts, whether in-house or in partnership with industry and academia. Still, it attests to both the range and the dynamism of the work underway. Many of these organisations are relatively new on the scene. More will likely follow in time, whether in response to developments in technology, or to the desire of ambitious organisations and leaders to gain a foothold in what is increasingly perceived as a critical general-purpose technology. There is certainly a sense of organisational muddle and overlap in some areas. Skills bottlenecks and shortfalls, competition for resources, bureaucratic politics, and organisational culture: all these will shape the AI ecosystem as it evolves.

UK national security is likely only at the beginning of changes that will prove more profound than the creation of bolt-on organisations, or of collaborations between different national security agencies. In common with other countries, there is clear potential for AI to drive fundamental change in the armed forces, and in wider society too. Such changes will inevitably be refracted through prevailing cultures of national security, both within and across states. The UK’s AI Strategy, like other MoD publications, hints at the changes, whether in its talk of new platforms, concepts, or personnel requirements, but does little to spell out the details.

Can we say more about what those changes might be? In part this depends on the capabilities of the technology itself, and this is fast changing, almost on a weekly basis. But some large conceptual ideas are emerging in the UK context that bear further reflection. Among these:

  • Human Decision-Maker

The enduring importance of the human decision-maker, even in an era with pervasive and increasingly sophisticated machine cognition. That reflects an ethical, and also cultural, desire to preserve “meaningful” human control. Fleshing out the tactical details will be more difficult than expressing the desire.

  • Skillsets

Related is the need to “upskill” the workforce involved in defence, and more broadly to promote AI literacy in wider society. The AI Strategy highlights the need for a skilled cadre of AI specialists in defence, and hints at the emergence of a career stream or structure that might foster that specialism within the uniformed services (MoD 2022a: 18). There are discussions about the possibility of bringing experienced mid-career professionals with relevant skills into defence; along those lines, the AI Strategy mentions the use of specialist reservists and flexible entry paths (MoD 2022a: 19). AI, though, is likely to be ubiquitous, and the government will need to strike a sensible balance between promoting AI knowledge as a generalist military competence and cultivating it as a specialism.

  • Mass and Scale

AI affords potential advantages in terms of mass, distribution and decision-speed. The head of the British Army, perhaps optimistically, has called for an army of 30,000 robots. And the head of the RAF argued that with AI, “We can have mass and technology and technological sophistication” (Mehta 2021). As in other wealthy democracies, defence inflation, driven by exquisite technology and cutting-edge designs, has shrunk the armed forces. Britain’s armed forces are smaller, both in personnel and in numbers of main platforms (tanks, surface combatants, multi-role fighter jets), than at any time in the modern era. The emerging British vision is clear: AI will enable scale while maintaining qualitative advantage. Whether that vision is feasible is another question. The practical implications will be profound, whether that is the tactical question of how to organise (and lead) a platoon of mixed humans and autonomous machines, or whether it still makes sense to organise armed forces along three traditional domain/service lines.

  • Vulnerabilities

AI systems are potentially vulnerable to electronic warfare countermeasures like jamming and spoofing, and susceptible to offensive cyber warfare. If the British vision is of a clone army of 30,000 machines, the clones had better not all feature the same Achilles heel. Then there are difficulties of assurance and trust, as when AI inherits bias from its training data. Like other states, the UK needs military AI that is reliable and trustworthy. Part of that demands AI that is sufficiently transparent that users can understand its decision-making. The sketch below illustrates, in miniature, the kind of brittleness at issue.
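Adversarial inputs are a well-documented weakness of learned models: perturbations too small for a human to notice can flip a classifier’s decision. A numpy-only toy, with an invented three-feature “classifier” standing in for something far more complex, shows the core mechanism:

```python
# Minimal adversarial-input demonstration on a toy linear classifier.
import numpy as np

w = np.array([1.0, -2.0, 0.5])  # weights of an invented, "trained" classifier
b = 0.1

def predict(x):
    return 1 if x @ w + b > 0 else 0  # 1 = "target", 0 = "not a target"

x = np.array([0.4, 0.1, 0.2])
print(predict(x), x @ w + b)  # class 1, with a small positive margin (0.4)

# Fast-gradient-style perturbation: nudge every feature a small step in the
# direction that most reduces the class score.
epsilon = 0.12
x_adv = x - epsilon * np.sign(w)
print(predict(x_adv), np.abs(x_adv - x).max())  # decision flips; change <= 0.12
```

If thousands of platforms share the same model, they share the same blind spot, the clone-army problem in miniature; and a model trained on biased data will be systematically, rather than randomly, wrong.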

6 Training for Defence AI

The defence AI strategy outlines in broad terms the need to develop the right skills for the autonomous era it envisages. So far, detailed work to flesh out that vision is lacking. There are some early indicators of more significant changes to come. For example, the military offers short courses in coding, including some sponsored by its jHub innovation team. The same unit has recently launched “innovation fellowships” for serving military officers, with the aim of fostering links across government and the private sector.

Elsewhere, AI is becoming increasingly prominent in professional military education syllabuses, whether distance learning, or residential courses. The three services have established programmes to allow competitively selected officers time in UK higher education pursuing advanced degrees or visiting fellowships—increasingly these are in AI or AI-adjacent subjects. Short professional development courses in AI-related subjects are also becoming increasingly available, as with one on data led decision-support and AI at Cranfield University’s Defence Academy campus.

7 Conclusion: An AI Transformation Debate

The UK’s defence budget is large and projected to grow further. The current government has pledged further increases over the decade, although it now faces stiff competition from other fiscal priorities. Unlike many NATO allies, the UK meets its 2% of GDP commitment, albeit with a suspicion of some deft accounting. But spending is stretched between competing priorities. The UK’s armed forces have long aspired to a full range of military capabilities, and an ability to deploy and sustain significant military power globally. This strategic culture is reflected in the Integrated Review, with its recognition of a tilt in geopolitical power towards the Indo-Pacific region, and an attendant desire to gear British military capabilities for national security challenges there. That includes a reinvigorated focus on blue-water naval capabilities, the retention of long-range strategic airlift capabilities, and some forward basing. This thinking was already reflected to some degree in the acquisition of two large conventional aircraft carriers and the F-35B jets that operate from them.

The Indo-Pacific turn in the UK’s outlook also reflects the current government’s pronounced EU-scepticism and its post-Brexit difficulties in forging a new relationship with the EU. But the war in Ukraine has challenged that worldview, refocusing attention on continental defence and creating an urgent need to restock munitions expended in Ukraine. Balancing its budget against its Pacific ambitions, its support for Ukraine, and its military modernisation programme, including its AI efforts, will be difficult.

The Ukraine conflict has also prompted further reflection in Britain on the future of warfare, not least because the UK has been one of Kyiv’s most prominent allies. Events in Ukraine are keenly studied by those charged with the modernisation of the UK’s own forces. Part of this debate is visible in professional forums and on social media. On one hand, modernisers observe, combat in Ukraine relies on high technology, including AI technologies used in intelligence gathering and analysis, or as offensive and defensive tools in the cyber domain. The fighting itself presages an era of advanced, digitised warfare: a battlefield saturated with sensors, and the extensive use of unmanned platforms, especially commercially available drones. Distributed light forces, especially those using man-portable air-defence systems (MANPADS) and guided anti-tank missiles (ATGMs), were much in evidence in the conflict’s early phases, where they proved effective against crewed aviation and Russian armour. Enthusiasts argue that AI’s tactical strength likewise lies in distribution and scale.

But on the other hand, the Ukraine war demonstrates the continued utility of some vintage equipment and longstanding concepts. Artillery has been dominant in much of the fighting, especially long-range rocket systems, which have been in service with Western militaries, including the British, for decades, just as have those MANPADS and ATGMs.

The upshot is that all sides in the British modernisation debate can take some support from events in Ukraine. Advocates for extensive AI-related reforms can argue plausibly that the combatants have not made full use of technologies that are only now beginning to emerge as viable military systems. Sceptics can point to the continued utility of existing systems and the need to hedge against the risk of trading in too much useful equipment for unproven technology. Gauging where the debate in the UK currently stands is a subjective exercise, but certainly the war has tempered the degree of enthusiasm for AI in many public national security debates, if only by drawing the focus away from what was until 2022 a prominent feature of defence-modernisation discussions.

To some degree, these tensions are not new. The UK has a long history of defence reviews in which rising defence inflation is set against the constraints of the economy, the emergence of new technologies and an uncertain geopolitical environment. Should the armed forces be more focused on global challenges, or the pressing concern of a continental threat? How far should the government of the day seek to promote domestic industry, even if the result is higher-cost, lower-quality equipment? Often the result has been belt-tightening and salami-slicing: to remain at the cutting edge, successive defence reviews have cut personnel numbers, rationalised formations, thinned out capabilities, extended equipment in-service dates, and gapped some roles, like carrier aviation and long-range anti-submarine aviation.

Too much despondency, however, would be wrong. Set against these challenges are a large and growing defence budget, strong, longstanding alliances with technologically capable partners including those in NATO, and the “Five Eyes” intelligence community. The UK has a track record of fielding highly capable, modern armed forces with global reach; of developing advanced military technologies and cooperating with allies. The UK’s armed forces are small in terms of numbers—both of personnel and major equipment. But they are high-quality, experienced, adept at operating in alliances and at adapting to new technologies.

Also weighing in the balance for the UK are the capabilities of likely adversaries, most notably China and Russia. Both states are mentioned explicitly as potential challenges in the Integrated Review, with China described as an increasingly assertive “systemic competitor” and Russia as “the most acute threat to our security.” Both challengers sharpen the perceived need for the rapid and transformative adoption of AI in defence.

Yet the conflict in Ukraine has amply demonstrated that Russia’s conventional threat was greatly exaggerated by Western analysts. Many were impressed by Russia’s modernisation efforts, by its ability to operate at reach, and by its exploitation of opportunities in the unconventional, “grey zone” of modern warfare, notably through its propaganda efforts on social media. Despite longstanding efforts to modernise its armed forces and to develop cutting-edge military technologies, Russia’s combat performance has been poor, and advanced computer technology has not been much in evidence.

China spends much more, has larger, more modernised armed forces, and has a considerable research base. But China too faces substantial challenges to developing effective AI for national security, including significant corruption, skills and equipment bottlenecks, and a centrally planned ethos in government, business, and its armed forces. Long term demographic challenges and a markedly weakening economic outlook are additional impediments to progress.

The threats to national security and wider British interests from these two countries were a significant motivation for Britain’s own efforts to modernise defence. That modernisation will almost certainly continue, regardless of the authoritarian states’ evident difficulties. Analysts seem predisposed to emphasise the worst-case scenario when it comes to adversaries.

Compared to both these potential adversaries, however, the UK’s AI-defence prospects look bright. British scientists are at the forefront of AI research and have a long history of innovation. And parts of the British state have a longstanding track record of using machine learning techniques for national security. The two largest challenges for the UK will be developing AI that reflects British values (as the MoD acknowledges); and keeping pace with its vastly better resourced ally, the United States.