Technological and scientific progress, especially the rapid development of information technology (IT), plays a crucial role in peace and security issues.[1] Artificial Intelligence (AI) is one example. AI is a sub-discipline of computer science dealing with computer systems capable of performing tasks that require human intelligence.[2] According to James Johnson, PhD, Lecturer in Strategic Studies in the Department of Politics & International Relations at the University of Aberdeen and author of the book Artificial Intelligence and the Future of Warfare, the hype around AI has made it easy to overstate the opportunities and challenges posed by its development and deployment in the military sphere. The author argues that “speculations about super intelligent AI or the threat of superhuman AI to humanity” are entirely disconnected from today’s AI capabilities. The book aims to address this problem by distinguishing “proven capabilities and applications from mere speculation”, with a strong focus on the challenges AI poses to strategic stability and nuclear deterrence, and on how AI might influence nuclear weapon systems. The author concludes with implications and policy recommendations on how states could manage the escalatory risks posed by AI.

The book is organized in three parts comprising eight chapters:

Part 1 asks how and why AI could become a force for strategic instability in the post-Cold War system. It serves as the theoretical framework of the book. Chapter 1 defines and categorizes the current state and evolution of AI and AI-enabling technologies and their possible implications in the military arena. It highlights the centrality of machine learning (ML) and autonomous systems for understanding these implications. Chapter 2 describes the notion of military AI as a “natural manifestation of an established trend in emerging technology”. The chapter argues that the implications of AI could be profound for the central pillars of nuclear weapon systems, even if AI does not become the next revolution in military affairs.

Part 2 focuses on the role of AI technologies within the strategic competition between China and the US. It highlights the assumption that “technological innovation rarely causes the military balance to shift. Instead, how and why militaries employ a technology usually proves critical”. Accordingly, chapters 3 and 4 ask how the US-China strategic competition is playing out in this field and what the possible impact of AI-augmented technology is on military technology among great military powers. They further ask why these technologies are relevant to the US and how the US responds to China’s push for technological hegemony. Chapter 3 argues that “the strategic competition playing out within a broad range of dual-use AI-enabling technologies will narrow the technological gap separating great military powers”. Chapter 4 highlights that, under crisis conditions, deep-seated mistrust, misunderstanding, and tension between China and the US might intensify. It argues that “divergent US-China thinking on the escalation risks of co-mingling nuclear and non-nuclear capabilities will exacerbate the destabilizing effects caused by the fusion of these capabilities with AI applications”.

Part 3 presents four case studies (chapters 6–8) that consider the escalation risks associated with AI; it forms the empirical core of the book. The case studies consider 1) the implications of these systems for the survivability and credibility of states’ nuclear deterrent forces, 2) possible strategic operations and, against this backdrop, the new challenges posed by drone swarms, 3) the ways in which AI-augmented cyber capabilities could be used to compromise adversaries’ nuclear assets, and 4) the impact that AI systems used by military commanders could have on the strategic decision-making process. This part shows that emerging technologies will improve the ability of militaries, e.g., to locate, target, and destroy adversaries’ nuclear deterrent forces. At the same time, they could influence strategic decisions involving nuclear weapons. The distinction between the impact of AI at the tactical level and at the strategic level is therefore not binary. The case studies also show that its use could be strategically destabilizing and that “future interactions of AI-powered cyber capabilities will increase escalation risks”.

The core argument of the book is that “military-use AI is fast becoming a principal potential source of instability and great-power strategic competition”. Accordingly, the future safety of military AI systems is a technical, political, and human challenge. The goal of the book is to clarify “some of the consequences of military-use AI’s recent developments in strategic stability between nuclear armed states and nuclear security”.

Whereas the book provides a good, not “too techy” introduction to AI and ML, in our perception the author seems to overestimate AI’s influence on nuclear weapons themselves. So far, announcements regarding the modernization of nuclear forces and research proposals suggest that AI will not be integrated directly into the weapons but will rather be used, e.g., for the data analysis and prediction required by nuclear early-warning systems. Another more probable field of application is the use of AI for simulations of nuclear detonations and the possible subsequent enhancement of nuclear warheads and their delivery systems. The book does not elaborate on or discuss such aspects as part of its power-play arguments and instead warns about “the early adoption of unsafe, unverified, and unreliable AI technology”, which, from a technical perspective, is not fully comprehensible: an “unreliable AI” is an AI that simply does not work in terms of its accuracy, prediction quality, etc., and will probably not be utilized at all. And given that the states on which the book focuses, the US, Russia, and China, already have enough nuclear weapons for total mutual destruction, it remains vague what advantage or enhancement military AI could provide for nuclear weapons that would bring this situation out of its current doomful balance. The author therefore correctly highlights that the race for AI in military systems is often a cognitive game in which perception is more important than actual capabilities.

In the end, the book provides a good and comprehensive perspective on the nuclear power play, on strategic stability, and on how both are shaped by new technological developments. But given that AI already has, and probably will continue to have, a much stronger influence on other weapon systems such as (lethal) autonomous weapons, and on already ongoing applications of AI in, e.g., military governance, recruitment management and analysis, or logistics and repair prediction and management, there are quite a few topics left for upcoming publications or later editions. Finally, as AI, like cyber, is relatively cheap in terms of the required financial, technological, and personnel resources, it would be interesting to analyse whether AI can be a force enabler for currently smaller powers, at least in specific military branches, and how this might affect international stability beyond the nuclear forces.