1 Introduction

Large-scale threats to humanity jeopardize the persistence and flourishing of human civilizations. Some of these threats arise from natural events. Supervolcanic eruptions and the impacts of large asteroids inject dust that may linger in the atmosphere for months or years—shading sunlight, lowering atmospheric temperatures, and interfering with photosynthesis—thereby devastating civilizations and ultimately threatening many biological species with extinction. Other large-scale threats are entirely man-made. Extended regions of our planet may become unfit to host human life as a consequence of nuclear war or of the worsening of the climate crisis caused by anthropogenic emissions of greenhouse gases.

The study of large-scale threats to humanity and their policy implications is the thematic focus of various academic and non-academic bodies. These include the Future of Humanity Institute at the University of Oxford, the Centre for the Study of Existential Risk at the University of Cambridge, and the Future of Life Institute. The Nuclear Threat Initiative is more specifically focused on reducing nuclear and biological threats. And the Bulletin of the Atomic Scientists provides information to reduce large-scale threats arising from nuclear weapons, climate change, misuse of biotechnologies, and AI.

This chapter provides a concise introduction to AI’s actual and potential impact on two large-scale threats to human civilization: the climate crisis and the risk of nuclear war. AI is having an increasing and often double-edged impact on these man-made threats. In keeping with the broad inspiring principles of Digital Humanism, the chapter identifies responsibilities of AI scientists and proposes ethically motivated actions that AI stakeholders can undertake to protect humanity from these threats and to reduce AI’s role in their buildup.

The chapter is organized as follows. Section 2 reviews AI’s double-edged impact on the climate crisis and good practices that AI stakeholders can undertake to reduce the carbon footprint of this technology. Section 3 is concerned with the potential impact on nuclear deterrence postures of deepfakes and AI-powered autonomous systems. Section 4 scrutinizes proposals to use AI for nuclear command and control in the light of opacities, fragilities, and vulnerabilities of AI information processing. Section 5 points to major ethical underpinnings for actions that AI stakeholders can undertake to reduce AI’s role in the buildup of large-scale threats to humanity. Section 6 concludes.

2 AI and the Climate Crisis

Climate data are being used to build AI models for climate warming mitigation and adaptation by means of machine learning (ML) methods. At the same time, however, some AI applications are paving the way to larger emissions of greenhouse gases (GHG), thereby contributing to the buildup of higher global temperatures.

Basically, AI is climate agnostic and is having a dual impact on the climate crisis (Dhar, 2020). On the one hand, AI helps climate scientists develop better climate models and prediction tools, thereby supporting the scientific community in counteracting climate warming. Moreover, AI applications are playing an increasing role in climate warming mitigation by learning to identify and reward thriftier energy consumption patterns in manufacturing, transportation, logistics, heating, and other sectors characterized by high levels of GHG emissions (Rolnick et al., 2019). Similar roles for AI have been proposed in support of the European Green Deal (Gailhofer et al., 2021). The United Nations AI for Good platform helps solve technological scaling problems for AI-based climate action. Technologically more advanced countries have a related role to play, by facilitating access of less technologically advanced countries to AI resources for climate warming mitigation and adaptation (Nordgren, 2023).

On the other hand, AI applications are in use that pave the way for larger GHG emissions. Exemplary cases are AI models facilitating the extraction, refinement, and commercialization of fossil fuels. According to a 2020 Greenpeace report, major oil and gas companies take advantage of AI to improve the efficiency of their industrial pipelines. Models trained on data from seismic experiments and other geological data guide the search for new oil and gas wells. Additional AI applications improve the efficiency of fossil fuel transportation, refining, storage, and marketing. By improving the efficiency of these processes, oil and gas companies aim to make larger quantities of fossil fuels available, eventually encouraging their consumption by decreasing their unit price. Finally, the development and delivery of these AI models by major AI firms jar with their pledges to achieve carbon neutrality soon (Greenpeace, 2020).

Ultimately, AI technologies afford protean tools to improve efficiency. However, these tools can be used just as readily to mitigate as to exacerbate the climate crisis. Given their climate agnostic character, it is chiefly a matter of collective choice to direct the use of AI toward climate warming mitigation.

In addition to individual AI applications, one has to consider the overall impact on climate change of AI as a research, industrial, and commercial area. Attention to this issue developed in the wake of alarming estimates of the electrical energy consumption attributed to other information-processing activities and their hardware infrastructures. Bitcoin transactions consume roughly as much electrical energy per year as a country like Argentina (CBECI, 2022). Data centers and data transmission networks are responsible for about 1% of worldwide energy-related GHG emissions (IEA, 2022).

According to an early estimate that is more specifically concerned with AI, training some large AI models for natural language processing (NLP) has approximately the same carbon footprint as five average cars over their lifecycles (Strubell et al., 2019). This estimate was later found to be excessive (Patterson et al., 2022). But environmental concerns about the overall carbon footprint of AI were not thereby put to rest. Indeed, up to 15% of Google’s total electricity consumption between 2019 and 2021 is attributed to the development and use of the company’s AI models (Patterson et al., 2022). Moreover, only 10% of commercial AI electricity consumption is expended on training; the remaining 90% supports statistical inference and prediction by already trained models (Patterson et al., 2021).

It is not clear how these consumption patterns will develop in the future. An alarming consideration is that electricity consumption is sensitive to the size of AI models, and the goal of achieving more accurate inference and prediction has mostly been pursued by developing ever larger AI models based on deep neural network (DNN) architectures. The size of these networks is usually measured by the number of weighted connections between their neural units. In the NLP area, the number of these parameters steadily increased from the roughly 350 million parameters of a 2018 language model in the BERT family, to the 175 billion parameters of GPT-3 in 2020, and on to the reportedly trillion-scale parameter count of GPT-4 in 2023. Researchers and engineers operating in other AI application domains are similarly incentivized to pursue improved accuracy by means of ever bigger models. Clearly, the sum of these design choices contributes to growing electricity demand and is likely to enlarge the carbon footprint of AI research and industry. Finally, one should bear in mind that AI applications that reduce the carbon footprint of services and processes can indirectly encourage a more extensive use of those services and processes. These rebound effects, admittedly difficult to appraise precisely, may considerably increase AI’s overall carbon footprint (Dobbe & Whittaker, 2019).
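To make the link between model scale, electricity use, and emissions more concrete, the following back-of-the-envelope sketch estimates the footprint of a single training run from hardware power draw, training time, data center overhead (PUE), and grid carbon intensity. All figures are illustrative assumptions rather than measurements of any particular model; the point is only that the same computation can have an order-of-magnitude smaller footprint when run on a low-carbon grid.

```python
def training_footprint(num_accelerators: int,
                       avg_power_watts: float,
                       training_hours: float,
                       pue: float,
                       grid_kgco2_per_kwh: float):
    """Rough estimate of the energy (kWh) and emissions (kg CO2e) of one training run."""
    energy_kwh = num_accelerators * avg_power_watts * training_hours * pue / 1000.0
    return energy_kwh, energy_kwh * grid_kgco2_per_kwh

# Purely hypothetical scenario: 512 accelerators drawing 300 W on average for
# three weeks, in a data center with PUE 1.1, on two hypothetical grids.
energy, co2_fossil = training_footprint(512, 300, 21 * 24, 1.1, 0.60)
_, co2_low_carbon  = training_footprint(512, 300, 21 * 24, 1.1, 0.05)

print(f"Energy per training run: {energy:,.0f} kWh")
print(f"Emissions on a fossil-heavy grid (0.60 kg CO2e/kWh): {co2_fossil / 1000:,.1f} t CO2e")
print(f"Emissions on a low-carbon grid   (0.05 kg CO2e/kWh): {co2_low_carbon / 1000:,.1f} t CO2e")
```

The same arithmetic underlies the good practices listed below: more efficient architectures and processors shrink the energy term, while greener and better-managed data centers shrink the PUE and carbon intensity factors.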

These various data and trends are still inadequate to achieve a precise picture of AI’s electricity consumption and carbon footprint. But imprecise contours offer no reason to deny the existence of a problem or to justify inaction by AI stakeholders. These include academic and industrial AI scientists, producers and providers of hardware systems for training and running AI models, operators of the data centers hosting the required hardware infrastructure, and electrical energy producers and suppliers. Much can be done by these various actors to curb electricity consumption and correspondingly reduce AI’s carbon footprint.

Ultimately, ensuring electricity supplies from renewable energy sources would drastically reduce AI’s carbon footprint. As of 2020, however, almost two-thirds of global electricity came from fossil fuel sources (Ritchie et al., 2022). Many years of sustained effort will presumably be needed to reverse this proportion and ensure a largely “green” electricity supply on a worldwide scale. In the meantime, without passively waiting for these developments to occur, AI researchers and commercial actors are in a position to pursue good practices that contribute to reducing electricity consumption and curbing AI’s carbon footprint (Kaack et al., 2020; Patterson et al., 2022; Verdecchia et al., 2023):

  (i) Select energy-efficient architectures for AI models.

  (ii) Use processors optimized for AI model training—e.g., graphics processing units (GPUs) or tensor processing units (TPUs).

  (iii) Perform the required computations at data centers that tap cleaner electricity supplies and are more energy-efficient.

Item (iii) points to actions that one may undertake across all sectors of information and communications technologies. Item (i), and to some extent (ii), points to actions that are more specific to the AI sector. None of these good practices depend on whether or how fast the electricity supply mix will become greener on a global scale.

In addition to AI scientists and firms, research and professional associations can play a distinctive role in fostering a greener AI. Indeed, AI associations may promote a new idea of what counts as a “good” research result within AI scientific and professional communities, by modifying the entrenched criterion of evaluating AI models solely in terms of their accuracy (Schwartz et al., 2020). To correct this orientation, one may introduce research rewards and incentives to strive for combined energy efficiency and accuracy of ML methods and downstream AI models. Competitions based on metrics that prize this compound goal might be launched too, in the wake of a long tradition of AI competitions around chess, Go, poker, RoboCup, and many other games. This new approach to what counts as a “good” research result sits equally well with time-honored AI research programs aiming to understand and implement intelligent systems using only bounded resources. These actions by research and professional associations may prove effective independently of the required changes in electricity production and supply.
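As an illustration of how such a compound criterion might be operationalized, one could report accuracy alongside measured energy use and rank submissions by an explicit trade-off score. The sketch below is offered only as an illustration, not as an established community metric; the functional form and the weighting parameter alpha are arbitrary illustrative choices.

```python
import math
from dataclasses import dataclass

@dataclass
class Submission:
    name: str
    accuracy: float     # e.g., test-set accuracy in [0, 1]
    energy_kwh: float   # measured energy for training and evaluation

def efficiency_adjusted_score(s: Submission, alpha: float = 0.1) -> float:
    """Hypothetical leaderboard score: reward accuracy, penalize (log-scaled) energy use."""
    return s.accuracy - alpha * math.log10(1.0 + s.energy_kwh)

leaderboard = [
    Submission("large_model",   accuracy=0.912, energy_kwh=80_000),
    Submission("compact_model", accuracy=0.895, energy_kwh=900),
]
for s in sorted(leaderboard, key=efficiency_adjusted_score, reverse=True):
    print(f"{s.name:>13}: score {efficiency_adjusted_score(s):.3f} "
          f"(accuracy {s.accuracy:.3f}, {s.energy_kwh:,.0f} kWh)")
```

Under this hypothetical score, the compact model outranks the slightly more accurate but far more energy-hungry one, which is exactly the kind of incentive shift envisaged above.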

To sum up, the AI research and commercial enterprise involves multiple stakeholders. These actors can undertake actions to reduce both AI’s electricity consumption and its carbon footprint. Some of these actions are already identifiable from the standpoint of an admittedly imperfect knowledge of AI’s carbon footprint and its causes. AI scientists and engineers may develop more energy-efficient AI models, choose more efficient hardware, and use greener data centers. The boards of AI scientific and professional associations may develop new ideas about what makes a “good” AI result, introducing suitable incentives to reward energy efficiency. On the whole, since AI technologies are climate agnostic, it is an ethical, social, and scientific responsibility of AI stakeholders and political agencies alike to support the application of AI for climate warming mitigation and to counter its use in exacerbating—directly or indirectly—the climate crisis.

3 AI and Nuclear Deterrence

Since the end of World War II, nuclear war has loomed over humanity as a man-made large-scale threat. By signing the Treaty on the Non-Proliferation of Nuclear Weapons (NPT), which entered into force in 1970, the major nuclear powers pledged to prevent the proliferation of nuclear weapons and to eventually achieve worldwide nuclear disarmament. More than 50 years later, however, the number of states possessing nuclear arsenals has increased, no substantive progress toward nuclear disarmament has been made, and deterrence policies are still the main instrument on which nuclear powers rely to prevent a nuclear holocaust.

According to nuclear deterrence theory, the possession of a sufficiently large and efficiently deployable nuclear arsenal holds out the threat of a retaliatory counterattack and therefore discourages other nuclear states from a first use of nuclear weapons. Major weaknesses of deterrence policies and their presuppositions have long been identified and investigated. But new weaknesses are now being exposed by AI technologies for the development of autonomous systems and for the generation of deepfakes.

To begin with, let us consider the impact of AI-enabled autonomous systems on deterrence postures. US nuclear retaliation capabilities are based on land, air, and sea platforms for nuclear weapon systems. These comprise silos of land-based intercontinental ballistic missiles, submarines armed with SLBMs (submarine-launched ballistic missiles), and aircraft carrying nuclear weapons. Unmanned vessels for anti-submarine warfare may erode the sea leg of these deterrence capabilities. These vessels, whose autonomous navigation capabilities are powered by AI technologies, may identify submarines as they emerge from port or pass through narrow maritime chokepoints, and trail them thereafter for extended periods of time.

An early example of autonomous vessels for submarine identification and trailing is the US surface ship Sea Hunter. Originally prototyped in the framework of a DARPA anti-submarine warfare program, the Sea Hunter is now undergoing further development by the US Office of Naval Research, to perform autonomous trailing missions lasting up to 3 months. Another case in point is the autonomous extra-large unmanned undersea vehicle (XLUUV) Orca, manufactured by Boeing to carry out undersea operations including anti-submarine trailing missions and warfare. Similar functionalities are widely attributed to the Russian autonomous submarine Poseidon. And China is similarly reported to have a program for the development of XLUUVs.

According to a British Pugwash report, “…long-endurance or rapidly-deployable unmanned underwater vehicles (UUV) and unmanned surface vehicles (USV), look likely to undermine the stealth of existing submarines” (Brixey-Williams, 2016). And according to a more recent report by the National Security College of the Australian National University, “oceans are, in most circumstances, at least likely and, from some perspectives, very likely to become transparent by the 2050s.” In particular, submarines carrying ballistic missiles will be “detected in the world’s oceans because of the evolution of science and technology” (ANU-NSC, 2020, p. 1). Thus, by undermining the stealth of submarine retaliatory forces that are otherwise difficult to detect and neutralize, these AI-enabled autonomous vessels are expected to significantly erode sea-based nuclear deterrence.

Nuclear deterrence is additionally weakened by AI systems that generate synthetic media dubbed deepfakes. Generative adversarial networks (GANs) are used to fabricate increasingly realistic and deceitful videos of political leaders. In June 2022, the mayors of Berlin, Madrid, and Vienna—without realizing they were being deceived—had video calls with a deepfake of Kyiv mayor Vitali Klitschko (Oltermann, 2022). Deepfakes of political leaders can induce misconceptions about their personality, behaviors, political positions, and actions. Deepfake videos of leaders of nuclear powers, such as Barack Obama, Donald Trump, and Vladimir Putin, have been widely circulated. By fueling doubts about those leaders’ rationality and consistency, such videos jeopardize the effectiveness of nuclear deterrence policies, which crucially rest on the credibility of second-strike threats to deter a first use of nuclear weapons.

4 Militarization of AI and Nuclear Defense Modernization

Proposals to use AI within nuclear defense systems are framed within a broader race toward the militarization of AI. The US National Security Commission on Artificial Intelligence recommended integrating “AI-enabled technologies into every facet of warfighting” (NSCAI, 2021). One finds a strikingly similar call in China’s “New Generation Artificial Intelligence Development Plan,” which underscores the need to “[p]romote all kinds of AI technology to become quickly embedded in the field of national defense innovation” (China’s State Council, 2017). More bluntly, Russian President Vladimir Putin claimed that whoever becomes the leader in AI will rule the world (Russia Today, 2017).

In the framework of these comprehensive AI militarization goals, the National Security Commission on Artificial Intelligence recommended that “AI should assist in some aspects of nuclear command and control: early warning, early launch detection, and multi-sensor fusion” (NSCAI, 2021, p. 104, n. 22). This recommendation was made on the grounds that increasingly automated early warning systems would reduce the time it takes to acquire and process information from disparate sensory sources. Accordingly, human operators might be put in a position to achieve the required situational awareness more rapidly, buying more time for downstream decision-making. From a psychological standpoint, these envisaged benefits would alleviate the enormous pressure placed on officers in charge of evaluating whether a nuclear attack is actually in progress. One cannot ignore, however, the significant downsides of this proposal. Indeed, one can hardly expect AI to deliver these benefits without introducing AI-related weaknesses and vulnerabilities into the nuclear command, control, and communication (NC3) infrastructure.

To begin with, let us recall a famous and enduring lesson about the risks that may arise from efforts to automate nuclear early warning. This lesson is afforded by the false positive of a nuclear attack signaled by the Soviet early warning system OKO on September 26, 1983. OKO mistook sensor readings of sunlight reflecting off clouds for the signatures of five incoming intercontinental ballistic missiles (ICBMs). Colonel Stanislav Petrov, the duty officer at the OKO command center, correctly conjectured that the early warning system had signaled a false positive and refrained from reporting the event higher up the command hierarchy. Commenting years later on his momentous decision, Petrov remarked that “when people start a war, they don’t start it with only five missiles” (Arms Control Association, 2019). Petrov’s appraisal of the system’s response was the outcome of counterfactual causal reasoning and an open-ended understanding of military and political contexts. Clearly, these mental resources exceeded OKO’s narrow appraisal capabilities. But the lesson extends to the present day: counterfactual causal reasoning and the understanding of broad contextual conditions remain beyond the capabilities of current AI models.

Additional limitations of state-of-the-art AI technologies equally bear on a critical analysis of the NSCAI recommendation. AI models usually need vast amounts of training data to achieve good performance. Thus, the scarcity of real data about nuclear launches may prevent proper training of the desired AI early warning system. Suppose, for the sake of argument, that this bottleneck is overcome—e.g., by means of innovative training procedures involving simulated data—so that the resulting AI model achieves “satisfactory” classification accuracy. Even in this scenario, which is favorable to the NSCAI recommendation, the occurrence of errors cannot be excluded. Indeed, the statistical nature of AI decision-making intrinsically allows for misclassifications. No matter how infrequently such misclassifications occur, a false positive of a nuclear attack is a high-risk event, as it may trigger an unjustified use of nuclear weapons.
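A simple calculation illustrates why even “satisfactory” accuracy is not reassuring in this setting. The sketch below assumes, purely for illustration, that an early warning classifier screens sensor data once per minute, that its false positive rate is one in ten million per evaluation, and that evaluations are statistically independent; none of these figures describes any real system.

```python
def prob_at_least_one_false_alarm(fp_rate_per_eval: float, num_evals: int) -> float:
    """P(at least one false positive) over independent evaluations."""
    return 1.0 - (1.0 - fp_rate_per_eval) ** num_evals

evals_per_year = 60 * 24 * 365        # one evaluation per minute (assumption)
fp_rate = 1e-7                        # per-evaluation false positive rate (assumption)

p = prob_at_least_one_false_alarm(fp_rate, evals_per_year)
print(f"Evaluations per year: {evals_per_year:,}")
print(f"Probability of at least one false alarm per year: {p:.1%}")
```

Even under these optimistic assumptions, the yearly probability of at least one false alarm is on the order of a few percent, an uncomfortable figure given that a single false positive may set a retaliatory decision process in motion.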

In view of the high risk associated with false positives of nuclear attacks, human decision-makers must carefully verify the responses of AI-powered early warning systems. But this verification takes time, possibly offsetting the additional time that AI-powered automation is hoped to buy for decision-makers. In this verification process, temporal constraints are just one of the critical factors to consider. Another crucial factor is automation bias, that is, the tendency to over-trust machine responses while downplaying contrasting human judgment. Detected across a variety of automation technologies and application domains, automation bias has been the cause of multiple accidents. Hence, human operators must be trained to counteract automation bias in their interactions with AI-powered early warning systems. However, effective training of this sort is hindered by the black-box character of much AI information processing and the related difficulty of explaining its outcomes.

A major interpretive difficulty arises from the fact that many AI systems process information sub-symbolically, without operating on humanly understandable declarative statements and without applying stepwise logical or causal inference (Pearl & Mackenzie, 2019). Moreover, the statistically significant features of the input data that AI models learn to identify and use may differ markedly from the features that humans identify and use to carry out the same problem-solving tasks. Because of these differences between human and machine information processing, AI learning systems turn out to be opaque and difficult to interpret from human perceptual and cognitive standpoints.

These interpretive hurdles propagate to the explanation of responses provided by AI systems. To detect and counteract machine errors, nuclear decision-makers should be put in a position to understand why an AI-powered early warning system classified sensor data in a certain way. In the absence of surveyable and transparent stepwise logical, causal, or probabilistic inference on the part of the system, human operators are hard-pressed to work out an adequate explanation for themselves. Alternatively, one may try to endow the AI-powered early warning system with the capability of answering why-questions posed by human operators, with explanations cast in terms that are cognitively accessible to them. The achievement of this overall goal characterizes the research area called eXplainable AI (XAI), which addresses the challenging problem of mapping AI information processing into cognitive and perceptual chunks that are understandable to humans, and of assembling on this basis “good” explanations for AI decisions, predictions, and classifications. However, pending significant breakthroughs in XAI, one cannot but acknowledge the difficulty of fulfilling the explainability condition that nuclear decision-makers interacting with AI-powered early warning systems must rely on to achieve situational awareness.
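To give a minimal flavor of what one family of XAI techniques, feature attribution, looks like, the toy sketch below decomposes the output of a simple linear alert model into per-feature contributions. The model, the feature names, and all numbers are hypothetical illustrations; explaining the deep, nonlinear models discussed in this chapter is far harder, and that is precisely where current XAI methods fall short.

```python
import numpy as np

# Toy linear "alert score" model over hypothetical, human-readable features.
# Weights, bias, and the sensor reading below are made up for illustration only.
features = ["infrared_flash", "radar_track_count", "trajectory_match", "solar_glare_index"]
weights  = np.array([2.1, 1.4, 3.0, -0.8])
bias     = -4.0
reading  = np.array([0.9, 0.1, 0.2, 0.95])

logit = float(weights @ reading + bias)
alert_probability = 1.0 / (1.0 + np.exp(-logit))

# For a linear model, each feature's contribution to the score is simply w_i * x_i,
# which can be reported to an operator in human-readable terms.
contributions = weights * reading
for name, value in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>18}: {value:+.2f}")
print(f"{'bias':>18}: {bias:+.2f}")
print(f"alert probability: {alert_probability:.2f}")
```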

Additional risks arising from the use of AI systems in nuclear early warning flow from vulnerabilities of AI models developed with ML methods. Adversarial machine learning (Biggio & Roli, 2018) reveals unexpected and counterintuitive mistakes that AI systems make and that human operators would readily avoid. By altering the illumination of a stop sign on the street—in ways that are hardly perceptible to human eyes—an AI system was induced to classify it as a 30-mph speed limit sign (Gnanasambandam et al., 2021). A human operator would not make such mistakes, for the small adversarial input perturbations inducing the machine to err are hardly noticeable by the human perceptual system. Additional errors, induced under more controlled laboratory conditions, are directly relevant to military uses of AI systems. Notably, visual perception systems based on DNN architectures were found to mistake images of school buses for ostriches (Szegedy et al., 2014) and 3-D renderings of turtles for rifles (Athalye et al., 2018). Clearly, such mistakes are potentially catastrophic in a wide variety of conventional warfare domains, for normal uses of school buses are protected by International Humanitarian Law, and someone carrying a harmless object may be mistakenly taken by an AI system to be wielding a weapon (Amoroso & Tamburrini, 2021).
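The following minimal sketch shows the core mechanism behind many such attacks, an L-infinity-bounded perturbation in the spirit of the fast gradient sign method (FGSM), applied to a toy logistic classifier. It is not the procedure used in the cited studies; it only illustrates how changes that are tiny in every input dimension can flip a model’s classification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy logistic classifier with fixed, randomly drawn (hypothetical) weights.
w = rng.normal(size=100)
b = 0.0

def p_class1(x: np.ndarray) -> float:
    """Probability assigned to class 1 by the toy model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# An input the model confidently assigns to class 1 (by construction).
x = 0.02 * np.sign(w) + rng.normal(scale=0.01, size=100)
print(f"clean input:       P(class 1) = {p_class1(x):.3f}")

# FGSM-style attack: for this model the gradient of the class-1 logit w.r.t. x
# is just w, so stepping each dimension by -epsilon * sign(w) lowers the logit
# as much as any perturbation bounded by epsilon in every dimension can.
epsilon = 0.05
x_adv = x - epsilon * np.sign(w)
print(f"adversarial input: P(class 1) = {p_class1(x_adv):.3f}")
print(f"largest per-dimension change:  {np.max(np.abs(x_adv - x)):.3f}")
```

Run as written, the toy model’s confidence in class 1 collapses even though no single input dimension changes by more than 0.05, mirroring on a small scale the perturbation attacks described above.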

Let us take stock. There seems to be undisputed consensus on the principle that only human beings—and no automated system—ought to authorize the employment of nuclear weapons. However, one cannot take at face value even the more modest recommendation to use AI in nuclear early warning. Indeed, one cannot exclude counterintuitive and potentially catastrophic errors made by these systems, of the same sort that adversarial machine learning has highlighted in other critical application domains. More generally, any suggested use of AI in NC3 stands in need of a thorough critical discussion, in light of the opacities, fragilities, and vulnerabilities of AI information processing.

5 Responsibilities of AI Stakeholders and Large-Scale Threats to Humanity

It was pointed out above that AI stakeholders can undertake multiple actions to reduce both AI’s electricity consumption and its carbon footprint, in addition to restraining AI applications that exacerbate the climate crisis and fostering applications of AI technologies for climate warming mitigation and adaptation. Moreover, AI stakeholders can raise public awareness of threats to nuclear stability arising from actual or potential developments in AI, promote international scientific and political dialogue on these threats, and propose and support the implementation of trust- and confidence-building measures among nuclear powers to avert nuclear risks related to the militarization of AI technologies and systems.

Normative ethics provides substantive underpinnings for these various actions. To begin with, prospective responsibilities of AI stakeholders to shield humanity from man-made large-scale threats flow from the obligation to do everything reasonable to protect people’s right to a dignified life. Additional obligations in the framework of duty ethics (also known as deontological ethics) flow from the possibility that large-scale threats may even lead to human extinction (Bostrom, 2002). Indeed, Hans Jonas argued for the responsibility to protect the persistence of humanity in the wake of Kant’s idea of what constitutes human dignity. Jonas pointed out that—for all one knows today—only members of the human species are moral agents and bearers of moral responsibilities. One may regard other sentient beings inhabiting planet Earth as bearers of moral rights along with human beings, but none of them has moral responsibilities, and so none can be regarded as a genuine moral agent whose actions admit of praise or blame. Under this view, moral agency will disappear from planet Earth if humanity goes extinct. Jonas offers the preservation of this unique and ethically crucial property of our world as the ground for a new imperative of collective responsibility: “Act so that the effects of your action are compatible with the permanence of genuine human life” (Jonas, 1984). In particular, one ought to refrain from building man-made threats to the persistence of human civilizations and ought to reduce existing threats of this kind.

Jonas emphasized the unlimited temporal horizon of this imperative: one must avoid technological actions that would lead to the extinction of genuine human life at any time in the future. By contrast, other obligations—notably intragenerational solidarity and intergenerational care duties—have a limited temporal horizon and fail to provide moral reasons to protect the lives of distant generations. However, these shorter-term obligations provide additional ethical motivations to reduce large-scale threats that may soon materialize. Without the implementation of effective nuclear disarmament policies, nuclear conflict is a standing threat to present generations. And the best available models of climate change predict that disruptive climate warming effects may be felt within a few decades in the absence of effective countermeasures. Thus, in addition to Jonas’s categorical imperative, intragenerational solidarity bonds and intergenerational care duties provide significant ethical motivations to act on the reduction of man-made existential threats.

Contractarian approaches to justice afford yet another argument for the duty to do whatever is presently reasonable to preserve good living conditions for all future generations. Consider from this perspective John Rawls’s idealized model of the social contract for a just society. In this model, the subjects called upon to lay down the principles of a just society reason under a veil of ignorance. In particular, they cannot use information about the present or future generation to which they belong. Under this constraint, Rawls introduced a “principle of just savings” to protect the right of every person to live under just institutions, independently of the generation she happens to belong to. The principle requires each generation to transmit to the next environmental, economic, and cultural resources that are sufficient to support politically just institutions (Rawls, 1971). Thus, in particular, each generation must refrain from exploiting the natural and cultural environments in ways that are incompatible with the unbounded persistence of a just society.

Finally, and more obviously so, consequentialist approaches in normative ethics afford basic moral motivations to choose actions protecting humanity from extinction or from widespread deterioration of living conditions. Indeed, major consequentialist doctrines—differing from each other in terms of which consequences of actions must be valued and how these consequences must be weighed and compared to each other (Sinnott-Armstrong, 2022)—converge on the protection and fostering of the aggregate well-being of human beings.

6 Conclusions

The real and potential double-edged impact of AI on man-made, large-scale threats to humanity is not confined to nuclear war and the effects of the climate crisis. It turns out that AI models for drug discovery can readily be modified so that they help identify chemical compounds for building weapons of mass destruction (WMD). A pharmaceutical research group using an AI model to discover new molecules for therapeutic purposes demonstrated the possibility of this malicious dual use. Their model normally penalizes predicted toxicity and rewards predicted activity of chemical compounds against pathogens. By inverting this reward function and running the model with only limited computational resources, the group identified many new and highly toxic compounds, some of which turn out to be more toxic than publicly known chemical warfare agents (Urbina et al., 2022).

The malleability of AI technologies is quite unprecedented. It is an ethical, social, and political responsibility to develop AI for the flourishing and persistence of human civilizations, for protecting humanity from man-made large-scale threats, and for reducing AI’s role in their buildup.

Discussion Questions for Students and Their Teachers

  1. Propose an innovative AI project contributing to climate warming mitigation or adaptation.

  2. Describe the goals of a workshop where both AI scientists and politicians gather to discuss AI’s potential impact on nuclear stability.

  3. Describe a public engagement initiative to raise awareness about man-made, large-scale threats to humanity.

Learning Resources for Students

  1. Patterson, D., Gonzalez, J., Hölzle, U., Le, Q., Liang, C., Munguia, L.-M., Rothchild, D., So, D., Texier, M., Dean, J. (2022) ‘The carbon footprint of machine learning training will plateau, then shrink’, Computer, 55(7), 18–28, doi: 10.1109/MC.2022.3148714.

    This article provides the reader with crucial information about the main good practices that have been identified so far to reduce AI’s carbon footprint. Additionally, a critical analysis is presented of related debates within the AI research community and of various estimates of the AI carbon footprint.

  2. Gailhofer, P., Herold, A., Schemmel, J.P., Scherf, C.-S., Urrutia, C., Köhler, A.R., Braungardt, S. (2021) ‘The role of artificial intelligence in the European Green Deal’, Policy Department for Economic, Scientific and Quality of Life Policies, European Parliament, EU. Available at: www.europarl.europa.eu/RegData/etudes/STUD/2021/662906/IPOL_STU(2021)662906_EN.pdf (Accessed 26 March 2023).

    This report stimulates reflections about the multiple uses one can make of AI technologies and systems to support the European Green Deal and more generally to align the design and deployment of AI systems with climate warming mitigation and adaptation efforts.

  3. Greenpeace (2020) ‘Oil in the Cloud: How Tech Companies are Helping Big Oil Profit from Climate Destruction’, Greenpeace Report. Available at: https://www.greenpeace.org/usa/reports/oil-in-the-cloud/ (Accessed: 26 March 2023).

    This report vividly illustrates the climate agnostic (and indeed double-edged) character of AI applications. It is emphasized there that AI applications can make the search for, commercialization of, and use of fossil fuels more efficient, thereby leading to more GHG emissions.

  4. Boulanin, V. (2019) The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk. Volume I: Euro-Atlantic Perspectives. Stockholm: Stockholm International Peace Research Institute. Available at: https://www.sipri.org/publications/2019/other-publications/impact-artificial-intelligence-strategic-stability-and-nuclear-risk-volume-i-euro-atlantic (Accessed 26 March 2023).

    This report provides a comprehensive analysis of AI’s potential impact on strategic nuclear stability, delving into new risks that AI may give rise to in connection with nuclear deterrence and nuclear command and control systems.

  5. Cummings, M.L. (2021) ‘Rethinking the Maturity of Artificial Intelligence in Safety-Critical Settings’, AI Magazine, 42(1), 6–15.

    This article questions the maturity of AI for use in a variety of safety-critical settings, in view of known weaknesses and vulnerabilities of this technology. In particular, it is useful for appraising the risks that AI may introduce into nuclear command and control.