The United States remains the world’s preeminent military and technological power. Over the last decade, it has increasingly viewed artificial intelligence (AI) proficiency as a vital national interest and a mechanism for assuring U.S. military and economic power, recognizing its potential as a force multiplier. AI has become a critical capability for U.S. national defense, especially given the 2022 U.S. National Defense Strategy’s focus on the Indo-Pacific region and the pacing challenge of China (DoD 2022b).

As a result, the U.S. Department of Defense (DoD) has shown growing enthusiasm for AI and related emerging technologies. However, while the United States is currently making advances in AI research and development in both academia and the private sector, the Department of Defense has yet to translate commercial AI developments into real military capabilities on a broad scale.

The United States government is generally well placed to leverage defense AI and AI-enabled systems. However, various bureaucratic, organizational, and procedural hurdles have slowed progress on defense AI adoption and technology-based innovation within the Defense Department over the last few years. Critically, DoD suffers from a complex acquisition process and widespread shortfalls of data, of talent in science, technology, engineering, and mathematics (STEM) and AI, and of training. Organizations working on AI and AI-related technologies and projects are often siloed, separated not only from each other but also from necessary data and other resources, and there exists within the department a culture that favors tried-and-true methods and systems, sometimes trending towards Luddism. These factors have contributed to a surprisingly slow pace of AI adoption. The National Security Commission on Artificial Intelligence’s 2021 Final Report to Congress summarized, “despite exciting experimentation and a few small AI programs, the U.S. government is a long way from being AI-ready” (NSCAI 2021: 2).

Thus, despite its potential to enhance U.S. national security and, given the long U.S. tradition of military and technological leadership, to be an area of strength, AI risks becoming a point of weakness, expanding “the window of vulnerability the United States has already entered” (NSCAI 2021: 7). AI will continue to be a point of insecurity if the United States does not pick up the pace of innovation to reach responsible speed and lay the institutional foundations necessary to support an AI-savvy military.

In the last few years, however, the Defense Department has made substantial headway on some of these challenges, restructuring its approach to defense AI (this report is current as of December 2023). In November 2023, the Department of Defense published a new Data, Analytics, and AI Adoption Strategy and has since begun to execute it. Most significantly, the DoD has completed a major overhaul of its AI organizational structure, creating a new Chief Digital and Artificial Intelligence Office (CDAO) to consolidate its disparate AI projects and stakeholders and better align them with the department’s data streams. It has also established the Generative AI Task Force Lima to assess, synchronize, and employ generative AI capabilities across the Department; updated the Autonomy in Weapon Systems Directive 3000.09; and launched the Replicator Initiative, which seeks to have DoD field thousands of all-domain attritable autonomous systems within the next 18–24 months. The United States DoD is clearly undergoing a significant revitalization of its overall approach to defense AI. However, whether these new efforts will be sufficient to allow the U.S. to make up for lost time remains to be seen.

1 Thinking About Defense AI

The United States and other countries have recognized the potential power and efficiencies AI can generate, especially in military contexts. China has famously declared its plan to become the world leader in AI by 2030, while Russian President Vladimir Putin has argued that the first state to master AI will become the “ruler of the world” (Vincent 2017).

The use of cutting-edge, emerging technologies—including AI—in the Russia-Ukraine conflict has made the potential applications of these capabilities much more tangible for states and has piqued interest in everything from drones to Ukraine’s so-called “Uber for Artillery” (Kahn 2022; Cooper 2022). It has also made evident the condensed timeline militaries face to get these capabilities operational and deployed on the battlefield if they wish to remain competitive.

In line with this global trend, the United States views artificial intelligence as an enabling technology and force multiplier that will generate efficiencies and, if leveraged successfully, will reinforce (or arguably renew) U.S. competitiveness and global technological and military dominance. Along with recent shifts in U.S. defense and security strategy to address China’s pacing challenge, defense AI is considered essential for U.S. military capabilities worldwide.

1.1 What Is the U.S. Understanding of Defense AI?

In 2018, the release of the first U.S. Department of Defense Artificial Intelligence Strategy formalized what AI means in U.S. defense contexts. The strategy concisely defined AI as “the ability of machines to perform tasks that normally require human intelligence” (DoD 2018b). Until that point, much of the rhetoric of the U.S. defense community sometimes—inaccurately—made “artificial intelligence seem like a munition” rather than an enabler (Horowitz 2018). The 2018 formalization was therefore a significant step in getting the defense establishment closer to the mark on AI. However, a definition this broad has been challenging for many in the national security enterprise to grasp. It encompasses everything from decades-old technologies dating back to WWII, such as aircraft autopilot, automated warning systems, and missile guidance, to more recent breakthroughs, such as facial recognition technology, autonomous vehicles, and machine and deep learning algorithms. These definitional lines are further blurred when distinguishing between artificial intelligence, automated/automatic systems (which respond mechanically to inputs according to pre-programmed instructions), and autonomous systems (which can operate with independence toward an assigned goal), either of which may or may not be AI-enabled.

1.2 Why Does the United States Want AI?

AI has become a key pillar in national strategies to achieve U.S. interests, from the Trump Administration to the Biden Administration. When addressing national security challenges and the balance of power, progress in defense AI is often used as a heuristic for assessing U.S. military and technological leadership.

The Biden administration has identified China as the pacing challenge shaping current U.S. national defense and security strategy, as well as future military planning (Horowitz 2021). The White House, in its national security strategy, explained this shift as a response to a shifting global balance of power, as China has steadily become the “only competitor potentially capable of combining its economic, diplomatic, military, and technological power to mount a sustained challenge to a stable and open international system” (White House 2022c: 8). As a result, a unique emphasis has been placed on the Chinese threat to U.S. technological dominance.

Consequently, much of the U.S. effort on defense AI and other emerging technologies has been framed in terms of competition with China. Worry about Chinese AI advancements has created a sense of urgency and a push for the United States to pick up the pace, with responsible speed, in AI investment, research, development, acquisition, and deployment. The U.S. military believes AI investments could generate essential capabilities in several areas, with some closer to fielding and others still in the early stages of Research, Development, Testing and Evaluation (RDT&E).

2 Developing Defense AI

The previous sections outline how the United States thinks about defense AI in terms of its national interests, goals, and security. Over the past 5 years, AI has become an ascendant capability in defining U.S. technological leadership. This section will measure the United States’ progress in successfully developing, adopting, and leveraging AI capabilities for defense. Subsequent sections will discuss the mechanisms for executing defense AI policy within the United States.

2.1 U.S. AI Strategy and its Evolution

In 2018, the Department of Defense published its first-ever AI strategy, Harnessing AI to Advance Our Security and Prosperity. It emerged from the recognition that technological advances have always been central to ensuring the United States an enduring “competitive and military advantage,” and that other states (namely U.S. competitors) were already making significant military investments in AI. The strategy accompanied the newly created Joint Artificial Intelligence Center (JAIC), which was mandated to execute much of the DoD’s vision and “synchronize DoD AI activities to expand Joint Force advantages” (DoD 2018b: 5, 9). The strategy positioned AI as a human-centered tool that would help the DoD better support and protect U.S. servicemembers and civilians, enhance national security, and create a more efficient and streamlined organization.

In June 2022, the office of the new CDAO published the Responsible Artificial Intelligence (RAI) Strategy and Implementation Pathway (RAI S&I Pathway) (DoD Responsible AI Working Council 2022). The RAI strategy acknowledges that AI requires a more holistic and integrated approach and reinforces other DoD policies on AI and autonomous systems, such as Directive 3000.09, which established guardrails for autonomy in weapon systems (DoD 2012). The RAI strategy also formally enshrines DoD’s AI ethical principles, which, since their adoption in 2020, have become essential guardrails the department has used to shape its AI efforts, spanning everything from experimentation to use.

Finally, in November 2023, the CDAO released the updated Data, Analytics, and Artificial Intelligence Adoption Strategy to build upon and supersede the initial 2018 strategy (Clark 2023). While not a significant step-change in and of itself, the updated strategy more accurately reflects the current status of defense AI priorities, efforts, and responsibilities in the Department, given the advancements, updates, and reorganizations undertaken in the past few years.

2.2 The United States: Falling Behind?

For decades, the United States has been the world’s leading military power and the foremost technological innovator—two distinct yet mutually reinforcing designations. Compared with other states, the United States military is uniquely positioned to capitalize on advances in artificial intelligence and other emerging technologies. The academic and private sectors within the United States have become the preeminent contributors to furthering the field of AI. Whether measured by AI conference citations or repository contributions, the weighted citation impact of corporate-academic publications, or the ability to attract much of the world’s AI and machine learning talent, the United States surpasses its peers (Zhang et al. 2021: 24; Zhang et al. 2022: 16–35; Zwetsloot et al. 2021a; Zwetsloot et al. 2021b). Yet despite having this rich AI ecosystem at its fingertips, the United States Department of Defense has failed to become a driving force of AI progress—less than 4% of all AI publications in the United States were government-sponsored in 2021 (Maslej et al. 2023: 27).

As Horowitz et al. (2022: 158) put it, “Leading militaries often grow overconfident in their ability to win future wars, and there are signs that the U.S. Department of Defense could be falling victim to complacency. Although senior U.S. defense leaders have spent decades talking up the importance of emerging technologies, including AI and autonomous systems, action on the ground has been painfully slow.” It is clear that, when it comes to successful defense AI adoption, let alone leadership, merely having the technology is insufficient; it must be accompanied by organizational and bureaucratic change and integration.

2.3 AI Backsliding, Luddism, and the “Valley of Death”

The 2018 National Defense Strategy noted, “success no longer goes to the country that develops a new technology first, but rather to the one that better integrates it and adapts its way of fighting” (DoD 2018a: 10). Slightly more realistically, and acknowledging that much of AI development is not being pioneered in government, the 2022 National Defense Strategy promised that DoD would become a “fast-follower” of market- and commercially-driven technological capabilities with military relevance (DoD 2022b: 19). However, as of writing, the United States has yet to fully match its execution to its stated intentions and outlined AI strategies.

Implementation of this vision for AI leadership has been challenging for the U.S. defense establishment for several reasons:

  • Difficulty in transitioning AI research into scalable programs of record supported by the services;

  • Siloed research, AI programs, and data streams;

  • Lack of STEM and AI talent and general technological literacy and training opportunities.

Like many other large, bureaucratic systems, the Department of Defense is often biased in favor of tried-and-true, existing capabilities over new tools and technologies (Horowitz et al. 2022: 160). Despite its recognized potential as a force multiplier and military innovation enabler, AI in particular has faced resistance within the DoD. This hesitancy may stem from perceptions that AI distances humans from decision-making on the battlefield by enabling systems to operate more autonomously. Some within the armed forces have noticed this trend of Luddism within the department, calling it “deliberate incrementalism,” whereby AI projects that meet set requirements and pass testing and verification procedures with flying colors are nonetheless purposefully delayed at the deployment stage by “cautious and lengthy feasibility studies,” and sometimes canceled outright (Spataro et al. 2022).

For example, in the early 2000s, the U.S. Air Force and Navy partnered to create a series of autonomous aircraft capable of conducting surveillance and military strikes, which evolved into the X-45, X-47A, and X-47B prototypes. Within little more than a decade, the aircraft were already proving their mettle. Not only could they accomplish complex missions with little human oversight, such as landing on aircraft carriers and completing aerial refueling operations, but they often did so better than crewed systems (Spataro et al. 2022). Despite the promise the prototypes demonstrated, in what some have called a “case of technological infanticide,” the Air Force viewed the systems not as an improvement but as a threat to the F-35 fighter jet and dropped out of the joint program. The Navy continued with the program for a few more years until internal debate led to its cancellation (Osborn 2021). Other touted successes in AI and autonomy experimentation, such as AlphaDogfight—DARPA’s program to train AI algorithms to beat a human pilot in a simulated aerial dogfight—have likewise failed to lead to any actual implementations (Halpern 2022; Gould 2020).

The lofty promises of defense AI, juxtaposed with the military services’ conservatism in AI adoption, have contributed to a widening of the “valley of death”—the chasm a technology developed in the private sector must cross before its acquisition by the military. Whereas in the early 1960s it might have taken a new technology 5 years on average to bridge the gap, today it can take a decade or more for a capability to move from the lab to the battlefield (Greenwalt and Patt 2021). While some features of AI may have exacerbated the gap, it persists because the armed forces often require “a higher level of technology maturity than the science and technology community is willing to fund and develop” (GAO 2015: 4).

While the DoD may have been able to navigate the valley of death previously, it has recently become an acute sticking point. AI and other newer technologies are increasingly software-based and originate almost entirely in the private and academic sectors. Historically, the Department of Defense has struggled with “developing, procuring, and developing software-centric capabilities,” with the acquisition process moving much more slowly for software-based systems than for hardware and weapons systems (GAO 2022: 21). Thus, some institutions within the DoD, such as the Defense Innovation Unit, have taken on roles as “accelerators” or “translators” of commercial technology for national security, circumventing some of the hurdles by providing funding and faster contract times (DIU 2023a). Nevertheless, such institutions still face challenges in gaining access to acquisition resources, and such efforts are merely stopgaps for a broader acquisition-system problem.

Recognizing this, in August 2023, Deputy Secretary of Defense Kathleen Hicks announced an ambitious initiative, Replicator: a program focused on processes and ways to overcome some of these barriers and effectively scale technologies into real capabilities. The program’s first big bet is to “field attritable autonomous systems at scale of multiple thousands, in multiple domains, within the next 18-to-24 months”; if it successfully demonstrates a pathway over the valley of death, the approach will be replicated for other capabilities across the department (DoD 2023). While the promise of Replicator is immense, and there has been some steady momentum, its success ultimately “hinges on overcoming a myriad of challenges, from production scalability to bureaucratic inertia, that have hindered previous similar innovation adoption efforts” (Kahn 2023).

There have been some early hints of progress in overcoming the difficulties described above. Some signposts include the U.S. Air Force fast-tracking development of the Phoenix Ghost loitering munition for almost immediate use in Ukraine, the Replicator Initiative, and indications that the Air Force is also considering a new program of record for a next-generation autonomous aircraft (Insinna 2022a; Insinna 2022b).

3 Organizing Defense AI

The United States defense establishment has had a rollercoaster relationship with AI. The field has a history of sudden periods of progress and overhype—generating booms in funding—followed by troughs of divestment when reality fails to match heightened expectations. This boom-and-bust cycle has sometimes led to “backsliding” in defense AI progress (Ciocca et al. 2021).

In the very early days of the field, even before the term “artificial intelligence” was coined in 1956, AI research was heavily funded by organizations like the Office of Naval Research (ONR) and the Advanced Research Projects Agency (ARPA) (now known as the Defense Advanced Research Projects Agency, or DARPA) (Schuchmann 2019a). The hope was to use machine translation to aid the U.S. Navy during the Cold War by automatically translating Russian to English. However, stalls in progress in machine translation and slow-moving development in other related AI fields led DARPA and other organizations to fund less blue-skies and fundamental research in favor of more applied projects. As a result, many refer to this period during the 1970s as the first “AI Winter.”

In the 1980s, AI again captured the U.S. military’s interest. DARPA invested USD1bn in the Strategic Computing Initiative, which aimed to reach a level of machine intelligence that would propel the United States ahead of competitors like Japan, then experiencing an economic, industrial, and technological boom (Roland and Shiman 2014). The project ultimately over-promised, ushering in a second—much longer—AI Winter, during which the U.S. military once again shied away from the field (Schuchmann 2019b).

It is only in the last decade—due to significant advances in machine learning, natural language processing, and computer vision—that AI has once again become a priority for the U.S. national security enterprise (Fig. 1). In 2014, the Department of Defense announced its Third Offset Strategy, the aim of which was “to draw on U.S. advanced technologies to offset China’s and Russia’s technological advances” (Gentile et al. 2021). One of the central tenets was to “find new ways to cultivate technological innovations and interact with the commercial world” to counter DoD’s diminished role in driving innovation. While the Third Offset only lasted in an official capacity until 2018, it significantly influenced the 2018 National Defense Strategy, which argued a new cohort of technologies, including AI, autonomy, advanced computing, big data analytics, robotics, directed energy, hypersonics, and biotechnology would be the technologies to “ensure we will be able to fight and win the wars of the future” (Gentile et al. 2021: 72; DoD 2018a: 3).

Fig. 1
Recent U.S. policy developments related to defense AI: the launch of DoD’s Third Offset Strategy in 2014, the 2018 National Defense Strategy, the establishment of the National Security Commission on AI (NSCAI) by the U.S. Congress, the NSCAI’s final report to Congress, and the inception of the Replicator Initiative. Source: Author’s chart

Since 2018, AI has become a key pillar in U.S. defense and national security strategy. As the technology has developed and DoD’s priorities have shifted dramatically over the last 5 years, so too has DoD’s approach to organizing for AI. The progression of AI within the U.S. military can be divided into three distinct eras, primarily differentiated by how defense AI has been organized within DoD: the Project Maven Era (2017–2018), the JAIC Era (2018–2022), and the CDAO Era (2022–present).

3.1 Project Maven Era (2017–2018)

Since its establishment in April 2017 as the Algorithmic Warfare Cross-Functional Team, Project Maven has become the most visible proof-of-concept for the application of AI for defense purposes in the United States (Office of the Deputy Secretary of Defense 2017). The idea behind the initiative was to relieve the burden on human operators tasked with analyzing video footage obtained from unmanned aerial systems (UAS). The Maven algorithms used computer vision to augment or fully automate object detection, classification, and alerting tasks in support of the Defeat-ISIS campaign.

Unlike previous DoD-funded AI projects, Maven was a resounding success and surpassed expectations. Even in the face of a public controversy early in its creation, by the end of its first year, Maven had its first models working directly in combat operations (Simonite 2021). By 2020, Maven was being applied across multiple conflicts, marking “a monumental early AI-driving win for DoD” (Vincent 2022).

Undoubtedly, Project Maven’s swift and sweeping success was “enabled by its organizational structure: a small, operationally focused, cross-functional team that was empowered to develop external partnerships, leverage existing infrastructure and platforms, and engage with user communities iteratively during development” (Allen 2017). Maven was the first DoD effort set up to leverage AI effectively for a clear, well-defined purpose. In addition, there was an explicit data-labeling and cleaning effort to ensure models were trained on and applied to the best data, as well as a concerted emphasis on timeliness, with a requirement that algorithm-based technology be integrated with programs of record in 90-day “sprints” (Office of the Deputy Secretary of Defense 2017). Project Maven’s momentum was soon followed by the release of DoD’s first-ever AI strategy, discussed above.

3.2 JAIC Era (2018–2022)

Emboldened by Project Maven’s success, in 2018, the DoD established the Joint Artificial Intelligence Center (JAIC) as the centralized hub for AI within the Department to “seize upon the transformative potential of Artificial Intelligence technology for the benefit of America’s national security” (JAIC 2020a). The creation of the JAIC marked a key inflection point in the U.S. approach to defense AI. It had significant funding and high internal and external visibility, which signaled a clear message: AI would be critical for the future of U.S. national security.

In the months following its establishment, in quick succession, Congress established the National Security Commission on Artificial Intelligence (NSCAI), the JAIC received its first director, the White House issued Executive Order 13859 on Maintaining American Leadership in Artificial Intelligence, and the DoD published its first-ever AI strategy.

The JAIC’s introduction marked the beginning of an AI spring within the defense enterprise, elevating AI and laying the foundation for the widespread recognition of AI as critical to the future of U.S. national security and defense, a recognition that is bearing fruit today. In particular, the JAIC “made headway on AI adoption and data literacy, with initiatives like ‘AI 101,’ and on the data integration issue, as part of the Artificial Intelligence and Data Initiative (AIDA)” (Horowitz and Kahn 2022). AI R&D within the Defense Department has steadily grown, with the military services investing more in AI and related technologies, projects, and programs.

Ironically, as the JAIC succeeded in its original intent—and as AI evolved and investment in the technology skyrocketed—it became “torn between being a developer of algorithms itself and being an enabler that helps the military services figure out how to develop and implement algorithms within relevant military programs” (Horowitz and Kahn 2021). While the organization of the JAIC “followed best practices from military innovation and business innovation literature” at the time, which advocated creating spinoffs or separate sub-organizations to capture the potential of emerging technologies, the institution had since outgrown itself, becoming less clear in its aim as it became the owner of an increasingly varied portfolio of projects, technologies, and responsibilities (Horowitz and Kahn 2021). Furthermore, while well-funded, it lacked the authority “to compel the military services and other institutions to collaborate” on AI and AI-related projects (Horowitz and Kahn 2021).

3.3 CDAO Era (2022-Present)

Even as the DoD created more and more separate projects and institutions like Maven and the JAIC (with varying degrees of success, funding, and support), its organizational and bureaucratic infrastructure was not well-suited to a technology that, by definition, is broad in its forms, applications, and use cases. The defense AI enterprise within DoD remained siloed. As many as “fifteen separate departments and organizations funded and worked on AI and AI-adjacent technologies, often without formal coordination or throughlines,” resulting in “redundancies, gaps, inconsistencies in application and access to data and resources” (Horowitz and Kahn 2022).

In recognition of this, the Department of Defense moved to reorganize its major institutional AI players in early 2022, restructuring the AI efforts it had built piecemeal from the ground up. Hoping to achieve a more integrated approach to defense AI, the Pentagon created a new office—the Chief Digital and Artificial Intelligence Office (CDAO)—which would subsume the JAIC, the Defense Digital Service (DDS), and the Office of the Chief Data Officer (CDO).

For U.S. defense AI adoption, aligning these organizations could help to bridge the gaps between institutional players and better connect “DoD’s AI efforts with data, the fuel AI requires” (Horowitz and Kahn 2022).

3.4 The Defense AI Ecosystem More Broadly

The defense AI ecosystem within DoD also encompasses, in part, organizations under the broader Office of the Under Secretary of Defense for Research and Engineering (OUSD(R&E)). These include defense agencies and field activities such as the Defense Innovation Board, the Small Business Innovation Research and Small Business Technology Transfer programs (SBIR/STTR), the Innovation Steering Group, Science and Technology Futures, the Offices of the Deputy Chief Technology Officer (CTO) for Science & Technology and for Critical Technologies, DARPA, and more.

Since its early involvement in the 1950s, DARPA has continued to “lead innovation in AI research as it funds a broad portfolio of R&D programs, ranging from basic research to advance technology development” (DARPA 2023). As of writing, DARPA has more than 50 ongoing AI-related projects, on applications ranging from making machine learning more explainable to using AI to better assess the security of critical mineral supplies. DARPA has its own streamlined contracting procedures and funding mechanisms, and because it is focused on R&D, it has had the flexibility to conduct more early-stage, blue-skies research. While not all projects have translated into concrete capabilities or programs of record, DARPA is a consistent, key contributor to both the defense AI ecosystem and the broader defense research and engineering ecosystem.

A few other cross-departmental specialty organizations designed to target AI and other emerging technologies have also been established under this umbrella and have helped direct funding and investment in capabilities. Namely, the Defense Innovation Board (DIB), established in 2016, was created to provide independent recommendations to the Secretary of Defense and other senior DoD leaders on emerging technologies the military should adopt. The Defense Innovation Unit (DIU) was stood up precisely to field and scale commercial emerging technologies across the military; from June 2016 to September 2021, it leveraged USD20.1bn in private investment and awarded USD892.7M in contracts (DIU 2021: 7).

Outside DoD, some private sector initiatives have also emerged, attempting to facilitate the transition of commercial-sector emerging technologies into government and the Department of Defense and serve as essential connective tissue between Silicon Valley and the Pentagon.

3.5 Working with Allies and Partners

AI has also become a new binding mechanism between the United States and its allies and partners. As a significant component of the messaging and strategy surrounding the U.S. approach to defense, AI has been framed as a means of countering China’s growing technological power. Many of the DoD’s efforts on AI have accordingly been folded into broader efforts to collaborate with regional partners.

For example, as a part of the Indo-Pacific Strategy released by the White House in February 2022, the Biden Administration announced the creation of a new Quad Fellowship which would recruit and financially support students from the United States, Japan, Australia, and India to pursue graduate degrees in STEM fields at U.S. institutions (White House 2022a: 10).

The trilateral security pact between Australia, the UK, and the US, known as AUKUS and created in part to further deter China, has revolved significantly around technology transfer and cooperation in developing emerging technologies, including AI and autonomy. As some have said, “AUKUS seeks to win the technology competition with China by pooling resources and integrating supply chains for defense-related science, industry, and supply chains. This will be the decades-long and multifaceted purpose of AUKUS—a transnational project racing to seize advantages in artificial intelligence, quantum computing, and cyber technology” (Tarapore 2021).

There has also been more regular coordination on topics like AI governance and ethics with other states developing AI, including "academic conferences, Track II academic-to-academic exchanges, bilateral and multilateral dialogues, and discussions in various international forums," such as the DoD-hosted AI Partnership for Defense and the Political Declaration on Responsible Military Use of AI and Autonomy, which has nearly 50 signatories (Scharre and Lamberth 2022; JAIC Public Affairs 2022; Department of State 2023).

4 Funding Defense AI

While complete details of the official Department of Defense budget and project spending are not publicly available, analysis of unclassified requests by the DoD paints a clear picture of a steady increase in the amount of funding designated for AI and other related and emerging technology research, development, testing, and evaluation (RDT&E) over the last few years. In Fiscal Year (FY) 2021, Stanford University's Institute for Human-Centered Artificial Intelligence estimated there were about 305 unclassified DoD RDT&E programs that specified the use of AI or machine learning technologies, comprising about USD5bn (Zhang et al. 2021: 168). Govini has estimated that from FY17–FY21, the U.S. government spent about USD50bn on AI, machine learning, and autonomy technology (Govini 2022: 2). Approximately 84% of this spending was funded via direct contracts, 15% by grants, and the rest through other transaction authorities (OTAs) (Govini 2022: 24).

While the majority of these contracts and grants were awarded to the usual spread of large defense companies—Lockheed Martin, Northrop Grumman, General Dynamics, BAE, Raytheon, and Booz Allen Hamilton were all among the top 10 vendors—there have been "emergent" companies that have benefited from the work of the DIU and the other organizations that have made it their mission to facilitate collaboration between Silicon Valley and the Pentagon, such as Anduril, Applied Intuition, Databricks, ModalAI, Rebellion Defense, and ShieldAI (Govini 2022: 25). There are also stakeholders like Palantir, which recently received considerable media attention for the algorithmic power it has provided to Ukraine, that do not fit neatly into either bucket but are increasingly becoming key players in developing AI for defense (Ignatius 2022a, 2022b).

In March 2022, the Biden Administration set a "record peacetime national defense budget of USD813bn, which earmarked USD773bn for the Pentagon" (Stone 2022). A staggering 17% of the funds directed towards the Pentagon are allocated to research and development. In announcing the FY2023 budget request, the administration argued the "all-time high" of USD130.1bn for research and development reflected the understanding of the "need to sharpen our readiness in advanced technology, cyber, space, and artificial intelligence" in particular (White House 2022b). The budget builds on "DoD's progress to modernize and innovate," including "the largest investment ever in RDT&E—more than 9.5% over the FY 2022 enacted level," and dedicates USD16.5bn to Science and Technology, USD3.3bn to microelectronics, USD250M to 5G, and an undisclosed amount to artificial intelligence as part of its efforts on "Advanced Capability Enablers" (White House 2022b).

5 Fielding and Operating Defense AI

Despite some of the difficulties discussed above, the United States has been actively prototyping, fielding, and operating applications of AI across the Department of Defense and the armed services. While the uses for AI in defense contexts are seemingly endless, from using AI to enhance the precision and accuracy of existing systems to generating simulation-based training initiatives and wargames, some of the more visible, established applications of AI the DoD has been pursuing are in the following areas:

  • Intelligence, Surveillance, and Reconnaissance (ISR)

  • Cyber

  • Autonomous Systems and Vehicles

  • Command and Control

  • Disaster Relief

  • Logistics

The sections below detail some of the more visible, mission-specific applications of AI the U.S. military has pursued.

5.1 Intelligence, Surveillance, and Reconnaissance (ISR)

AI is already demonstrating a dramatic impact on ISR capabilities, given its ability to recognize patterns quickly and analyze large volumes of disparate data from various sources. Project Maven, which used computer vision algorithms to aid video and image analysis, was the first AI project within DoD to be considered a resounding success. More recent initiatives have followed: the Army's Scarlet Dragon uses data from Maven to provide AI-augmented targeting assistance for large-scale combat operations, while the Marine Corps is working to "incorporate algorithms developed as part of Project Maven into their capabilities and to modernize legacy weapon systems" (Wasserbly 2021; GAO 2022). The Navy's Task Force 59 is working to create cost-effective, fully autonomous vehicles with AI-enabled surveillance capabilities to monitor threats ranging from "hostile Iranian drones to an aggressive Chinese posture to rogue pirates" (Barnett 2022a).

5.2 Cyber

Much of the discourse within the United States concerning how AI might impact cybersecurity and cyber operations has focused on its disruptive potential. AI is expected to "make the work of cyber defenders more difficult over time, with faster and faster computers enabling increasingly complex attacks and more rapid network intrusion" (Segal and Goldstein 2022: 31). The Navy and the Army both employ commercial machine learning algorithms, trained on commercial and government data, to better detect cyber threats (Kenyon 2022). The DoD has also worked closely with Cyber Command to employ AI to enhance network protection tools.

5.3 Autonomous Systems and Vehicles

Advances in AI—and in particular, the integration of AI into piloting, guidance, navigation, ISR, and target acquisition systems—have enabled greater degrees of autonomy in everything from vehicles to munitions. Projects currently in development include the Navy's Ghost Fleet, which aims to have nearly one in three warships be entirely autonomous, without any human crew aboard, by 2045, and the Air Force's Golden Horde experiments, which aim to develop swarming air-fired and air-dropped smart weapons that can autonomously share information, change course, and seek high-priority targets (Mizokami 2022; Insinna 2021). Most recently, the first effort under the Replicator Initiative focuses on attritable, all-domain, autonomous systems, with the goal of purchasing thousands of systems in tranches within the next two years.

5.4 Command and Control

AI is also increasingly used to collect, identify, and synthesize multiple data streams to improve battlefield and situational awareness in real time and to better connect sensors with operators and decision-makers. Using AI to create a single source of information in this manner is sometimes referred to as building a "common operating picture" (Barnett 2020). A Congressional Research Service report points out that "currently, information available to decision-makers comes in diverse formats from multiple platforms, often with redundancies or unresolved discrepancies" (Sayler 2020: 13). In this regard, AI is seen as the critical component to implementing the DoD vision of Joint All-Domain Command and Control (JADC2)—"which aims to centralize planning and execution of air-, space-, cyberspace-, sea-, and land-based operations"—to create a wholly connected and in-sync military. The DoD released its JADC2 Implementation Plan in March 2022, which elaborated that "JADC2 enables the Joint Force to 'sense,' 'make sense,' and 'act' on information across the battlespace quickly using automation, artificial intelligence, predictive analytics, and machine learning to deliver informed solutions via a resilient and robust network environment" (DoD 2022a). Data and AI have become so central that, moving forward, the CDAO will be heading up the strategy element of JADC2 (Pomerleau 2022).
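To make the "common operating picture" idea concrete, the core problem is fusing redundant, discrepant reports from multiple sensors into a single deduplicated track list. The following is only a toy sketch under simplifying assumptions (greedy grouping by position, no timestamps, hypothetical sensor names and coordinates), not a description of any fielded DoD system:

```python
from dataclasses import dataclass

@dataclass
class Track:
    sensor: str
    lat: float
    lon: float

def fuse(tracks, radius=0.05):
    """Greedy fusion: a report within `radius` degrees of a group's first
    report is treated as the same object; group positions are averaged."""
    groups = []
    for t in tracks:
        for g in groups:
            ref = g[0]
            if abs(t.lat - ref.lat) <= radius and abs(t.lon - ref.lon) <= radius:
                g.append(t)
                break
        else:
            groups.append([t])  # no nearby group: new object
    return [
        (round(sum(x.lat for x in g) / len(g), 4),   # fused latitude
         round(sum(x.lon for x in g) / len(g), 4),   # fused longitude
         sorted({x.sensor for x in g}))              # contributing sensors
        for g in groups
    ]

# Two sensors report the same contact; a third reports a distinct one.
reports = [Track("radar", 24.50, 121.00),
           Track("eo", 24.51, 121.01),
           Track("sigint", 25.80, 120.20)]
picture = fuse(reports)
# → [(24.505, 121.005, ['eo', 'radar']), (25.8, 120.2, ['sigint'])]
```

Real fusion pipelines replace the naive distance check with probabilistic data association over time, but the sketch captures why fusing redundant reports shrinks the decision-maker's picture from raw feeds to a short list of resolved tracks.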

All of the services' JADC2 projects—the Army's Project Convergence, the Navy's Project Overmatch, and the Air Force's Advanced Battle Management System—have indicated the use of AI in some form. In a recent Project Convergence experiment with allies, the Army used its AI-powered network, Firestorm, to transmit intelligence directly from U.S. Army sensors to Australian and British forces (Feickert 2022; Hoehn 2022; Strout 2020; Lacdan 2022). The Air Force has also launched a series of Global Information Dominance Experiments to give commanders more time to make decisions "by integrating more information from a global network of sensors and sources, using the power of AI and machine-learning techniques to identify the important trends within the data, and making both current and predictive information available" (Barnett 2021; U.S. Air Force 2021). DARPA has likewise launched programs to leverage AI to "network systems and sensors, prioritize incoming sensor data, and autonomously determine the optimal composition of forces," in the form of the Air Space Total Awareness for Rapid Tactical Execution project (Sayler 2020: 13; Barnett 2020).

5.5 Disaster Relief

The DoD also pursues AI for use cases with humanitarian goals. When the JAIC was first established in 2018, it was tasked with two initial capability-delivery projects called National Mission Initiatives (NMIs), one of which was Humanitarian Assistance and Disaster Relief (Cronk 2019). The idea behind this NMI was to use AI and machine learning to power "problem-solving prototypical applications to quickly identify and locate people and infrastructure impacted by natural and manmade disasters" (Esri 2019). Predictive geospatial intelligence and computer vision, for example, are both being developed for use in these situations (DIU 2023b).

5.6 Logistics

The second NMI the JAIC was initially tasked with was Predictive Maintenance. A significant component of logistics is ensuring materiel is up to standard and well-maintained. The idea behind this NMI was to use AI to generate efficiencies and reduce maintenance costs by predicting in advance when a component might fail—a technique known as predictive maintenance (Department of Defense Office of Inspector General 2022: 2). In this way, AI could provide tailored, unit-level recommendations instead of waiting for a system or part to fail before fixing it, or relying on fixed force-wide maintenance schedules.
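The mechanism behind predictive maintenance can be illustrated with a minimal sketch: fit a degradation trend to a component's sensor history and extrapolate to estimate when it will cross a failure threshold. Everything here is hypothetical (the vibration readings, the threshold, the linear-trend assumption); fielded systems use far richer models, but the logic is the same:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    hours: float      # operating hours at time of reading
    vibration: float  # vibration amplitude (arbitrary units)

def fit_trend(readings):
    """Ordinary least-squares slope/intercept of vibration vs. hours."""
    xs = [r.hours for r in readings]
    ys = [r.vibration for r in readings]
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def hours_until_failure(readings, failure_threshold):
    """Extrapolate the trend to estimate remaining hours before the
    vibration level crosses the failure threshold."""
    slope, intercept = fit_trend(readings)
    if slope <= 0:
        return None  # no degradation trend observed
    crossing = (failure_threshold - intercept) / slope
    return crossing - readings[-1].hours

# A bearing sampled every 100 hours, steadily degrading.
history = [Reading(0, 1.0), Reading(100, 1.2),
           Reading(200, 1.4), Reading(300, 1.6)]
remaining = hours_until_failure(history, failure_threshold=2.0)
# trend ≈ +0.002/hr, threshold crossed near hour 500 → ≈ 200 hours remain
```

A maintenance scheduler would then flag any component whose estimated remaining hours fall inside the next deployment window, which is precisely the "fix it before it fails" recommendation described above.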

6 Training for Defense AI

One of the most widespread, recurring concerns about U.S. defense AI adoption is the broad lack of STEM expertise and talent in government (Horowitz and Kahn 2020). In fact, according to the NSCAI's final report, it is the "alarming" deficit of diverse and tech-savvy talent within both the DoD and the Intelligence Community (IC) that stands as the "greatest impediment to the United States being AI-ready by 2025" (NSCAI 2021: 121). The report continues, warning that if the government fails to invest in building a digital workforce, the United States "will remain unprepared to buy, build, and use AI and its associated technologies" (NSCAI 2021: 121).

While the United States remains attractive to the global AI talent pool, the public sector has failed to compete with academia and industry (Zwetsloot et al. 2021a). A survey of 254 U.S. AI Ph.D. graduates, for example, indicated only 31% would even consider a government role, citing limited access to computing and data resources, a lack of growth opportunities, and an inability to pursue research (Aiken et al. 2020: 2, 13).

Despite the blueprint provided by the NSCAI report and a congressional mandate in the 2020 National Defense Authorization Act (NDAA) to develop an AI workforce and education strategy, there has been no comprehensive effort to enact many of the recommendations outlined, nor to reform hiring, recruiting, and training processes in either the DoD or the IC (U.S. Government Publishing Office 2019). However, the CDAO has begun to design and propagate a consistent AI education strategy and to improve general understanding of AI across the department and the armed services through a series of "AI 101" educational pilot programs (JAIC 2020b; Barnett 2022b).

7 Conclusion

The United States has both the desire and the means to achieve world leadership in defense applications of artificial intelligence. There is support from top leaders and policymakers across the government, and a rich AI research ecosystem exists across the private and academic spheres. Moreover, AI is increasingly viewed as critical in addressing national security concerns, particularly in matching the capabilities of U.S. adversaries and in addressing the pacing challenge with China that animates the 2022 National Defense Strategy.

Surprisingly, despite these stimuli, the U.S. government, and particularly the Department of Defense, has yet to seriously employ AI on a broad scale beyond one-off projects or initiatives. The lag is partly due to a predisposition toward hardware-based capabilities over software, and to institutional momentum that favors the status quo. However, the most considerable obstacles slowing U.S. defense innovation, and AI adoption especially, have been (1) an organizational structure and acquisition process ill-suited to translating general-purpose technologies of commercial and civilian origin into fundamental capabilities for national security and defense contexts, and (2) a significant AI/STEM talent deficit.

Significant geopolitical changes and events, including the Russia-Ukraine conflict and continuing evidence of China's technological rise, have crystallized the near-term military impact of emerging technologies, including AI. The United States Department of Defense has reacted with increased urgency, pursuing its AI goals by creating new organizations designed to improve DoD's AI adoption capacity, formalizing guiding ethical principles, increasing funding and support for projects and acquisition mechanisms tailor-made for AI, and reorganizing its internal AI and data ecosystem.

There are early indications of progress from these recent course-correction measures. After years of unmet potential, the DoD is now moving more effectively toward a more AI-enabled U.S. military, which is promising. However, only time will tell whether these efforts will be sufficient to produce a fully AI-enabled U.S. military over the long term.