Artificial Intelligence (AI) has become pervasive across global society over the last 15 years. This is principally because contemporary AI is overwhelmingly a commercial product, predominantly developed in the civil domain, and frequently designed to meet consumer market demands.

AI is now starting to be used for defence purposes, but the military has come late to the game. Defence AI is a case of technology push, not technology pull. The result is that defence forces globally, including Australia’s, are fundamentally uncertain about AI’s place in warfighting. Accordingly, they are now deeply involved in numerous AI experimentation programs. Operational concepts will be derived from these experiments, with the concepts going on to drive future equipment acquisition programs.

While today’s AI technology may not be optimized for defence purposes, there are definite upsides. Given mass civilian production, the technology is lower cost than defence-only technology traditionally is. Moreover, much of the personnel, training, logistics, industrial base and service support foundation such technology requires is already available and spread widely across society. Middle-power Australia fits within this paradigm. The Australian Department of Defence’s [hereafter Defence] use of AI fits within a broader national picture; it is effectively a subset, although one adjusted to meet specific defence needs, contexts, capabilities, and capacities.

This chapter initially discusses how Defence thinks about AI from a definitional and doctrinal perspective. The second section describes the recently revised innovation ecosystem that Defence uses to develop AI for military purposes, with the third detailing some important Australian defence AI projects currently in development, and the fourth assessing the ecosystem approach being taken. The fifth section looks at the forthcoming major projects that will provide funding for AI technology developments in the near-medium future. The sixth and seventh sections examine the single-service organisations currently involved in trialling AI and the single-service plans for training their future personnel in AI matters. The conclusion briefly assesses the Australian military’s overall AI progress and intent.

1 Thinking About Defence AI

Military organizations traditionally seek to define matters carefully to ensure personnel across their large organizations have both a clear understanding and a shared one. However, with AI, as in wider society, there is some definitional variation about what AI is. In early 2022, the Navy defined AI as “a collection of interrelated technologies used to solve problems and perform tasks that, when humans do them, requires thinking” (RAS-AI Campaign Plan 2022: 64). In August 2022, Army opted for a similar but subtly different definition: “A collection of techniques and technologies that demonstrate behaviour and automate functions that are typically associated with, or exceed the capacity of, human intelligence” (Robotic & Autonomous Systems Strategy 2022: 4).

Definitional precision is at least partly necessary as within Defence over the last five years there has been a conflation of AI and autonomy. Matters concerning AI are then often discussed in terms of autonomy, in some respects confusing a technology with a quality and inadvertently creating conceptual difficulties when the two do not overlap. The Army Robotic and Autonomous Systems (RAS) v2.0 strategy observes:

AI underpins the realisation of true autonomy of RAS. Without it, RAS will reach autonomy limits quickly, remaining remote controlled and automatic at best. Extending AI to facilitate truly intelligent and adaptable machines and capable human-machine and machine-machine teams will be critical to future RAS capabilities. AI tools will also be foundational in decision support, providing RAS with the ability to rapidly analyse significant volumes of data, see patterns and make observations and recommendations (Robotic & Autonomous Systems Strategy 2022: 20).

A second aspect of Defence’s conceptualization of AI is noteworthy, although within mainstream thinking: the clear recognition of the centrality of data in supporting AI development. The 2021 Defence Data Strategy argues that maintaining “a capable, agile and potent” Australian Defence Force (ADF) will increasingly rely on AI technologies, and this means that Defence’s data holdings must be managed and discoverable in a way that can support AI development (Defence 2021: 35). The strategy sets out the vision, pillars, practical initiatives, and priority data areas to elevate Defence’s data maturity and become a more data-informed organization.

1.1 Joint and Single Service Thinking

Given the conflation with autonomy noted earlier, the joint-service Concept for Robotic and Autonomous Systems published in late 2020 is effectively the ADF’s capstone publication informing its adoption of AI technology. In terms of AI, the concept aims to address how Australia’s future defence force can best exploit AI to gain advantages across the conflict spectrum and, rather innovatively, how Australia can counter threats posed to the future defence force by AI (Australian Defence Force 2020: 22).

The key exploitation judgements are to use AI in human-commanded teams to improve efficiency, increase mass and achieve decision superiority while decreasing risk to personnel. Efficiency involves using AI to perform certain tasks faster and more reliably than human operators, thereby increasing force capacity. In terms of mass, defence force structures can now move away from a small number of exquisite platforms to instead feature many small, lower-cost, AI-enabled systems. In so doing, military forces will have many more possible ways to generate an advantageous concentration of combat power on the battlefield, disperse the force to enhance survival, and generate deception (Australian Defence Force 2020: 36–37).

AI use for decision superiority involves helping decision-makers make and implement better and more accurate decisions, while using tempo and leverage to best effect. An important part of this is improving the situational understanding of human or machine decision-makers by improving their awareness, analysis, and comprehension. Lastly, systems employing AI can operate in ways that decrease the risks to defence force personnel during operations; AI systems can be programmed to be as fearless as the decision-makers wish (Australian Defence Force 2020: 40–42).

The joint service document was shaped by earlier single-service thinking. Equally, more recent single-service thinking is now building on the joint service document, providing additional depth and insights.

The Australian Army provided the initial impetus for the ADF formally thinking about AI with its 2018 Robotic and Autonomous Systems Strategy; this has recently been revised and republished as the 2022 Robotic and Autonomous Systems Strategy v2.0 document. This asserts such technology can maximize each soldier’s performance, improve decision-making, generate mass and scalable effects, enhance force protection, and improve efficiency. These attributes differ subtly from those set out in the joint-service document.

Soldier performance would be maximized by reducing individuals’ cognitive burdens through collecting, processing and presenting information in a useful and intuitive manner. In terms of decision-making, the complicated battlespace of the future is seen as requiring AI processing of vast troves of data on friendly and adversary forces to adequately support commanders at the operational and tactical levels. Moreover, AI through human-machine teaming could also potentially give a modestly sized army significantly increased combat power and mass through deploying large numbers of AI-enabled systems without the need to expand the human workforce. Likewise, humans might be better protected by transferring many currently dangerous battlefield tasks, such as reconnaissance and intelligence collection, to AI systems. Lastly, AI could noticeably enhance efficiency, particularly in the logistics chain. AI systems can bring “aware logistics,” creating a “sense and respond” logistic structure that moves from a “just in case” to an “as needed” posture as operations evolve (Robotic & Autonomous Systems Strategy 2022: 8–18).

In a similar vein, the Royal Australian Navy’s (RAN) RAS-AI Strategy 2040 describes the “five very fundamental effects” such systems will deliver:

  • Force Protection involves shielding people by increasing their situational awareness and providing innovative alternatives to traditional maritime combat approaches so helping keep sailors out of harm’s way.

  • Force Projection sees AI allowing the RAN to generate mass and tempo on a scale otherwise unachievable while enabling a presence in Australia’s maritime reaches that crewed platforms could not, on their own, achieve.

  • Force Partnerships envisages Navy’s AI systems being integrated by design with the overall ADF, and priority being given to interoperability with Australia’s strategic partners.

  • Force Potential involves human-machine teaming maximizing human potential by allowing novel ways to conduct and sustain operations, reducing cognitive loads on commanders and enhancing training, simulation, and force level planning.

  • Sovereign Control encompasses two different aspects: first, having a system of control that protects its data; and second, the Navy being able to rapidly task-organize multiple AI assets across the air, land, sea, space, and electromagnetic domains (RAS-AI Strategy 2020: 14).

The Royal Australian Air Force (RAAF) has not yet published a formal document setting out its AI intent, although a recent Chief of Air Force argued that “artificial intelligence and human-machine teaming will play a pivotal role in air and space power into the future.” The role envisaged is mainly increasing personnel productivity by using AI to undertake tasks that are predictable, repetitive and which do not require imagination and innovation. AI is not about replacing people but about employing this “scarce resource” better. The Chief’s vision is that the RAAF “will be AI-enabled using robotics to augment roles, and humans working with machines, so they get the best out of both. The days of boring menial tasks will be gone. Our most scarce resource, our people, will focus on higher value and the creative tasks that we need” (Laird 2021).

1.2 Key Emerging Concern

The joint-service document noted earlier frets about hostile states and non-state actors using AI, and envisages countering this challenge by using:

  • Perception Attacks that disrupt the ability of the AI to properly perceive its environment, possibly by inducing a false understanding of the situation.

  • Control Attacks that assume future AI systems will still need some human input or direction, so this connection may be purposefully obstructed.

  • Information Warfare to degrade the quality of the data that the AI system is using, whether when devising its operating algorithms or when using them.

  • Platform Destruction simply aims to physically destroy the AI system although this is becoming more difficult as such systems may be small and used in mass swarm attacks.

As an important enabling effort to countering hostile AI, Defence is developing the ability to collect technical intelligence on the algorithms and data utilized by adversary AI (Australian Defence Force 2020: 44–50). This will allow optimized counters to specific AI systems to be devised. Even so, generic counters will still be necessary given threat intelligence may have some gaps and shortcomings.

2 Developing Defence AI

2.1 National Framework

Defence AI development is nested within the overarching national AI Action Plan, which is itself a key artifact of the Australian Government’s Digital Economy Strategy. Australia’s AI Action Plan sets out the Australian Government’s vision for Australia to be a global leader in developing and adopting trusted, secure, and responsible AI. The plan will be implemented under four primary focus areas, all of which can be viewed from a defence perspective.

Focus One is supporting businesses to adopt AI technologies that increase productivity and competitiveness. Focus Two is creating an environment to grow and attract the world’s best AI talent. Focus Three is using cutting-edge AI technologies to solve national challenges; in the defence domain these are identified as developing applications for intelligence mission data together with virtual reality and graphics applications. Focus Four stresses that AI usage should reflect Australian values and that ethics be incorporated as the technology develops (Department of Industry, Science and Resources 2021).

The national AI Action Plan invests €63.33M over four years to establish a National AI Centre and four subordinate AI Digital Capability Centres (DCC). It is envisaged that this initiative will help drive collaboration between research organizations, businesses and industry, and generate a thriving AI ecosystem.

The National AI Centre is intended to drive business adoption of AI technologies by coordinating Australia’s AI activity, expertise and capabilities in a manner that improves national productivity and competitiveness. The centre focuses on key themes including responsible AI, AI for diversity and inclusion, and AI at scale, while becoming a focal point for international partnerships. The National AI Centre is organizationally located within Data61, the data and digital specialist arm of Australia’s national science agency, the Commonwealth Scientific and Industrial Research Organisation (CSIRO 2023).

The four lower-level DCCs each focus on a specific application of AI, such as robotics or AI-assisted manufacturing. The centres are principally aimed at supporting the commercialization of Australia’s AI expertise and capabilities, which often reside at the small and medium enterprise level (CSIRO 2023).

2.2 Internal Defence AI Ecosystem

In the defence domain, an AI development ecosystem is also steadily being established. The Australian Department of Defence has two main parts: one commanded by the Chief of the Defence Force (CDF) and the other managed by the Secretary of the Department of Defence. Each is responsible for different parts of the overall defence AI ecosystem.

In some respects, the military part commanded by CDF has moved backward. The Joint Capabilities Group had earlier set up the Defence Artificial Intelligence Centre (DAIC) to build the capability foundations and accelerate the understanding and implementation of AI across Defence (Defence 2021: 35). However, the DAIC has now been discontinued with AI development activities in the military part of Defence now shifting solely to single service organizations (discussed later) rather than taking a joint force perspective.

The largest portion of the Defence AI ecosystem is within the civilian part of Defence and under the control of the Innovation, Science & Technology Capability Manager. A major reorganization has seen the Next Generation Technologies Fund, the Defence Innovation Hub and the Science, Technology and Research Shots (STaRShots) ended and the setting up of the new Advanced Strategic Capabilities Accelerator (ASCA). The development of AI for defence purposes will now be actioned under the ASCA process.

ASCA has been designed to address shortcomings in the earlier innovation pathway within Defence, in particular by integrating the diverse parts of the innovation process and in accelerating the transition of innovative technology into in-service military capabilities. ASCA comprises three distinct programs: the Emerging and Disruptive Technologies (EDT) program funds early research into promising new technologies; the Innovation Incubation program funds the acquisition of new commercial technology modified to meet military priorities; while the Missions program funds rapidly pulling through into military service those disruptive technologies that meet pressing defence needs (ASCA 2023).

ASCA has some €2.06bn of funding allocated to it over the next decade. Deliberations on funding allocations involve the Chief Defence Scientist at the Defence Science and Technology Group (DSTG), the Vice Chief of the Defence Force (a three-star military officer), and the Deputy Secretary Capability Acquisition and Sustainment Group. In this manner, ASCA encompasses defining the military need, developing the new technologies to meet this need, the transition into equipment acquisition and the introduction into service (Hilder and Monroe 2023). ASCA began in July 2023, with initial starts including EDT program research into countering the effects of AI-powered technologies used for spreading disinformation, and the Missions program seeking improvements to the processing and synthesis of mass intelligence data (ASCA 2023).

DSTG is involved in other activities as well as ASCA. DSTG’s latest strategic plan sees DSTG doing less research itself and instead playing a stronger role in coordinating support to Defence from the national S&T enterprise that encompasses other publicly funded research agencies, universities, and commercial enterprises (Defence 2020b: 2).

In terms of AI, a major entity within DSTG is the Trusted Autonomous Systems (TAS) defence cooperative research centre. TAS facilitates emerging defence technology projects through coordinating, and at times collaborating with, other interested government, commercial and academic entities. Recent projects include involvement in developing an autonomous underwater vehicle, a prototype cooperative robotic system suited to high-tempo land operations called Hyper Teaming, and a proof-of-concept demonstration using a patrol boat modified for autonomous operations. Most of the projects are industry-led, but TAS is also undertaking two common-good activities: Ethics and Law of Autonomous Systems, and Assurance of Autonomy (Trusted Autonomous Systems 2022).

DSTG also works closely with universities including setting up the Defence AI Research Network. This aims to establish and sustain a community of AI researchers who work together in an environment that stimulates new ideas and knowledge and which supports AI system evaluation, testing, and integration. An example of the network’s activities is the recent funding provided to the University of South Australia and Deakin University for two AI projects involving processing noisy and dynamic data into information useful for military decision makers (Defence 2023b).

There is also the more general Australian Defence Science and Universities Network that connects DSTG into state-based research and innovation networks. DSTG has a senior Defence scientist, an Associate Director, embedded in each of the state-based networks to promote cooperation useful to Defence (Defence Science and Technology Group 2022a). Beyond national defence, DSTG also manages the National Security Science and Technology Centre (NSSTC) concerned with domestic and transnational security matters. The Centre is involved in advancing relevant AI, machine learning and data science capabilities (Defence Science and Technology Group 2022b).

2.3 External Industry-Academia Defence AI Ecosystem

Small and medium enterprises (SMEs), including start-ups, comprise some 90% of the companies in Australia’s defence industrial sector. Of the remaining 10%, there are a few medium-sized companies and several large foreign-owned businesses. Of the latter, BAE Systems Australia has been involved in AI developments for more than a decade. Boeing Australia has become active in AI over the last 5 years, while the Australian subsidiaries of Lockheed Martin and Northrop Grumman have recently also become interested in AI in command and control systems. Within this, BAE Systems has done considerable work in-house, whereas the American companies have acquired relevant Australian SMEs or formed business relationships with them.

Australian SMEs are proving innovative in devising AI solutions. However, the SME dominance of the defence industry sector, combined with few adjacent relevant industries such as telecommunications and personal electronics, means there is a shortage of investment capital that limits SME growth possibilities. Given this, there are uncertainties over whether Australian SME AI innovations can transition to become a sustainable operational capability; for such a transition, larger sovereign Australian enterprises or consortia are considered necessary (Robotics Australia Group 2022: 144).

The issue gains additional importance as AI, seen as encompassing algorithms, machine learning and deep learning, has been designated as a Sovereign Industrial Capability Priority. Accordingly, Australia “must have” access to, or control over, the skills, technology, intellectual property, financial resources, and infrastructure necessary for long-term defence capability support (Defence 2022: 1, 4). Given these requirements, the investment capital issue needs addressing to allow constructing Australian sovereign supply chains resilient to shocks and outside interference.

There are proposals to overcome the capital shortfall problem by shaping existing Government investment to generate enduring sovereign infrastructure that fosters SME growth. Significant capital and schedule efficiencies might be achieved by reducing the cost of business and technical processes, creating a ‘scaffolding’ for re-use, and leveraging existing investments. Embracing enterprise collaboration may avoid wasteful duplication of effort, provide substantial efficiencies, and enable capital pooling that delivers outcomes unrealizable by individual enterprises (Robotics Australia Group 2022: 144–145).

Supporting SMEs and the wider defence industry sector is the recently established Trailblazer Universities Program, intended to drive the commercialization of academic and industry research. Under this, the Concept to Sovereign Capability (CSC) project, which encompasses defence AI, will involve the University of Adelaide and the University of New South Wales working with industry to rapidly secure capital for both collaborative research projects and the commercialization of defence and dual-use technology successes. Over 80% of industry commitments to the CSC are from Australia-based SMEs (Savage 2022).

2.4 International Cooperation

DSTG is involved in defence AI R&D collaboration within the Five Eyes community (Australia, New Zealand, Canada, UK, and US) through The Technical Cooperation Program, and in specific bilateral collaborations with the UK and US. There is also growing interest in further developing S&T partnerships with Japan, the Republic of Korea, Singapore, India, and other countries in the Indo-Pacific region (Defence 2020b: 9).

In 2021, the existing AI collaboration with the US and UK was expanded under Pillar II of the AUKUS enhanced trilateral security partnership, with an initial focus on accelerating AI adoption and improving the resilience of autonomous and AI-enabled systems in contested environments (Department of Prime Minister 2022). By late 2023 this had led to incorporating AI into anti-submarine sonobuoy processing systems on the P-8A Maritime Patrol Aircraft that all three nations use, and to trialling AI algorithms and machine learning in the UK and Australia for force protection, precision targeting, and intelligence, surveillance and reconnaissance purposes (Defence 2023c).

Outside of the formal governmental processes, there are other linkages directly between countries and Australian AI companies. For example, the US Air Force’s (USAF) venture capital division, AFVentures, a part of the AFWerx technology development organization, has recently funded Australian AI company Curious Thing to reimagine the USAF recruitment process (Australian Trade 2022).

2.5 Important Defence AI Projects in Development

There are three important defence AI R&D projects underway, each of which illustrates aspects of current Australian thinking about defence AI and its R&D priorities.

2.5.1 ‘Loyal Wingman’ Airpower Teaming System

Boeing Australia has developed an Airpower Teaming System of which the most visible component is a jet-powered Uncrewed Air Vehicle (UAV) with fighter-like performance. Initially called Loyal Wingman and now designated the MQ-28A Ghost Bat, this UAV uses cognitive AI to allow teaming with crewed fighter aircraft for air combat, reconnaissance, and surveillance missions in contested environments. The RAAF Chief at the time observed:

The true value [of the project] is…hidden inside the airframe of Loyal Wingman. And that is the development of the code and the algorithms which form the AI behaviors that will optimize its combat capability. The Loyal Wingman project is a pathfinder for the integration of autonomous systems and AI to create smart human-machine teams (Laird 2021).

The joint venture between the RAAF and Boeing started in 2017. A major step forward was a 2019–2020 experimentation project approved by TAS and run by Boeing Australia that embedded machine learning techniques on board four small test-bed UAVs allowing them to detect, decide, and act during missions. In the Ghost Bat the AI is embedded in the BAE Systems (BAES) Australia flight and mission control system.

The prototype flew in February 2021, with five more built since; a final assembly facility for production UAVs is being constructed at Wellcamp in Queensland. In May 2022 the government announced a further seven would be acquired for €278M, to enter service with the RAAF in 2024–25 (Airforce Technology 2023).

The seemingly high unit cost of each Ghost Bat is an issue. In early 2023, the RAAF Chief argued that the cost of advanced combat drones needs to fall before they become widely used by the air force: “the price point, and where it really looks interesting to us, is if we can get it to about a tenth of the cost of a manned fighter. So if we get to 10%, then I can start to build the mass and survivability of not just manned platforms, but the entire air combat system” (Packham 2023).

2.5.2 M113 Optionally Crewed Combat Vehicle

In 2019, BAES Australia converted two M113 AS4 Armoured Personnel Carriers into Optionally Crewed Combat Vehicles (OCCV). This project involved TAS and used the now-ceased Next Generation Technologies Fund. The vehicles were used to help the Australian Army’s Robotic and Autonomous Systems Implementation and Coordination Office (RICO) better understand employing autonomy on the battlefield and the implementation of the 2018 Robotics and Autonomous Systems Strategy. Natalie Waldie, BAES Program Manager Technology Development, observes: “The M113 [is] a convenient (…) experimental platform to demonstrate autonomy. Autonomy doesn’t achieve what it needs to unless you can effectively integrate it into your overall battle space CONOPS, and that’s really what we’re exploring with Army” (Levick 2020).

In 2020, a further 16 M113 AS4 vehicles were converted to OCCVs in a €5M project that also funded additional testing. In 2021, four OCCVs participated in Exercise Koolendong, a live-fire warfighting exercise in the Northern Territory, which tested the vehicles’ ability to operate in harsh combat environments. In mid-2023, some of the vehicles were fitted with remote-controlled, tele-operated weapon stations and tested during live-fire exercises in southern Australia. Three of the vehicles can be remotely operated from a single control vehicle up to 5 km away (Ferguson 2023).

The M113s use the BAE Systems Vehicle Management System (VMS), developed over the last two decades at the company’s Red Ochre LABS in Adelaide. The VMS incorporates AI and is derived from the company’s domain-agnostic autonomy technologies that have been used on several other programs, including the Ghost Bat.

2.5.3 Bluebottle Uncrewed Surface Vessel

The OCIUS Technology Bluebottle is an uncrewed, long-duration, autonomous, 5kt surface vessel that operates on solar, wave and wind power, and can carry a payload of some 300 kg including thin line sonar arrays, radar, 360-degree cameras, an automatic identification system and other sensors. Bluebottle incorporates AI neural networks, edge computing processing of sensor signals, low bandwidth communication links and a “team” based software architecture where peer vessels independently maneuver to achieve the group’s assigned common goals, such as making an interception.

Initially funded in 2015, the prototype began tests at sea in 2017. Another four Bruce-class Bluebottles were built to allow trialling of Bluebottle teams in southern Australian waters. This experimentation then led to design improvements incorporated into five Beth-class Bluebottles acquired between 2020 and 2023 (Naval Technology 2023).

In 2021, four Bluebottles operated as an intelligent network patrolling Australia’s northern Indian Ocean waters, fitted with a payload able to detect unauthorized vessels, alert a shore-based command centre and then approach the intruder for detailed investigation. In 2023, at a major autonomous systems exercise in southern Australian waters, Bluebottles fitted with thin-line towed array sonars worked with vessels of the US Navy’s Unmanned Surface Division One to track and isolate an autonomous underwater vehicle simulating a crewed submarine. The Bluebottles then worked in collaboration with a C2 Robotics Speartooth (an Australian-developed uncrewed underwater vehicle) to cue a Navy MH-60R Romeo helicopter to further prosecute the target (Felton 2023).

2.5.4 Assessment

The three projects differ considerably in their aims, although in each AI and human-machine teaming are at the core of the capability. The Bluebottle is on the cusp of being operational as an Exclusive Economic Zone patrol system, with the Ghost Bat intended to lead directly to a near-term operational air combat capability. In contrast, the M113 OCCV is simply an experimental platform for technology and operational concept development purposes. However, the crewed M113 is obsolescent, and the Army still has some 480 available. If these were converted into OCCVs they would significantly enlarge Australia’s armoured capabilities. Being uncrewed, the OCCV could be used in combat operations in situations where a crewed M113 would not be risked.

In terms of exports, Ghost Bat would require US government approval. Boeing is already pitching some of the Ghost Bat technology for inclusion in emerging USAF uncrewed air vehicle programs. Similarly, BAES Australia exports of its VMS, used on Ghost Bat and the M113 OCCV, would probably require UK government approval. While the Bluebottle seems a simple sailboat, its technical sophistication may mean some variants are also subject to US approval.

3 Organizing Defence AI

Defence is deeply involved in numerous experimentation projects to better understand AI and the contribution it may make. Given the demise of a joint approach, this experimentation is, not unreasonably, organized around the three services.

Implied in this approach is that AI will not cause significant structural change within Defence but rather be simply absorbed as just another new technology. AI is conceived as being used to either enhance, augment, or replace existing capability, meaning the existing services will not be greatly impacted. Organizationally, Defence will apparently remain as it now is well into the future, at least in terms of AI. This replicates the approach being taken by Australia’s major allies and importantly allows interoperability to be more readily maintained than if there were major structural changes (Australian Defence Force 2020: 54–55, 60).

As part of the initial organizational steps, it is considered the acquisition of data storage and access capabilities must commence immediately. Such capabilities will be the foundation of AI used by Defence in the future. Moreover, building human confidence in AI will also begin, with trust built on relatively simple systems in preparation for the introduction into service later of more complex systems (Australian Defence Force 2020: 56).

The focus on experimentation has also led to a significant reorganization of the existing innovation pathway to better ensure a “good idea” moves expeditiously to in-service use. The ASCA process is still being developed and implemented meaning its successes or shortcomings are unlikely to be evidenced for some years. However, some tentative observations have been made.

ASCA will principally facilitate the adoption of core technologies developed elsewhere. Its design means that the entity will generally tend to neglect innovations that do not have a clear near-term pathway to acquisition. This means that AI developments are likely to get some funding, but the actual quantum will be meagre. Given this constraint, the desire to develop a sovereign AI capability able to optimize defence AI for national uses may be at odds with ASCA’s design and its focus on adopting others’ core technologies. Toby Walsh at the University of New South Wales’ AI institute warns that “adopters don’t get the rewards that creators do, and (…) adopters don’t get to choose what the technology does” (Walsh 2023).

ASCA relies heavily on exploiting commercial technology for defence purposes, not on bringing some wholly new breakthrough technology into use and gaining a distinct strategic edge. But this approach may not produce what Defence is looking for, as George Henneke recently argued:

We can develop an edge in dual-use technology by designing an acquisition system that runs faster than an adversary’s. But it will never be more than an edge. On the time horizon of most acquisition systems—months or years, not decades—that adversary can mimic and counter our capabilities. Excessive reliance on dual-use technology creates not a sustained strategic advantage, but an arms race. Over time, we will not outpace our competitors (Henneke 2023).

4 Funding Defence AI

Beyond the R&D monies discussed earlier, AI development funding will be provided by those major capital equipment acquisition projects that feature extensive use of AI and machine learning technologies. The latest force structure investment plan was issued in 2020 and included six relevant major projects: one Army, three Air Force, one information and cyber domain, and one Navy. Four projects fall within this decade: Air Force Teaming Air Vehicles, Integrated Undersea Surveillance System, Joint Air Battle Management System, and the Distributed Ground Station Australia. A new plan is expected in mid-2024 and may adjust some aspects.

4.1 Army

Under the new Future Autonomous Vehicles project, a fleet of uncrewed systems and vehicles, sufficient in size for up to brigade-level operations, will be acquired to support land force operations. This project will build on the M113 experimentation program and aims to enhance the war-fighting capability of the ADF while also protecting Australian personnel. The acquisition phase of the project is scheduled to run 2033–2040 and has been allocated a budget for planning purposes of €5–€7.5bn (Defence 2020a: 72, 77).

4.2 Air Force

The Ghost Bat/Loyal Wingman R&D program is envisaged as being brought into service through a major project titled ‘Teaming Air Vehicles.’ This project will involve the “acquisition of remotely piloted and/or autonomous combat aircraft, including teaming air vehicles, to complement existing aircraft and increase the capacity of the air combat fleet.” The project’s acquisition phase is scheduled to run between 2026–2040 with a currently allocated budget provision of €5–€7.4bn (Defence 2020a: 51, 57).

The Distributed Ground Station Australia acquisition project will run between 2024–2031 and cost an estimated €0.8–€1.2bn. This processing, exploitation and dissemination facility will be responsible for the analysis of data collected from Air Force intelligence, surveillance, and reconnaissance aircraft. Staff will be able to access national and open-source intelligence resources and use AI to rapidly fuse collected information to provide decision-makers with greatly enhanced near-real-time situational awareness of events (Defence 2020a: 57).

The Joint Air Battle Management System (JABMS) acquisition project, running from 2023–2031, has a budget provision of €1.2–€1.9bn. JABMS will provide deployed ADF forces with greater situational awareness of advanced air and missile threats and improve interoperability with allies (Defence 2020a: 57). The JABMS project recently selected Lockheed Martin Australia as the preferred tenderer.

Lockheed is working with Australian company Consilium Technology to examine modelling, simulation and AI technologies that can be rapidly combined into an open architecture framework. Consilium Technology is also exploring the use of machine learning to support future all-domain data transfer capabilities in contested warfighting environments (ADM 2022). Another company, Consunet, is also participating in the project, using AI to develop spectrum awareness and management tools and electromagnetic spectrum modelling (APDR 2022).

4.3 Information and Cyber Domain

Defence has responsibilities for some defensive and offensive cyber capabilities, and certain intelligence collection systems. The force structure plan states that “funding will be set aside to ensure Defence remains competitive in the future as emerging technologies, such as artificial intelligence, arise in this domain.” A new major acquisition project titled Emerging Technologies has been penciled in for 2033–2040 with a budget provision of €1.14–€1.7bn (Defence 2020a: 27, 31).

4.4 Navy

The Integrated Undersea Surveillance System acquisition project runs between 2025–2040 and has a budget provision of €3.38–€5bn. The project will bring into service an integrated undersea surveillance system and, in doing so, examine the utility of optionally crewed vessels, uncrewed surface systems and uncrewed undersea systems (Defence 2020a: 39, 45).

5 Fielding and Operating Defence AI

The organizations involved in Australian defence AI extend beyond the government department level, joint service bodies, dedicated science agencies, and universities. The individual services, Navy, Army, and Air Force, each have their own internal entities that consider and support emerging technologies that might enhance their discrete warfighting capabilities.

5.1 Navy

To guide its adoption of AI, the Navy has a three-part policy of engagement, design, and demonstration. First, Navy will engage widely across Defence, with defence industry and with allies. Second, Navy will use a concept-led design approach to architectures, mission management and common control systems. Third, Navy will generate opportunities to demonstrate emerging and developed AI capabilities to operational users. This will both help develop new AI systems and expedite the introduction of fit-for-purpose capabilities into naval service.

Within Navy, the Warfare Innovation Navy (WIN) branch, established in 2018, is Navy’s AI focal point both within Australia and internationally, including at the NATO Maritime Unmanned Systems Initiative meetings. WIN is located within Navy’s operational-level headquarters at Fleet Base East in Sydney and is currently facilitating an experimentation program to support force structure options development and capability improvements.

In terms of demonstration, the Autonomous Warrior (AW) series regularly displays, evaluates, and trials emerging AI capabilities at various Technology Readiness Levels (TRLs). AW provides an opportunity to increase mutual understanding between industry and the Navy in a realistic environment while fostering collaborative relationships. AW involves four events conducted annually, with each event having a specified operational focus and undertaken at different exercise locations, depending on the nature of the activity and convenience for industry (RAS-AI Strategy 2020: 44).

5.2 Army

In 2020, the Australian Army set up an organization somewhat like WIN. A difference, however, is that the Robotic and Autonomous Systems Implementation and Coordination Office (RICO) sits within the Future Land Warfare Branch of the Land Capability Division at Army’s strategic-level headquarters in Canberra. RICO’s role involves the exploration, coordination, and concept development of disruptive technology, specifically including AI.

In September 2023, the government announced a significant restructure of the Army which included re-rolling the 1st Armoured Regiment into an innovation and experimentation unit to accelerate Army’s adoption of new and emerging technology (Defence 2023a).

5.3 Air Force

In 2015, the Air Force set up Plan Jericho to begin building an ecosystem in which good ideas, whether from within the Air Force or from outside, could answer military problems by translating leading-edge knowledge and thinking into new defence capability at speed. The intent was for partnerships with industry, academia, and broader society to support, inform, and enable the rapid exploration and realization of the ideas of researchers, innovators, and entrepreneurs. Since then, the concept has been sharpened and re-oriented to provide the infrastructure and services needed to make it easier to build partnerships across organizations and to access the necessary expertise and resources.

Jericho has three main teams. The Jericho Edge team initially engages with partners to identify and understand opportunities. The Edge team then brings in Jericho Labs to assemble communities of interest across large organizations, start-ups, small companies, and universities to discover, test, and prototype the identified opportunities. The separate Jericho Analytics team then tests the new ideas using net assessment, wargaming and red teaming (Air Force 2021).

Jericho’s present thrust is in augmented intelligence, a concept developed from ideas of human-machine teaming. It is seen as combining the creativity and flexibility of humans with the tempo, precision, and mass of machines. The intent is to generate human-inspired dilemmas at machine speeds to cognitively overwhelm adversaries.

6 Training for Defence AI

Within Defence, some organizations have begun thinking about the training issues that the widespread introduction of AI will bring. Such training is situated within the national approach. In Defence, the single services are responsible for raising, training, and sustaining their assigned force elements. Navy is the furthest advanced in considering the impact of AI, with Army having given training matters some initial thought. Air Force has not publicly released its thinking on training.

6.1 Joint Workforce Perspective

The relevant ADF joint concept publication only briefly mentions training. Importantly though, it notes that with the introduction of AI, military training will involve not just humans but also the AI itself. AI systems will need to be trained in a manner similar to their human operators, by exposing them to operationally realistic scenarios so that the AI can develop knowledge bases from the data collected during these events. This training will occur not just at the individual AI system level but also at the formation level, where multiple AI systems will interact with multiple human-machine teams (Australian Defence Force 2020: 34). AI machine learning is where most attention is currently focused. In the future, however, collective training involving both humans and AI will become increasingly important. Such events will also allow humans who are teamed with AI, or who work with AI systems, to gain confidence in their reliability and dependability.

6.2 Navy Workforce Perspective

Navy presently leverages expertise from industry and academia to deliver formal AI training and on-the-job upskilling. The latter “learning by doing” approach is increasingly important in growing Navy’s AI workforce. Navy plans to continue to build upon these industry and academic relationships but sees the need to start Navy-wide AI education and training in three streams: specialists, generalists, and integration with industry.

  • Specialists

In the near-medium term, Navy considers it will need to rapidly develop new specialist skills, either by instituting new employment categories or by expanding and re-defining existing ones. The more important new specialist categories are technicians, data specialists, AI system operators, and test and evaluation staff.

  • Generalists

Navy’s total workforce will need to incorporate foundational AI technology literacy. All commanders, operators and decision-makers will need to have a foundational understanding of AI. This is likely to include basic skills in machine learning, as well as an understanding of teaming and social decision-making. Navy believes that introductory AI courses should be introduced into ab initio training as soon as possible.

  • Integration with Industry

The speed of AI development will not allow Navy to maintain in-house all the AI skills required. Industry will be required to house, maintain, and deliver AI systems and will also increasingly play a role in analysis and decision support to deployed forces. These elements of workforce transformation are not mutually exclusive. While by 2040 Navy’s whole workforce will require foundational AI knowledge, the Navy will still require specialists with deep knowledge and training, as well as the ongoing delivery of specialist training and education by industry and academia. For some categories, such as Combat Systems Operators, the change will be evolutionary. For other categories, the introduction of AI will be disruptive and require close collaboration between AI technologists, category managers and workforce planners (RAS-AI Strategy 2020: 19).

6.3 Army Workforce Perspectives

Army is taking a more philosophical approach than Navy. This partly reflects that the changes AI brings to Army education and training may be far-reaching. Army personnel often perceive their service as being personnel rather than equipment oriented. An oft-used saying declares that ‘Army equips the man, but Navy and Air Force man the equipment.’ This logic suggests Navy and Air Force would find it easier to adjust their existing education and training approaches, as AI is simply another digital technology to absorb. There is an emerging belief that, in the age of AI, Army may have to elevate the importance of technical training to a level comparable with the other services.

Looking to the future, with AI having already proliferated across the commercial domain, it is likely to similarly proliferate across Army. It may be used in intelligence analysis, strategic decision support, operational planning, command and control, logistics, and weapon systems. To use AI, however, organizations will need their personnel to be well informed, both to shape the application of AI and to provide quality assurance (Ryan 2018).

Armies will need more than just deep technical experts in the development of algorithms and the design of AI for military systems. Army’s future workforce using AI should be a mix of those with a basic understanding, more informed users, and specialists with advanced skills. Over the coming years, at almost every rank level, Army personnel will require basic literacy in AI, including knowledge of its application, how to provide a level of assurance and quality control, and how to optimally combine it with human intelligence. Army’s military education and training system does not currently provide technological literacy for all its personnel. However, it is the coupling of technical experts with a heightened technological literacy across the entire force that will allow future military organizations to fully exploit the benefits of artificial intelligence.

6.4 Enhanced Training Using AI

AI may change the way militaries educate their personnel. AI tutoring systems can already approximate one-on-one human tutoring, and this concept could be further developed into an AI lifelong-learning partner accompanying individuals from entry into the military and throughout their careers. In a similar manner, military instructors may have their own teaching assistants able to communicate with their students’ AI partners to interpret individual students’ profiles and provide suggestions on tailored learning.

AI may also help develop the cognitive skills that underpin higher-level operational and strategic planning in teams. This could be done by offering more authentic environments for collaborative learning and large-scale wargame simulations, providing more intelligent adversary systems to challenge students, and using purpose-designed algorithms and curriculum data to amalgamate lessons from previous activities (Ryan 2018).

Field training using AI offers benefits but also carries warnings. Training will need to evolve to properly incorporate human-machine teams, but these teams should not become overly dependent, complacent, or uncritical in using the technology. Training scenarios must develop users who ‘trust but verify’, that is, who have confidence in the AI without accepting it uncritically. In this regard, data powers AI machine learning. Capturing and managing the data generated in training environments will be important for refining machine learning and improving systems (RAS-AI Strategy 2020: 19).

7 Conclusion

The ADF’s conception of AI’s utility for defence purposes is fairly conventional. AI is mainly conceived as being used in human-machine teams to improve efficiency, increase mass, and achieve decision superiority while decreasing risk to personnel. Even so, middle power Australia is following a relatively active AI development program with a well-defined innovation pathway and numerous experimentation projects underway.

There is also a reasonable level of force structure ambition. The latest major equipment acquisition plan, covering the next 10–20 years, sets out six defence AI-relevant projects: one Navy, one Army, three Air Force, and one information and cyber domain. Even in this decade, the AI-associated projects are quite substantial and include Air Force Teaming Air Vehicles (est. cost €6.15bn), Integrated Undersea Surveillance System (est. €4.19bn), Joint Air Battle Management System (est. €1.55bn) and Distributed Ground Station Australia (est. €1.01bn).
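These single-figure estimates appear to be approximately the midpoints of the budget ranges quoted in the funding section: two match exactly, and the other two differ only marginally, presumably reflecting rounding in currency conversion. A minimal check, assuming the estimates are simple range midpoints (the figures are those given in this chapter; the midpoint relationship itself is an inference, not something the source states):

```python
# Compare each project's quoted point estimate (EUR bn) with the
# midpoint of its budget range from the funding section.
projects = {
    # name: ((range low, range high), quoted estimate)
    "Teaming Air Vehicles": ((5.0, 7.4), 6.15),
    "Integrated Undersea Surveillance System": ((3.38, 5.0), 4.19),
    "Joint Air Battle Management System": ((1.2, 1.9), 1.55),
    "Distributed Ground Station Australia": ((0.8, 1.2), 1.01),
}

for name, ((lo, hi), quoted) in projects.items():
    midpoint = round((lo + hi) / 2, 2)
    delta = round(quoted - midpoint, 2)
    print(f"{name}: midpoint €{midpoint}bn, quoted €{quoted}bn, delta €{delta}bn")
```

The Integrated Undersea Surveillance System and JABMS figures match their range midpoints exactly; the other two differ by €0.05bn or less.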

Associated with this investment is a high expectation of considerable involvement by Australian AI companies. Indeed, AI has recently been designated a Sovereign Industrial Capability Priority. The Australian defence AI sector, though, mainly comprises multiple SMEs that individually lack the scale necessary for major equipment projects and would need to partner with large prime contractors to achieve the requisite industrial heft. There are also wider national concerns about whether Australia will have a large enough AI workforce over the next decade to handle commercial demands, even without Defence drawing people away for its own requirements. Both factors suggest Defence could end up buying its AI offshore and principally relying on long-term foreign support, as it does in many other major equipment projects.

An alternative to simple offshore purchases might be funding collaborative AI developments with the US military. A harbinger of this may be the Australian Navy’s new experimentation program, in which a recently decommissioned patrol boat is being fitted with Austal-developed autonomous vessel technology featuring AI. Austal is simultaneously involved in a much larger US Navy program fitting its system to one of the company’s Expeditionary Fast Transports, the USNS Apalachicola (Austal 2023). In this case, Austal is an Australian company with a large US footprint and so able to work collaboratively within both countries. There is a strong possibility, though, that the Australian Navy, simply because of economies of scale, will adopt the US Navy variant of Austal’s work rather than a uniquely Australian version.

The outlier in this option might be the Boeing Australia Ghost Bat program, which may see AI-enabled, loyal wingman-type uncrewed air vehicles in limited operational service with the ADF in 2024, and thus before the US services. The US Air Force is running several experimentation programs aiming to develop suitable technologies, some of which also involve the Boeing parent company. There is a high likelihood of cross-fertilization between the Australian and US programs. This raises the tantalizing possibility of a two-nation support system of a scale that would allow the Australian companies involved to grow to a size suitable for the long-term sustainment of the relevant ADF AI capabilities. This possibility might be a one-off, however, as there seems to be no other similarly significant Australian AI program.

Australia collaborating with the US on AI, or buying US AI products, can ensure interoperability. However, in seeking such an objective there is always a tension between being interoperable with specific individual US forces or instead across the whole of the ADF. This tension is likely to remain as AI enters service, especially given AI’s demands for compatible big data.

Interoperability and domestic industry support matters are important issues that, to varying degrees, have influenced Australian government decisions on major capital equipment acquisitions for many decades. These traditional concerns, though, may need to be counter-balanced by emerging geostrategic uncertainties and ADF capability considerations.

Australia is now becoming worried about the possibility of conflict in the Indo-Pacific region given rising Chinese assertiveness coupled with the example of Russia’s invasion of Ukraine. To offset the numerically large military forces of the more belligerent Indo-Pacific states, some advocate developing a higher quality, technologically superior ADF able to help deter regional adventurism.

As a general-purpose technology, AI can potentially provide a boost across the whole ADF, not just one or two elements within it. Such a vision, though, is not what is being pursued. Current AI plans will most likely lead to evolutionary improvements, not revolutionary changes. AI is conceived as being used to either enhance, augment, or replace existing capability; this approach means the future ADF will do things better, but not necessarily be able to do better things.

A revolution in Australian military affairs seems unlikely under present schemes. For this, defence AI would need to be reconceptualized as a disruptive technology rather than a sustaining innovation. Embracing a disruptive approach would be intellectually demanding and, in suggesting the adoption of unproven force structures, could involve taking strategic risks. These are reasonable concerns that would need careful management.

Against such worries, though, must be balanced the risk of China’s People’s Liberation Army successfully fielding disruptive AI and suddenly becoming qualitatively and quantitatively superior to other Indo-Pacific militaries. The business of making war inherently involves balancing risks. Given the stakes, it might be time for Australia to embrace disruptive AI rather than playing safe with a sustaining innovation approach that simply replicates current force structure thinking. The strategically intelligent choice might be doubling down on artificial intelligence.