Since the publication of a Theoretical Issues in Ergonomics Science special issue on “Human-Autonomy Teaming in Military Settings” in 2018 (Chen 2018), there have been tremendous advancements in artificial intelligence (AI) and autonomous systems technologies. In the military domain, these capabilities are increasingly implemented in systems that Warfighters use during missions, both on and off the battlefield, aboard manned vehicles, and in the management of unmanned systems. To deploy these systems effectively, the military must critically consider human-autonomy teaming (HAT). Indeed, many intelligent systems can conduct sophisticated planning, execute sense-observe-orient-decide-act loops, and work side by side (literally or figuratively) with Warfighters; cognitive capabilities, formerly the domain of humans, are increasingly incorporated into highly automated and autonomous systems (Schulte et al. 2016). This trend shifts the human–machine relationship from a supervisory hierarchy to a partner-like collaboration: intelligent systems increasingly play the role of a peer teammate rather than that of a subordinate tool. However, military operations that embed intelligent systems in human socio-technical environments face issues such as potential lethality, responsibility and controllability, high stress and workload, and time pressure. More often than not, these military HAT issues pose unique challenges that have not been addressed in civilian applications.

This special issue (SI) tackles military HAT issues on multiple fronts: theoretical frameworks and considerations related to team processes and performance (e.g., communications, trust, and workload), Warfighter-machine interaction and interface design, and simulation-based experimentation. The contexts examined in the studies span a wide range of military operations: humans working with small robotic systems, cyber analysis, human interaction with virtual agents, management of multiple heterogeneous unmanned systems, human-swarm interaction, and helicopter pilots working with an adaptive agent in the cockpit. While multiple articles are authored by U.S. Department of Defense researchers, there are also contributions from (or supported by) military agencies in Australia, Canada, and Germany.

Hou et al. present a conceptual framework for developing trustworthy agents and effective HAT. The model, IMPACTS, comprises seven system design principles: intention, measurability, performance, adaptivity, communication, transparency, and security. An actual system, the Authority Pathway for Weapon Engagement, illustrates how the framework can be applied to the design of an intelligent adaptive decision aid. The authors also describe a field test during a multinational military exercise, in which feedback about the decision aid was obtained from participating subject matter experts.

Baker et al. review eleven team communication assessment techniques that are particularly relevant to HAT. The authors provide examples of efforts to apply these techniques in military HAT contexts and discuss issues associated with team cohesion, trust, and other team performance outcomes in those settings. Based on this extensive review, Baker et al. identify four critical areas for future research, offering useful guidance for researchers interested in team communication.

Lyons and Wynne examine the efficacy of the Autonomous Agent Teammate-likeness (AAT) scales for assessing six facets of the perceived teammate-likeness of machine agents: agency, benevolence, communication, interdependence, synchrony, and team orientation. In an online experiment, participants read a brief narrative about a new technology with either high or low teaming characteristics. The results demonstrate that the AAT scales can reliably assess perceptions of agents’ teammate-likeness.
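For readers new to multi-item scale construction, the sketch below shows how subscale reliability is commonly checked with Cronbach’s alpha. The three-item “agency” subscale and the response data here are hypothetical illustrations, not Lyons and Wynne’s actual items or data.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a set of scale items.

    `items` is a list of k lists, one per item, each holding the
    responses of the same n participants (e.g., 1-7 Likert ratings).
    """
    k = len(items)
    item_vars = sum(pvariance(col) for col in items)
    totals = [sum(resp) for resp in zip(*items)]  # per-participant total score
    return (k / (k - 1)) * (1 - item_vars / pvariance(totals))

# Hypothetical responses to three "agency" items from five participants
agency_items = [
    [5, 6, 4, 7, 5],  # item 1
    [6, 6, 5, 7, 4],  # item 2
    [5, 7, 4, 6, 5],  # item 3
]
print(f"agency subscale alpha = {cronbach_alpha(agency_items):.2f}")
```

Values of roughly .70 or higher are conventionally taken to indicate acceptable internal consistency for a subscale.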

Holder and Wang discuss user interface design requirements for an AI agent to work effectively and transparently with human cyber analysts. Their design methodology captures the information requirements for supporting the analysts’ situation awareness. The intelligent agent, with explainable AI capabilities embedded in the system, can explain threats and vulnerabilities and recommend courses of action to the human analyst, much as a junior cyber analyst would to a senior analyst. The authors also discuss ways to improve the user interface design based on interviews with subject matter experts.

Calhoun et al. describe their effort to apply a set of seven “Guiding Questions for Human-Autonomy Teaming,” originally developed for military transportation planning tool design, to a prototype control station for managing multiple heterogeneous unmanned vehicles (“IMPACT”). The authors describe IMPACT’s Warfighter-machine interfaces in terms of the seven questions (grouped under three categories: situation driver; visualizations and control mechanisms; solution generation and presentation), which then serve as a guide for identifying requirements to improve the interface designs for future missions.

Debie et al. propose an adaptive recommender system for human-swarm interaction based on a bio-inspired shepherding control method. The recommender adjusts the frequency of its communications with the human operator according to the operator’s workload. It was evaluated in simulation-based experiments involving aerial reconnaissance scenarios, and the results demonstrate the efficacy of the proposed optimization algorithm.
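To make the adaptivity idea concrete, the following minimal sketch throttles how often a recommender speaks to the operator as estimated workload rises. The interval mapping, parameter values, and workload estimates are invented for illustration; this is not Debie et al.’s algorithm.

```python
# Illustrative sketch only: a workload-adaptive message scheduler in the
# spirit of an adaptive recommender. All numbers are hypothetical.

def recommendation_interval(workload: float,
                            base_interval: float = 5.0,
                            max_interval: float = 30.0) -> float:
    """Map estimated workload in [0, 1] to seconds between recommendations.

    Low workload -> frequent guidance; high workload -> sparser messages
    so the operator is not overloaded while herding the swarm.
    """
    workload = min(max(workload, 0.0), 1.0)  # clamp to [0, 1]
    return base_interval + workload * (max_interval - base_interval)

class AdaptiveRecommender:
    def __init__(self):
        self.last_sent = float("-inf")

    def maybe_recommend(self, now: float, workload: float, advice: str) -> bool:
        """Send advice only if enough time has elapsed for the current workload."""
        if now - self.last_sent >= recommendation_interval(workload):
            print(f"[t={now:5.1f}s, workload={workload:.2f}] {advice}")
            self.last_sent = now
            return True
        return False

rec = AdaptiveRecommender()
for t, wl in [(0, 0.2), (6, 0.2), (12, 0.9), (20, 0.9), (40, 0.9)]:
    rec.maybe_recommend(t, wl, "steer shepherd agent toward stray swarm members")
```

Under this toy mapping, the same advice that arrives every few seconds at low workload is withheld for up to half a minute when the operator is saturated.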

Brand and Schulte present their approach to workload-adaptive pilot assistance in manned–unmanned teaming (MUM-T) missions in the military aviation domain. They describe a functional chain comprising real-time pilot observation and activity determination, estimation of instantaneous and projected mental workload, and both reactive and proactive interventions by a cognitive cockpit assistant system. The article reports experimental results from a campaign with military pilots using a virtual mission and cockpit simulation. Brand and Schulte show that pilots’ mental states can be operationalized and used to adapt intelligent cockpit assistance in highly complex military multi-vehicle missions.
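The functional chain can be pictured as a simple sense-estimate-intervene loop. The sketch below is a toy rendering of that idea, assuming hypothetical activity labels, workload estimates, and intervention rules; it is not Brand and Schulte’s implementation.

```python
# Minimal sketch of a workload-adaptive assistance loop. The activity
# labels, thresholds, and intervention rules are invented for illustration.

from dataclasses import dataclass

@dataclass
class PilotState:
    activity: str              # determined activity, e.g., "radio", "uav_tasking"
    current_workload: float    # estimated instantaneous workload, 0..1
    projected_workload: float  # short-horizon forecast, 0..1

def choose_intervention(state: PilotState) -> str:
    """Pick an assistant behavior from estimated and projected workload."""
    if state.current_workload > 0.8:
        # Reactive: offload an active task right now.
        return f"reactive: take over '{state.activity}' subtask"
    if state.projected_workload > 0.8:
        # Proactive: reshape upcoming tasks before overload occurs.
        return "proactive: re-schedule upcoming UAV tasking"
    return "none: continue monitoring"

for s in [PilotState("flying", 0.4, 0.5),
          PilotState("uav_tasking", 0.6, 0.9),
          PilotState("radio", 0.9, 0.9)]:
    print(s.activity, "->", choose_intervention(s))
```

The key design point the sketch preserves is the distinction between reacting to present overload and acting proactively on a projected one.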

Yu et al. investigate the utility of physiological sensing (brain activity and heart rate) for distinguishing the workload levels of unmanned aerial vehicle (UAV) operators conducting remote targeting tasks. Military pilot trainees participated in a simulation-based experiment in which they played the role of sensor operator and performed target search tasks of varying difficulty. Based on the experimental results, the authors recommend a hybrid sensing approach, as each modality has its own strengths and weaknesses in distinguishing operator workload.
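The hybrid approach amounts to fusing workload evidence across modalities rather than relying on either sensor alone. The sketch below illustrates one simple late-fusion scheme; the feature mappings, weights, and threshold are invented placeholders, not Yu et al.’s models or results.

```python
# Illustrative sketch of hybrid physiological sensing via weighted
# late fusion of per-modality workload scores. All numbers are hypothetical.

def eeg_workload_score(theta_alpha_ratio: float) -> float:
    """Toy mapping from an EEG band-power ratio to a 0..1 workload score."""
    return min(max((theta_alpha_ratio - 0.5) / 1.5, 0.0), 1.0)

def cardiac_workload_score(heart_rate: float, resting_hr: float = 65.0) -> float:
    """Toy mapping from heart-rate elevation to a 0..1 workload score."""
    return min(max((heart_rate - resting_hr) / 40.0, 0.0), 1.0)

def fused_workload(theta_alpha_ratio: float, heart_rate: float,
                   w_eeg: float = 0.6, w_hr: float = 0.4) -> float:
    """Weighted late fusion: each modality covers the other's weak spots."""
    return (w_eeg * eeg_workload_score(theta_alpha_ratio)
            + w_hr * cardiac_workload_score(heart_rate))

score = fused_workload(theta_alpha_ratio=1.6, heart_rate=92.0)
print(f"fused workload = {score:.2f} ->", "high" if score > 0.5 else "low")
```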

Taken together, the articles in this special issue provide a small yet representative view of the state of the art in military HAT. While many key issues are discussed in these articles, comprehensive coverage of HAT research requires examining a much broader range of topics, including (a) conceptual views on novel approaches to, and architectures for, human-autonomy workshare and cooperation; (b) designs of AI-based multi-agent systems; (c) architectures, methods, and algorithms; and (d) human-systems integration and human-in-the-loop experimentation for validation and proof of concept. Additionally, future HAT research can greatly benefit from further investigation of real-time mutual understanding between human and machine agents and of dynamic human-agent functional allocation policies. Beyond these cognitive ergonomics efforts, hardware and software ergonomics innovations, such as speech technologies and brain-computer interfaces, will foster human-like and even effortless human-agent interaction and communication. In AI research, we can expect further advances in cognitive capabilities, learning, explainability, and secure implementation in safety-critical systems. Indeed, as AI-based and autonomous agents are integrated into an ever-wider spectrum of military and civilian systems, their responsible introduction into society will be crucial for public appreciation and acceptance. Ethics and security-related topics are not addressed in this special issue; interested readers can consult a recent proposal by the European Commission (2021) for a legal framework and a report by the U.S. National Security Commission on Artificial Intelligence (2021).