
1 Introduction

Today, higher cognitive functions (e.g., perception, planning, and decision-making) that were traditionally the exclusive domain of humans are becoming an integral part of automated functions. Over the last one or two decades, the term “autonomous system” has been widely used to describe complex automated systems that work largely independently of a human operator. However, the more capable automation has become, the more essential the challenging issue of human-system functional allocation and integration has turned out to be [1]. We share the concern of Bradshaw et al. [2] that an undifferentiated use of the term “autonomy” and the proliferation of automation can lead to unfruitful discussions and oddly defined development programs. We see the need for a conceptual framework unifying the nomenclature and description of systems in which human beings interact with complex automation. Therefore, in this article, we attempt to identify and formally describe common ground among researchers in this field. Despite our concerns, we retain the term “Human-Autonomy Teaming” (HAT) to describe systems in which humans work with highly automated agents. Where those agents carry attributes like “autonomous” or “intelligent”, we assign them the unified term cognitive agent in this nomenclature. We establish a procedure and a common language to describe concepts of HAT. Our goal is to contribute to a more objective debate, to facilitate effective communication between researchers, and to provide guidance to practitioners.

Our approach is twofold. Firstly, we suggest a common symbolic language, as well as a procedure to follow when describing systems, system requirements, and top-level system designs. Both place a stronger focus on human-automation work share and integration aspects than traditional systems engineering practices and tools (e.g., the Unified Modeling Language, UML). We borrow the notion of design patterns from the domain of systems and software engineering and adapt it for use in the human factors engineering of highly automated dynamic systems. Secondly, we encourage the analysis of current HAT research and development approaches in order to identify solutions and best practices from empirical studies. This article shall also provide advice to designers of HAT systems on how to approach the design process in a strictly top-down manner.

1.1 Design Patterns in Engineering

Christopher Alexander proposed that every building and town is composed of patterns [3]. The patterns result from forces and processes that combine such that towns or buildings develop in particular ways. By developing a language of these patterns, Alexander et al. were able to describe the forces that produced the patterns as well as the consequences of those patterns [4]. Pattern descriptions also specified their relationships to other patterns, so that one could create a network of patterns to describe a project. Finally, Alexander et al. set out processes by which the patterns could be used. They envisioned the descriptions of forces and consequences as useful in making arguments to decision-making bodies. They also described how one could specify a project from the top down, using patterns to make decisions about the ultimate design.

Design patterns and pattern languages became a popular tool for software engineering in the 1990s, a development usually traced to Gamma et al. in 1995 [5]. Their fundamental value is that patterns describe a recurring problem and the core of a solution to that class of problems. Gamma et al. tried to be more explicit than Alexander concerning the components of a pattern, listing four critical elements: the pattern name, the description of the problem, the description of the solution, and the consequences. A more detailed template was created, and their book [5], like the second volume of Alexander’s work [4], provided a catalog of discovered patterns, each entry describing a suitable problem space, giving the solution template, listing positive and negative consequences, and providing implementation advice. The popularity of software patterns led to further efforts to catalog patterns for software analysis [6] and design [7], and data models [7]. Negative patterns often found in systems or organizations were described as anti-patterns [8], together with discussions of how to correct the problems. Conferences were held to capture the experience of practitioners in program design, as a counterweight to scientific activities that focused on new approaches [9].

Ultimately, a pattern literature for HAT should accomplish the same goals that Alexander initially set out in the domain of architecture and land use. Patterns serve to communicate generalized solutions to problems faced by engineers. Alexander believed that one could look through a catalog of patterns, identify the key features and forces of a project, and select the appropriate starting points. Then, using the linkages provided among the patterns in his catalog, one could bring in other appropriate patterns until one had a description of the solution in the form of a language of patterns. In our domain, the forces will include human performance issues, limitations of autonomy, communication issues, and many other critical factors. If our patterns can describe forces, features, consequences, and linkages to other patterns, pattern languages as solution descriptions may be possible within this domain as well.

1.2 Design Patterns in Human Factors, Ergonomics, and HAT

In the field of human factors and ergonomics, the description of design patterns has also become fashionable recently. Borchers [10] was one of the first to describe the linkage between human-computer interaction and design patterns; Kruschitz and Hitz [11] provide a good overview. Kahn et al. [12] looked at design patterns for sociality in human-robot interaction.

Sheridan’s well-known Levels-of-Automation (LoA) scale (e.g., [13]) is one of the early design patterns in human-automation interaction. System designers use this scale, or one of its derivatives (e.g., [14, 15]), very successfully in many different application fields, sometimes without even knowing it. In that sense, we see these kinds of Management-by-Consent/Management-by-Exception-based LoA scales as a collection of frequently found design patterns. They apply to supervisory control relationships [13] between human and machine, and in this use case they provide an excellent source for deciding how to design the interaction for specific functions. Scales of levels of autonomy (e.g., [16, 17]), in contrast, refer to design options for the scope of a full system; their focus is predominantly on describing the independence of the system from human intervention.

Juziuk [18] provides a comprehensive listing and overview of efforts to document design patterns in multi-agent systems. This brings us closer to what we need for HAT, in that cognitive agents and their relationships to each other are considered.

In the following sections, we create a generalized framework and methodology for describing a wider variety of configurations at different scopes. We present an approach, based on design patterns, to derive system requirements and to describe top-level system designs for systems involving HAT.

2 Basic Concepts

The traditional systems engineering view focuses solely on the formulation of requirements and the design of the technical functions of a system. The human operator appears only as an actor, usually located outside the system boundary. This approach is reasonable as long as the automation is relatively simple, in the sense that it performs specific, clear-cut part-tasks. In that case, the relationship between the (technical) system and the human user can be described well through use cases calling for a certain user-system interaction.

In this article, in contrast, we want to account for the following trends: (1) the automation in HAT will become much more capable, (2) the work share and interaction between the user and the system will be much less stable (e.g., adaptive automation [19]), and (3) the task performance of human and automation will be highly interdependent at the cognitive level. Hollnagel and Woods [20] speak of joint cognitive systems in this context. Consequently, our approach focuses on two aspects: (a) the description of the purpose for which we want to design a HAT system, prior to the actual design, and (b) the incorporation of the human user within the process and system boundary.

2.1 The Work Process

The process of meaningful, goal-oriented co-action of humans (e.g., operators) and machines (e.g., unmanned vehicles (UVs) with automation), including artificial cognitive agents, shall be called a Work Process (WProc) (see Fig. 1a). A Work Objective (WObj), i.e., the mission or purpose of the work, defines and initiates the WProc. The WObj usually comes as an instruction, order, or command (e.g., a UV mission assignment). The proper definition of the WObj has high priority and is most critical for the definition of the system boundaries and the design. The WProc is embedded in a Work Environment (WEnv). WEnv inputs to the WProc are the physical Environment (Env) (e.g., atmosphere, threats), material and energy Supplies (Sup) (e.g., fuel, weapons), and Information (Inf) (e.g., ATC clearances, airspace regulations). Finally, the WProc generates certain physical or conceptual effects on the environment, i.e., the Work Process Output (WPOut) (e.g., target photo/video, destruction of a target, provision of information to other WProcs).

Fig. 1. (a) Work process; (b) hierarchical work processes; (c) networked work processes

The WProc itself imposes meaningful actions upon a particular Work Object (WO) (e.g., a target to be destroyed, materials to be transported, premises to be secured) that is part of the WEnv. The WEnv may also host other WProcs. If one WProc interacts with other WProcs, this can be organized as a hierarchical structure, i.e., a superior WProc generates the WObj for one or many subordinate WProcs and monitors their results, thereby forming supervisory loops (see Fig. 1b). Alternatively, the WProcs might be organized as a networked structure, i.e., parallel WProcs depend on each other in such a way that their WPOuts cause environmental changes relevant to other WProcs, or provide supplies or information to them, thereby forming a more or less tight mesh of interdependent WProcs, each following its individual WObj (see Fig. 1c).
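To make this vocabulary concrete, the following sketch models a WProc with its inputs, outputs, and hierarchical and networked links as plain Python data structures. This is our own illustration, not part of the framework’s formal definition; all class, field, and method names are assumptions introduced here.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class WorkProcess:
    """WProc: goal-oriented co-action of humans and machines."""
    name: str
    wobj: str                  # Work Objective: defines and initiates the WProc
    work_object: str = ""      # WO: the entity the WProc acts upon
    env: List[str] = field(default_factory=list)    # physical Environment inputs
    sup: List[str] = field(default_factory=list)    # Supplies (fuel, weapons, ...)
    inf: List[str] = field(default_factory=list)    # Information (clearances, ...)
    wpout: List[str] = field(default_factory=list)  # Work Process Output
    subordinates: List["WorkProcess"] = field(default_factory=list)  # Fig. 1b
    peers: List["WorkProcess"] = field(default_factory=list)         # Fig. 1c

    def delegate(self, sub: "WorkProcess") -> None:
        """Form a supervisory loop: this WProc provides the WObj for a subordinate."""
        self.subordinates.append(sub)

    def link(self, peer: "WorkProcess") -> None:
        """Form a networked dependency: parallel WProcs exchange outputs or information."""
        self.peers.append(peer)
        peer.peers.append(self)
```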

A proper work process design should be the starting point for every development of a system involving HAT. At this stage, “it is more important to understand what the [… system] does […], than to explain how it does it” [20]. However, defining the WObj, the system boundary, and the interfaces of the WProc one wants to design for is a hard task, and the result will heavily influence the system design. From our experience in engineering HAT systems, we suggest the guidelines listed in Table 1.

Table 1. Guidelines for Work Process design

Figure 2 shows an example of a common WProc design taken from civil aviation. The example consists of three individual WProcs, including WProc: Airline Flight, the process we want to design a system for. This process changes the WEnv by transporting passengers (WO: PAX). It is in a hierarchically subordinate relationship to WProc: Airline Dispatching, which provides WObj: Flight and supplementary information, and to which flight and aircraft status information is fed back. With WProc: Air Traffic Control, a networked relationship is established, in which radar surveillance takes place and requests and clearances are exchanged.

Fig. 2. Example of a common Work Process design with hierarchical and networked structures
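Continuing the WorkProcess sketch above, the three WProcs of Fig. 2 and their relationships might be encoded as follows; all strings are illustrative placeholders, not part of the original example.

```python
# Continuing the WorkProcess sketch from Sect. 2.1 (all names illustrative).
dispatching = WorkProcess(name="Airline Dispatching",
                          wobj="Execute the daily flight program")
flight = WorkProcess(name="Airline Flight",
                     wobj="WObj: Flight (assigned by dispatching)",
                     work_object="WO: PAX (passengers to be transported)",
                     inf=["supplementary information from dispatching"],
                     wpout=["flight and aircraft status (fed back upward)"])
atc = WorkProcess(name="Air Traffic Control",
                  wobj="Maintain safe and orderly traffic flow",
                  wpout=["radar surveillance", "clearances"])

dispatching.delegate(flight)  # hierarchical: the superior provides the WObj (Fig. 1b)
flight.link(atc)              # networked: requests and clearances are exchanged (Fig. 1c)
```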

Each WProc has a certain life cycle and can be broken down into a potentially large number of sequential and/or concurrent sub-processes. During its life cycle, a WProc may be exposed to many use cases. For system design, it is important to collect and describe these use cases and sub-processes, which finally result in tasks to be performed either by a human, by a cognitive agent, or by conventional automation. Without going deeply into the well-established methods of systems engineering and cognitive task analysis (e.g., [21]), Table 2 provides some guidelines from our experience in the rigorous top-down design of HAT systems; a sketch of the resulting task decomposition follows below.

Table 2. Guidelines for Work Process Use Case Analysis
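As a minimal sketch of the decomposition just described, use cases could be collected together with the tasks they imply and the kind of performer each task is assigned to. The names and the example decomposition below are our own, hypothetical choices.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List

class Performer(Enum):
    HUMAN = auto()
    COGNITIVE_AGENT = auto()
    CONVENTIONAL_AUTOMATION = auto()

@dataclass
class Task:
    description: str
    performer: Performer

@dataclass
class UseCase:
    """A use case the WProc is exposed to during its life cycle."""
    name: str
    tasks: List[Task] = field(default_factory=list)

# Illustrative decomposition for the airline-flight example above:
descent = UseCase("Descent into the terminal area", tasks=[
    Task("Request descent clearance from ATC", Performer.HUMAN),
    Task("Propose a fuel-efficient descent profile", Performer.COGNITIVE_AGENT),
    Task("Track the selected vertical profile", Performer.CONVENTIONAL_AUTOMATION),
])
```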

2.2 The Work System

Now we are ready to open the black box. From now on, we look at the physical system that runs the WProc described so far, i.e., the system we want to design. We call this a Work System (WSys), which is our first design pattern. It is important to note that the WSys inherits the complete definition of the corresponding WProc. Within the box, in principle, there are two essential roles to be taken to run the WProc: the Worker and the Tools. Consequently, the WSys is composed of two components, each taking one of these roles (see Fig. 3).

Fig. 3. Design pattern WSys as the physical instance of the corresponding WProc, comprising the roles of the Worker and the Tools in a Hierarchical Relationship (HiR, green arrow) (Color figure online)

The main characteristic of a Worker is to know, understand, and pursue the WObj on its own initiative. Without this initiative, the WProc would not be carried out. Therefore, by definition, a WSys cannot exist without a human Worker; otherwise, we would not speak of a WSys, but rather of a mere technical artifact, i.e., a Tool. Tools alone would neither make a WSys nor perform a WProc, due to the lack of purpose. The Worker is the only instance responsible for breaking down the WObj into relevant tasks. The Tools, on the other hand, receive tasks from the Worker and perform them only when told to do so. Hence, the Worker and the Tools are always in a Hierarchical Relationship (HiR, green arrow in Fig. 3), which may be characterized by more detailed design patterns.

We would like to mention that in an earlier article [22], we used the WSys to define “autonomy” as the authorization of the Worker to self-define the WObj. Only the human Worker shall exercise this authority, for ethical and other reasons.

Traditionally, only a human or a human team represents the Worker, while machinery and automation constitute the Tools. Thereby, a conventional human-machine system is created, involving manual control and, in the presence of automation, also human supervisory control [13]. However, the notion of the WSys provides additional information concerning the WObj, i.e., the purpose of work, and the system boundary, including the definition of the interfaces to the environment. Finally, the Tools shall never contain a full WSys, or humans. From our experience, nested WSys are not an option; as an alternative, we recommend modeling the structure as a hierarchy of individual WProcs. Table 3 provides some guidelines from our experience for an initial WSys design; the sketch below illustrates the structural constraints.

Table 3. Guidelines for initial Work System Design
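The structural rules of the WSys pattern (a human Worker is mandatory; the Tools contain neither humans nor nested WSys) lend themselves to a simple validity check. The sketch below is our illustration under these assumptions, not a normative implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import List

class ActorKind(Enum):
    HUMAN = auto()
    COGNITIVE_AGENT = auto()
    AUTOMATION = auto()   # conventional, non-cognitive tools

@dataclass
class Actor:
    name: str
    kind: ActorKind

@dataclass
class WorkSystem:
    """WSys: the physical instance of a WProc, split into Worker and Tools roles."""
    worker: List[Actor]
    tools: List[Actor]

    def validate(self) -> None:
        # By definition, a WSys cannot exist without a human Worker.
        if not any(a.kind is ActorKind.HUMAN for a in self.worker):
            raise ValueError("a WSys requires at least one human Worker")
        # The Tools shall never contain humans (and no nested WSys either).
        if any(a.kind is ActorKind.HUMAN for a in self.tools):
            raise ValueError("the Tools must not contain humans")

# Worker and Tools are always in a HiR; that relationship stays implicit here.
crew = WorkSystem(worker=[Actor("pilot", ActorKind.HUMAN)],
                  tools=[Actor("autopilot", ActorKind.AUTOMATION)])
crew.validate()  # passes
```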

3 Introduction of the Cognitive Agent into the Work System

With the advent of more advanced methods for providing higher cognitive capabilities in automated functions, the introduction of Cognitive Agents (CogA, the little ‘R2D2’ in Fig. 4) into the WSys becomes an option. In the past, the focus in this field was predominantly on the provision of suitable information processing methods and algorithms (e.g., artificial intelligence, computer vision, soft computing; cf. [23]). Two trends have been followed in the past two decades concerning the role such an agent could take in a system design: firstly, so-called autonomous systems, i.e., systems that aim at performing user-given tasks as independently from human intervention as possible; and secondly, decision support, assistant, or associate systems, which acknowledge that the work is predominantly performed by a human who is supported by a machine agent [24].

Fig. 4. (a) Design pattern WSys with CogA as Tool in HiR; (b) design pattern WSys with CogA as Worker in HeR (Color figure online)

We acknowledge these trends by introducing two new elementary design patterns. Figure 4(a) shows a design pattern in which a HiR (green arrow) exists between the Worker and a CogA that is part of the Tools; within the Tools, a further HiR (green arrow) exists between the CogA and the other automated Tools. Figure 4(b) shows a design pattern in which, in addition to the HiR between the human Worker and the Tools (cf. Fig. 3), a Heterarchical Relationship (HeR, blue connector) exists between the human Worker and a CogA that is, in this case, part of the Worker. Concerning this HeR, we introduce one restriction: the CogA shall not be given the authority to define or even question the WObj. The human Worker shall always have the final authority to decide.
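A minimal sketch of the two patterns, building on the WorkSystem classes above (the helper names are hypothetical): the only structural difference is whether the CogA is placed in the Tools or in the Worker.

```python
# Building on the WorkSystem sketch from Sect. 2.2 (function names are ours).
from typing import List

def coga_as_tool(human: Actor, coga: Actor, equipment: List[Actor]) -> WorkSystem:
    """Fig. 4(a): the CogA sits inside the Tools, in a HiR under the Worker
    and in a further HiR over the other automated Tools."""
    return WorkSystem(worker=[human], tools=[coga, *equipment])

def coga_as_worker(human: Actor, coga: Actor, equipment: List[Actor]) -> WorkSystem:
    """Fig. 4(b): the CogA is part of the Worker, in a HeR with the human.
    Restriction: the CogA must never define or question the WObj;
    the human Worker always has the final authority."""
    return WorkSystem(worker=[human, coga], tools=equipment)

fig4a = coga_as_tool(Actor("operator", ActorKind.HUMAN),
                     Actor("mission agent", ActorKind.COGNITIVE_AGENT),
                     [Actor("flight control", ActorKind.AUTOMATION)])
fig4a.validate()  # passes: human Worker present, no humans in the Tools
```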

Figure 5 shows some examples of existing setups constructed from the elements human Worker, Tools, CogA, HiR, and HeR. Figure 5(a) depicts a regular non-HAT system in which two human operators cooperate while using technical equipment (e.g., a two-pilot flight-deck crew operating an aircraft). Figure 5(b) might be useful either to reduce the crew size (e.g., single-pilot operations [25]) or to increase the span of control (e.g., greater “autonomy” of UVs [16]; single-agent operation of multiple UVs [26]). In this case, the CogA is delegated certain tasks that a human crewmember would otherwise execute. In both examples, the effect can mostly be attributed to a reduction of the taskload of the human Worker. Elements of adaptive functional allocation might also be involved [27]. Figure 5(c) extends (b) in that more than one agent is tasked by a human operator, each agent controlling its individual system (e.g., task-based guidance of multiple UVs [28]). Again, the challenge is to increase the span of control by spreading the taskload. Figure 5(d) goes even further down that road: here, the human user controls a cooperating team of multiple agents, each of which operates its own equipment (e.g., a pilot controlling a cooperating team of multiple UVs [29]; a multi-agent system controlling multiple UVs [30]).

Fig. 5. Examples of WSys setups constructed from the elements human Worker, Tools, CogA, HiR, and HeR

However, an increase in automation complexity and span of control, as exercised in (b)–(d), may also result in automation-induced shortcomings. These effects have been reported by many researchers (see [31] for a classical source). To counteract such problems, a number of scientists have suggested approaches as in Fig. 5(e). Here, the agent works in cooperation with the human operator as an element of the Worker (e.g., pilot assistance or associate systems [32]). Core elements are the agent’s initiative in achieving the WObj and the ability of the agent to decode the human’s mental states as a basis for cooperation. Means of assistance include attention allocation, mixed-initiative operation, and adaptive automation techniques [19]. Finally, Fig. 5(f) shows a setup in which a human controls a system via agent delegation while at the same time being assisted by an agent that is part of the Worker (e.g., assisted guidance of a single UV operator [33]; assisted multi-UV mission management [34]). In the two latter cases, (e) and (f), where a CogA is part of the Worker, the CogA inherits the required attributes of the Worker role as claimed in Sect. 2.2, i.e., to know, understand, and pursue the WObj on its own initiative [22, 24].
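Anticipating the tuple notation of Sect. 4, two of these setups can be written down as lists of {Actor-Relationship-Actor} tuples; the labels below are illustrative only, not part of the framework.

```python
# Two of the Fig. 5 setups written as {Actor-Relationship-Actor} tuples.
config_b = [                       # Fig. 5(b): delegation to a single CogA
    ("Human", "HiR", "CogA"),      # the human tasks the agent
    ("CogA", "HiR", "Tools"),      # the agent operates the equipment
]
config_f = [                            # Fig. 5(f): delegation plus assistance
    ("Human", "HeR", "CogA-assistant"), # assistant agent inside the Worker
    ("Human", "HiR", "CogA-delegate"),  # delegate agent inside the Tools
    ("CogA-delegate", "HiR", "Tools"),
]
```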

At this stage, it is obvious that many more configurations can be constructed, especially when we look at distributed multi-user, multi-agent systems with complex HiR/HeR structures.

4 Actor-Relationship-Actor Tuples

As became clear during the discussion of Fig. 5 in the previous section, a great many WSys configurations are possible just by combining one or a few of the symbols provided (i.e., human Worker, CogA, Tools, HiR, and HeR). Researchers and practitioners can thereby depict the individual solutions described in the literature and, hence, make them comparable. In this section, we look at the possible {Actor-Relationship-Actor} tuples that can occur in any possible WSys. Figure 6 gives an annotated overview of all possible tuples.

Fig. 6. (a) Hierarchical and (b) heterarchical {Actor-Relationship-Actor} tuples (shaded: human involved; *: equal configurations; cross: invalid option)

We do not further consider a hierarchy of an agent or a tool over a human, or of a tool over an agent. The same applies, for obvious reasons, to a heterarchy of a tool with either a human or an agent. Tuples that do not involve humans may not be directly interesting for HAT systems. However, they can certainly influence the behavior of the automation “under the hood” and are therefore worth looking at, at least from a pure engineering stance. Pure human-human relationships, whether hierarchical or heterarchical, may also not be directly relevant for HAT systems, except of course for WSys with more than one human; apart from that, they may serve as a valuable source of design metaphors. Finally, we do not allow a Human-Agent HeR in which the agent is part of the Tools, since by definition there is always a HiR between Worker and Tools.

Within the scope of the modeled WSys, the {Actor-Relationship-Actor} tuples describe the binary relationships between two entities. The possible combinations include tuples with well-established design patterns.
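The exclusion rules above can be stated as a small predicate over all combinations. The sketch below reflects our reading of Fig. 6; it does not capture role membership (e.g., whether an agent sits in the Worker or the Tools), so the final exclusion of the previous paragraph is noted only as a comment.

```python
from enum import Enum, auto
from itertools import product

class Entity(Enum):
    HUMAN = auto()
    AGENT = auto()
    TOOLS = auto()

class Relation(Enum):
    HIR = auto()  # Hierarchical Relationship
    HER = auto()  # Heterarchical Relationship

def is_considered(a: Entity, rel: Relation, b: Entity) -> bool:
    """Our reading of the exclusion rules in Sect. 4 (cf. Fig. 6)."""
    if rel is Relation.HIR:
        if b is Entity.HUMAN and a is not Entity.HUMAN:
            return False  # no agent or tool hierarchically above a human
        if a is Entity.TOOLS and b is Entity.AGENT:
            return False  # no tool hierarchically above an agent
        return True
    pair = {a, b}
    if Entity.TOOLS in pair and pair != {Entity.TOOLS}:
        return False  # no heterarchy of a tool with a human or an agent
    # Note: Human-HeR-Agent with the agent inside the Tools is also excluded,
    # but that depends on role membership, which this sketch does not model.
    return True

for a, rel, b in product(Entity, Relation, Entity):
    status = "considered" if is_considered(a, rel, b) else "excluded"
    print(f"{{{a.name}-{rel.name}-{b.name}}}: {status}")
```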

The tuple {Human-HiR-Tools} describes the basic setting of human supervisory control [13]. At the time Sheridan established his LoA scale, automation mostly consisted of rather clear-cut control automation following relatively simple rules or algorithms, and in most of these systems the human exclusively owned the higher cognitive capabilities. Later works [14] acknowledged that automation could also take over higher cognitive tasks. New LoA scales were developed (e.g., [14, 15]) which, to a certain degree, reflect the fact that automation became capable of assuming functions of information acquisition and analysis. These scales are applicable to the tuple {Human-HiR-Agent}. Since then, many works have been conducted that support the relevance of this tuple and provide valuable design patterns (e.g., [26, 28, 35]); van Breda and coauthors provide a good overview [36].
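As an illustration of how such a scale could annotate the {Human-HiR-Agent} tuple, the sketch below paraphrases a Sheridan-style LoA scale (cf. [13, 14]); the wording and the example function names are ours, not quotations from those sources.

```python
from enum import IntEnum

class LoA(IntEnum):
    """Paraphrased Sheridan-style Levels-of-Automation scale (cf. [13, 14])."""
    HUMAN_DOES_EVERYTHING = 1
    OFFERS_ALTERNATIVES = 2
    NARROWS_ALTERNATIVES = 3
    SUGGESTS_ONE_ACTION = 4
    EXECUTES_IF_APPROVED = 5      # management by consent
    EXECUTES_UNLESS_VETOED = 6    # management by exception
    EXECUTES_THEN_INFORMS = 7
    INFORMS_ONLY_IF_ASKED = 8
    INFORMS_IF_IT_DECIDES_TO = 9
    IGNORES_THE_HUMAN = 10

# A {Human-HiR-Agent} tuple could then carry a per-function LoA choice:
function_loa = {
    "route replanning": LoA.EXECUTES_IF_APPROVED,
    "collision avoidance": LoA.EXECUTES_UNLESS_VETOED,
}
```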

Finally, the {Human-HeR-Agent} tuple has become particularly interesting in situations where automation-induced human erroneous action should be prevented. We have already mentioned several references in this context [22, 24, 32–34]. Further international approaches to adaptive associate systems, representative of many others, should also be mentioned here [37, 38].

5 Conclusions

In this article, we have described a method for documenting human-autonomy teaming (HAT) design patterns. To this end, we followed a strict top-down procedure inspired by systems engineering and cognitive ergonomics. Starting from a general human-systems view, we arrived at the level of frequently recurring {Actor-Relationship-Actor} tuples, which may serve as containers for similar or competing design patterns to be found, described, and discussed. However, the benefit of this approach depends heavily on the efforts of researchers and practitioners to describe their solutions to HAT problems using the offered method. If they do, we gain a great opportunity for rational discussions on future highly automated systems.