1 Introduction

Manned-Unmanned Teaming (MUM-T) enables force multiplication by teaming a manned mobile component (e.g. a helicopter) with unmanned units (e.g. UAVs). The resulting formation has the potential to be more effective, as its members can operate spatially separated, and to be less vulnerable to casualties, as the unmanned members can execute dangerous tasks. In contrast to a fully autonomous group, a human pilot can control the unmanned members or intervene in their actions, which is especially useful in cases of automation failures or encounters with unprecedented situations. In addition, for armed autonomous systems the “Man in the Loop” principle [1] is much easier to implement, as decisions can be made on site with shorter reaction time and better tactical awareness. The greatest advantage is the possibility to achieve better tactical behavior of the team. Tactics and military stratagems are currently difficult to implement in automated systems. The human component is able to make such decisions on behalf of the unmanned members, increasing the team’s effectiveness even more.

A current research project at the Institute of Flight Systems aims to improve the effectiveness of MUM-T concepts. A wing consisting of a transport helicopter with a two-man crew and up to three UAVs is tasked with search and rescue or transport missions. The commander is in command of the helicopter as well as the UAVs. This results in a considerable increase in workload for both pilots, as the commander is now occupied with UAV mission management and has only limited capacity to support the pilot flying. Several concepts are being developed to assist the crew in reducing the workload and achieving the mission goals. The topic of this work is the command and control interface between the commander and the UAVs.

Section 2 reviews previous work in this field. In Sect. 3 the concept is presented. Section 4 describes the implementation of the system. In Sect. 5 the evaluation method is presented, followed by the results in Sect. 6. Section 7 concludes with the findings.

2 Previous Work

Many publications exist on different aspects of human-machine interaction. For this article, the workflow between user and machine is especially relevant. Sheridan [2] first coined the term supervisory control for an automated process that works on tasks assigned by a human operator. Such supervisory controlled systems can be converted into agent supervisory controlled systems by inserting an intelligent agent between the operator and the automated process [3]. This enables higher-level cognitive functions in the system, like planning and situation assessment, and leads to a reduction of workload for the operator. Uhrmann [4] created a system using the concept of task-based guidance, where a cognitive agent interprets tasks given by a pilot, breaks them down into achievable subtasks, and presents the pilot with the resulting plan for error checking and approval. The cognitive agent can then execute the plan independently.

Uhrmann showed that this reduction of workload can enable an inversion of the span of control of current reconnaissance UAVs: instead of multiple operators controlling one UAV, a single pilot commanded three UAVs from a two-seated helicopter cockpit while simultaneously flying a transport mission [5].

One of the success factors is the simple communication from the pilot to the automation to achieve complex mission tasks. A few button presses suffice to convey the pilot’s intent to the agent, as the agent is aware of the circumstances influencing the situation. This reduced communication in context-rich environments is similar to human interaction.

The work was done in a simplified environment: for example, sensor operation was reduced and the complexity of the tactical situation was restricted. The interactions between pilot and agent were also simplified. For example, the only status information available were the current tasks and a flag indicating whether the agent was planning. The system therefore lacked proper communication back to the pilot, as plans were either displayed in overwhelming detail or not detailed enough, which is inappropriate for most situations. In real environments, where the pilot is responsible for the actions of the unmanned aircraft, the right amount of information is necessary. Clauß [6] took the term etiquette, coined by Miller [7], and applied it to agent supervisory control for single-UAV systems in order to identify the relevant command interface and status information for such systems. Furthermore, unexpected situation changes, which create plan failures or require re-planning, are not communicated in a helpful or easy way [8]. This kind of specific feedback is especially necessary for multi-UAV operations, as the time to process the provided information and to formulate solutions is shared between all UAVs. Expanding upon this work is the content of this article.

3 Concept

The term responsive design is widely used for web interfaces, which scale their content and format to the available display space [9]. Applying this idea to human-automation interaction results in a user interface that scales the information presented to the user according to the user’s needs and wants. Since the user interface represents a two-way communication, this scaling should include the information flow from the user to the system, which in a UAS application consists of commands, as well as the status information flow from the system to the user. Such an interface, with an underlying system able to understand this kind of input and provide output accordingly for each UAV, allows the pilot to focus on details where necessary while maintaining an overview of the situation. The key requirements of the system therefore are:

  1. Simple command interface for each UAV, conveying the pilot’s intent

  2. Detailed command interface for each UAV, allowing more fine-grained control when necessary

  3. Overview of the current status of all UAVs and the tactical situation

  4. Easily accessible detailed status information about each UAV

  5. Proactive display of information about UAV conflicts

  6. Detailed information about UAV conflicts if wanted by the pilot.

The idea is to combine several strategies to fulfill all requirements. Requirements (1) and (2) are met by providing a system with task-based guidance and scalable autonomy [10]. This allows using the same vocabulary for mission tasks and communicating the operator’s intent, thus forming a simple and usable command interface with the ability for more detailed control. Requirements (3) and (4) are met by a user interface that scales the amount of accessible information depending on the pilot’s needs, while at the same time displaying an overview of the tactical situation. Requirement (3) is supplemented by a fixed part of the user interface offering a status overview of all UAVs. Requirement (5) is met by directing the attention of the operator to emerging problems. This is done by a three-step warning system with the ability to access more detailed information at will, fulfilling requirement (6).

4 Implementation

4.1 Cognitive Agent

For a system capable of task-based guidance, the baseline is an intelligent software agent that can assess the current situation and develop plans in order to execute assigned tasks. The implementation uses the Drools Expert rule engine [11] for situation assessment and a SHOP-like [12] HTN planning algorithm [13] implemented inside Drools using the Drools Rule Language (DRL). When given a task, rules apply appropriate options from a collection of preprogrammed recipes. The recipes for subtasks, also called methods, are objects in the working memory of the rule engine which have the target task type, as well as the subtasks to add, as attributes. For example, the “Recon” task is split into the two subtasks “Calculate Recon Route” and “Select Recon Route” by the “Recon” method. As the route calculation can yield multiple results, the “Select Recon Route” task creates an alternative for each available route, resulting in several plan alternatives, as seen in Fig. 1. Alternatives are also objects in the working memory, with the target task type and the available alternatives as attributes. Each plan alternative is a copy of the original plan in which the specific task was replaced with an alternative task.

A task that cannot be broken down or replaced by alternatives is called a primitive task. The result of the previous steps is an ordered list of primitive tasks. Special rules exist for each primitive task type. When a primitive task is activated during planning, these rules simulate the effects of applying its operator by manipulating the working memory accordingly. For example, a “Fly Route” task has the simulated effect of moving the UAV along the specified route until the end of the route is reached. Later, when a valid plan is selected and executed in the real world, operators no longer simulate the effects but actually change the environment by initiating actions. In the case of the “Fly Route” task this would be a command to the flight management system to fly the specified route. The application of methods, alternatives and operators forms the basic HTN algorithm.

Fig. 1. Planning world tree with branches spawning from alternatives (red) (Color figure online)
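The method mechanism can be illustrated with a minimal Java sketch. The types and names below (Task, Method, Demo) are assumptions for illustration and not the project’s actual DRL working-memory objects; the sketch only mirrors the described recipe application that replaces a task with its subtasks.

```java
import java.util.List;

// Illustrative plan-tree node; primitive tasks have no subtasks.
record Task(String type, List<Task> subtasks) {
    boolean isPrimitive() { return subtasks().isEmpty(); }
}

// A "method" recipe: the target task type plus the subtask types that replace it.
record Method(String targetType, List<String> subtaskTypes) {

    boolean appliesTo(Task task) { return task.type().equals(targetType()); }

    // Apply the recipe: the task keeps its type but gains the recipe's subtasks.
    Task apply(Task task) {
        List<Task> subs = subtaskTypes().stream()
                .map(t -> new Task(t, List.of()))
                .toList();
        return new Task(task.type(), subs);
    }
}

class Demo {
    public static void main(String[] args) {
        // The "Recon" method from the text splits the task into two subtasks.
        Method reconMethod = new Method("Recon",
                List.of("Calculate Recon Route", "Select Recon Route"));
        System.out.println(reconMethod.apply(new Task("Recon", List.of())));
    }
}
```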

Continuing the planning process until all available alternatives are explored would result in low planning performance for complex problems. Instead, as soon as an executable task is inserted into the plan, it is simulated as described above, and the resulting changes are recorded in working memory in the planning world associated with this plan. Planning worlds leading to impossible outcomes, e.g. destruction of the UAV or physical impossibilities, are pruned and planning continues on other plans. To improve pruning, further rules evaluate the costs associated with the tasks, and an A* algorithm is employed to restrict planning to the plan with the currently best prospects. The result is a plan tree as depicted in Fig. 2.

Fig. 2. Resulting plan as a tree of tasks
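A best-first skeleton of this cost-guided search might look as follows in Java. The PlanWorld type and the expand function are assumptions for illustration, with the accumulated cost standing in for the A* estimate of a plan’s prospects.

```java
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;
import java.util.function.Function;

// One branch of the planning world tree with its accumulated cost.
record PlanWorld(List<String> openTasks, double cost, boolean impossible) {}

class BestFirstPlanner {
    // Always expand the plan world with the currently best prospects; prune impossible ones.
    static PlanWorld search(PlanWorld root, Function<PlanWorld, List<PlanWorld>> expand) {
        PriorityQueue<PlanWorld> frontier =
                new PriorityQueue<>(Comparator.comparingDouble(PlanWorld::cost));
        frontier.add(root);
        while (!frontier.isEmpty()) {
            PlanWorld best = frontier.poll();
            if (best.impossible()) continue;              // prune, e.g. destruction of the UAV
            if (best.openTasks().isEmpty()) return best;  // fully simulated, executable plan
            frontier.addAll(expand.apply(best));          // simulate next task, branch on alternatives
        }
        return null;                                      // no executable plan exists
    }
}
```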

As soon as a plan is selected, the same simulation rules are used to extrapolate a copy of the plan into the future, while monitoring rules check for conflicts on the simulated as well as the current plan. When a conflict is detected, the simulation data is used to determine its type, its cause and the time until it becomes critical. For example, a monitoring rule for proximity warnings to enemy units is activated when the simulated UAV is moved along a route by a “Fly Route” task and the distance to an enemy unit is lower than a threshold. Each activation of this rule increases the threat costs of the affected plan, which leads to other plans being preferred over this one during the planning process, and to a warning to the user when this plan is selected for execution. This feature enables the proactive warnings mentioned previously. Since the HTN planning algorithm uses the same rules to evaluate the different alternatives, the knowledge is reused, which results in a powerful planning and monitoring architecture. The warning system is currently able to identify the following types of plan conflicts or ROE violations (a minimal sketch of such a monitoring check follows the list):

Critical:

  • Ground Collision: Following the current flight plan leads to a dangerous proximity of the aircraft to the ground.

  • Threat: The aircraft is currently in the firing range of an enemy unit.

Less critical:

  • Trespass: Following the current flight plan causes the aircraft to enter the firing range of an enemy unit in the future.

  • AirspaceViolation: Following the current flight plan causes the aircraft to violate airspace borders in the future.

  • NoFlyViolation: Following the current flight plan causes the aircraft to violate no-fly-zones in the future.

  • GimbalLock: The camera gimbal is currently in use by the human operator and not available for a scheduled reconnaissance task.

  • DetectionLevelViolation: Following the current flight plan leads to an unwanted proximity of the aircraft to the reconnaissance target, which might cause detection by enemy forces.
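In the actual system these monitors are DRL rules firing on the simulated plan; the plain-Java sketch below only illustrates the kind of check a trespass monitor performs. All type and method names here are assumptions for illustration, not the project’s API.

```java
import java.util.Optional;

// Illustrative simulated UAV state produced by extrapolating the plan.
record SimState(double x, double y, long simTimeMillis) {}

// Illustrative enemy unit with a firing range, taken from the tactical situation.
record EnemyUnit(String id, double x, double y, double firingRange) {}

// A detected conflict: its cause and the time until it becomes critical.
record Trespass(EnemyUnit cause, long timeUntilCriticalMillis) {}

class TrespassMonitor {
    // Fires when the simulated UAV position lies within an enemy's firing range.
    static Optional<Trespass> check(SimState sim, EnemyUnit enemy, long nowMillis) {
        double distance = Math.hypot(sim.x() - enemy.x(), sim.y() - enemy.y());
        if (distance < enemy.firingRange()) {
            // The simulation timestamp tells how far in the future the conflict lies.
            return Optional.of(new Trespass(enemy, sim.simTimeMillis() - nowMillis));
        }
        return Optional.empty();
    }
}
```

Each such hit would additionally raise the threat cost of the affected plan, steering the search towards safer alternatives, as described above.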

4.2 User Interface

Overview.

The main component of the user interface is a moving map with integrated sensor display. The command and control interface for the agent is implemented directly on the map display (see Fig. 3).

Fig. 3. Moving map interface with UAV (1), dock (2), tactical symbols (3) and sensor view (4)

In addition to map usability functions like a north arrow, an adjustable scale and simple drawing functions, the user interface for the UAV consists of the following items.

Aircraft symbols (1) display position, altitude and speed information along with a point on the map indicating the current camera viewing position on the ground.

The UAV dock (2) allows for fast selection of and an overview of all available UAVs. It is also used to start, pause or abort the execution of a mission plan.

Tactical symbols on the map (3) display the current situation using NATO Mil-Std-2525C symbology [14] (e.g. SAM) alongside new symbols for routes and areas.

Two sensor views (4) allow direct access to the onboard sensors, including low-level functions such as locking onto ground positions or adjusting the zoom.

Intent Communication

from the operator to the agent is mainly done by assigning tasks through the map interface. To issue a command, the operator selects a point or an object on the map and chooses an entry from the context menu that appears (see Fig. 4). This way the target and the type of the task are selected. Possible commands are “Transit”, “Recon”, “Scout” and variations. By issuing a task, the operator communicates his or her intent to the agent. For example, the intent of a “Transit” task is to move the UAV to a certain location, while a “Recon” task is used to gather information about a certain object.

Fig. 4. Context menu on a route, used for issuing tasks.

Constraints for tasks (e.g. aggressiveness, altitude, etc.) can be applied by clicking on a task arrow and choosing the appropriate entry in the context menu. This allows more detailed control over the execution of the task.
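Conceptually, each issued task can be thought of as a command object carrying the pilot’s intent plus optional constraints. The following Java sketch is an illustration under assumed names and an assumed constraint vocabulary, not the actual interface of the system.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative task types offered in the context menu.
enum TaskType { TRANSIT, RECON, SCOUT }

// Illustrative map selection: a position and, optionally, a selected object.
record MapTarget(double lat, double lon, String objectId) {}

// A task command: type, target, and optional constraints such as altitude.
record TaskCommand(TaskType type, MapTarget target, Map<String, String> constraints) {

    // Constraints are added later by clicking the task arrow on the map.
    TaskCommand withConstraint(String key, String value) {
        Map<String, String> merged = new HashMap<>(constraints());
        merged.put(key, value);
        return new TaskCommand(type(), target(), Map.copyOf(merged));
    }
}

class Example {
    public static void main(String[] args) {
        // Hypothetical target and constraint values, purely for illustration.
        MapTarget bridge = new MapTarget(48.08, 11.27, "bridge-01");
        TaskCommand recon = new TaskCommand(TaskType.RECON, bridge, Map.of())
                .withConstraint("Altitude", "low");
        System.out.println(recon);
    }
}
```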

Status Feedback

is mostly embedded into the map. Issued tasks are displayed as arrows pointing towards the target, with the task type as a symbol next to them (see Fig. 5 top right). The current agenda is therefore easily accessible and embedded into the map context.

Fig. 5. Plan status: task arrows are broken down into subtasks if the zoom level is increased.

As soon as a plan is calculated, zooming in on the tasks reveals more information about them and how they are executed, as task arrows exceeding a certain length on the map are replaced with their subtasks (see Fig. 5 bottom left and right).
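This expansion can be sketched as a simple recursion over the plan tree. The following Java sketch is illustrative only: the types, the pixel threshold and all names are assumptions, not the actual rendering code.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative plan-tree node: a task arrow with a length on the ground.
record MapTask(String type, double groundLengthMeters, List<MapTask> subtasks) {
    boolean isPrimitive() { return subtasks().isEmpty(); }
}

class TaskArrowRenderer {
    static final double MAX_ARROW_PX = 200.0;  // assumed threshold, not from the paper

    // Expand a task into its subtasks once its arrow exceeds the threshold on screen.
    static List<MapTask> visibleTasks(MapTask task, double metersPerPixel) {
        double lengthPx = task.groundLengthMeters() / metersPerPixel;
        if (task.isPrimitive() || lengthPx <= MAX_ARROW_PX) {
            return List.of(task);  // short enough: draw one arrow with its type symbol
        }
        List<MapTask> visible = new ArrayList<>();
        for (MapTask sub : task.subtasks()) {
            visible.addAll(visibleTasks(sub, metersPerPixel));  // recurse into the plan tree
        }
        return visible;
    }
}
```

Zooming in decreases metersPerPixel, so arrows grow on screen and are successively replaced by their subtasks, down to the actual flight route.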

The level of detail displayed for the plan is linked to the zoom level of the map. At first only the top-level tasks are displayed; with higher zoom levels, more detailed information is revealed. This function makes use of the tree structure of plans, as tasks are broken down into subtasks as soon as their arrows exceed a certain length. The last possible step is displaying the actual flight route. This way the operator is able to adjust the amount of information: according to the situation, a status overview, a detailed status report or multiple degrees in between can be accessed without cluttering the screen.

The current number of tasks in the agenda is displayed next to the UAV symbol in the dock, which is shown in Fig. 6. The background of the task count label indicates whether the UAV is executing tasks (flashing green), on hold (white) or offline (red). A running planning process is signaled by a spinning wheel at the same position. A circular bar around the UAV symbol indicates the progress in executing the current plan.

Fig. 6. UAV dock. Left UAV has two tasks and warnings, center UAV is planning, and right UAV is offline. (Color figure online)

A green circle indicates the currently selected UAV. The three control buttons are colored the same as the currently selected UAV and therefore signal their command target. The top button switches its title depending on the UAV status: it displays “Hold” during plan execution and “Execute” (currently not visible) when holding. The “Clear” button clears all tasks, while the “Lock” button centers the view on the UAV and causes the view to follow it.

Warnings

are displayed on the map as well. They contain the source and location of the conflict and the time until the conflict becomes critical, and offer an automatic solution if one is available. Figure 7 depicts a warning situation. Similar to the level of detail for the current plan, the warnings are at first only displayed as icons in the UAV dock as well as next to the UAV (1). When selected, they reveal the type and the time until critical, as well as a solution button if available (2). When the “Show” button is pressed, a description dialog appears (3) and the causes are centered and highlighted on the map (4). This approach results in a freely selectable level of detail for status and warning information and allows the operator to concentrate on the current tasks without cluttering the user interface.

Fig. 7. Three levels of warning information: symbols in dock and next to UAV (1), detailed description (2) and info dialog (3) with flashing object indicator (4).

In this example the UAV flight plan leads through the range of an enemy SAM and the camera gimbal is locked by the human operator, thus not available for an automatic recon task. Pressing the “Replan” button would result in a flight path around the SAM, but could not release the gimbal, as only the operator can authorize this.
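This stepwise disclosure can be modeled as three presentation levels per warning. The Java sketch below is a minimal illustration under assumed names, not the system’s actual code.

```java
// Illustrative three-step disclosure of a warning, mirroring Fig. 7.
enum WarningDetail { ICON, SUMMARY, DIALOG }

// A detected conflict as presented to the operator.
record Warning(String type, String cause, long timeUntilCriticalMillis,
               boolean autoSolutionAvailable) {}

class WarningPresenter {
    // Each interaction with the warning reveals the next level of detail.
    static WarningDetail next(WarningDetail current) {
        return switch (current) {
            case ICON -> WarningDetail.SUMMARY;    // select the icon: type, time, solution button
            case SUMMARY -> WarningDetail.DIALOG;  // press "Show": dialog, cause highlighted on map
            case DIALOG -> WarningDetail.DIALOG;   // already fully expanded
        };
    }
}
```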

5 Evaluation

The goal of the evaluation was to get a first impression of whether the concept of responsive human-automation interaction is feasible within the context of UAV guidance and accepted by military users. The evaluation setup consisted of a simulated reconnaissance mission with a single UAV controlled from a ground control station (GCS). The institute’s mobile GCS was equipped with the previously described user interface concept. A reduced subset of the MUM-T mission types, consisting of area, route and point reconnaissance tasks, was used. Each reconnaissance mission consisted of a target, reporting points and a bounding box created by no-fly zones. Additional obstacles, like enemy surface-to-air missiles (SAM), increased mission complexity. Five officers of the German Bundeswehr were tasked to execute simple variations of each mission type. After an introduction and an extended training session of around 45 min, the subjects completed three missions, which took around 10 min including the briefing. During the experiments, planning time, execution time, reconnaissance accomplishments and errors (e.g. airspace violations, threats to the UAV and low distances to recon targets) were measured. Questionnaires after each mission gathered the subjective ratings of the system. In addition, interviews with the subjects provided information about the reasons for their ratings.

6 Results

All subjects completed the missions successfully in the required time. The total mission time was on average 71 s with a standard deviation of 25 s (min 26 s, max 114 s). Planning time was on average 17 s with a standard deviation of 7 s (min 7 s, max 27 s). Execution time was on average 54 s with a standard deviation of 24 s (min 19 s, max 96 s). All subjects reached 100% reconnaissance performance and made on average only 0.74 mistakes per mission, which consisted mainly of violations of no-fly zones and detection levels. Figures 8 and 9 depict the combined questionnaire results.

Fig. 8. Combined questionnaire results, part 1 of 2

Fig. 9. Combined questionnaire results, part 2 of 2

The ratings for the adjustable status feedback were very positive. Even during the training the subjects used the function very often. The warning system was assessed as relatively complex, but helpful nonetheless, although its usage varied between subjects. Overall, the concept was fully accepted by the military users.

7 Conclusion

This article introduced a concept for responsive human-automation interaction by combining task-based guidance with scalable autonomy for the communication to the machine, and an adjustable status feedback interface, including warnings, for the communication to the pilot. The necessary basis for the applicability of this concept is an intelligent agent software capable of planning and situation assessment. Due to the HTN planning capability, plans are calculated as task trees. This offers a simple way to prevent clutter as well as opacity on the interface by linking the amount of information displayed to the length of a task arrow on the map. More information can therefore be obtained by zooming in on the task. Warnings assist in situations where planning conflicts or violations are detected by the system. They are presented in detail if accessed by the user. The system was evaluated with military personnel using questionnaires. Overall, the operators rated the user interface concept very positively. The next intended steps are to evaluate the concept for multi-UAV missions inside a helicopter simulator and to perform actual flight tests with a ground control station commanding a UAV with the presented concept, while measuring the effects on workload.