1 Introduction

A man-machine system, such as a human pilot and an aircraft, can be considered a closed-loop control system. In accordance with the approach popularized by McRuer and colleagues [1], such a system can be generalized as a control schema, as illustrated in Fig. 1.

In this schema, the aircraft controller is approximated by a function, \(Y_c\), that describes how the control dynamics of the aircraft translate the command inputs, \(u(t)\), of the human pilot into a joint system state, \(y(t)\). External disturbances, such as modeled wind turbulence, can be input into the controller to perturb the overall output. The changing objective of the system (e.g., the flight trajectory) is represented by a target function, \(f_t(t)\). Deviations between this target function and the current system state represent the error, \(e(t)\), that the human pilot has to resolve by submitting new command inputs to the aircraft controller. This control behavior of the human pilot can be approximated by a linear function, \(Y_p\), which is typically derived by fitting data obtained from human participants. During behavioral experimentation, parametric properties of the target function, \(f_t(t)\), and characteristics of the machine controller, \(Y_c\), can be systematically manipulated to determine this model of a generic pilot's response characteristics.
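For concreteness, the loop in Fig. 1 can be summarized by three time-domain relations (a minimal sketch using the notation above; the disturbance term \(d(t)\) is introduced here for illustration and is not explicitly labeled in the schema):

\[
\begin{aligned}
e(t) &= f_t(t) - y(t),\\
u(t) &= (Y_p * e)(t),\\
y(t) &= (Y_c * u)(t) + d(t),
\end{aligned}
\]

where \(*\) denotes convolution with the impulse responses of the linear pilot and controller models.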

The appeal of this approach in describing the relationship of a pilot and an aircraft is apparent. It allows the control behavior of a pilot to be modeled as a simple linear mathematical function, which consists, for example, of the operator's input gain and the time delay of that input. An overview of such pilot models is provided in [2]. More importantly, such an approach allows the human operator to be simulated in closed-loop control systems, which can include engineered machine controllers and modeled disturbances. In other words, possible interactions between the pilot and the vehicle can be simulated and evaluated for different aviation scenarios. Situations that merit further investigation can thus be identified for actual behavioral experimentation. Moreover, this approach unifies the otherwise separate efforts of human behavioral research and control engineering in an integrated framework. It also presents the opportunity to systematically evaluate how a human operator might respond to a novel aircraft controller, such as a personal aerial vehicle for daily commuting [3].
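A minimal example of such a pilot model, expressed in the Laplace domain, is a pure gain combined with a reaction time delay (the symbols \(K_p\) and \(\tau\) follow common usage in the pilot-modeling literature rather than any specific model from [1, 2]):

\[
Y_p(s) = K_p \, e^{-\tau s},
\]

where \(K_p\) is the operator's input gain and \(\tau\) the effective time delay. McRuer's crossover model [1] further predicts that, near the crossover frequency \(\omega_c\), the combined open-loop dynamics approximate \(Y_p(s)\,Y_c(s) \approx (\omega_c / s)\, e^{-\tau s}\); that is, the pilot adapts \(Y_p\) to compensate for the vehicle dynamics \(Y_c\).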

In its most basic form, this schema makes several implicit assumptions. First, it assumes that error feedback, \(e(t)\), is perfectly communicated to the human operator. However, the mode of communication (that is, how this error feedback is presented to the pilot) can result in a non-veridical perception of error feedback. This will be discussed in Sect. 2. Second, the schema only accounts for the tracking of a single flight variable. However, controlling an aircraft is a complex task that involves tracking multiple flight variables simultaneously, which are themselves coupled to one another. Thus, pilots have to shift their gaze across different instruments periodically so as to update themselves on the overall state of the aircraft. This will be addressed in Sect. 3. Third, the human pilot is typically modeled as a consistent, stationary unit. However, the operational state of the human pilot can be expected to be influenced by situational factors such as anxiety, perceived workload, attentional fatigue, and overload. Some of these factors might themselves be influenced by the presentation mode of error feedback or by the need to manage the tracking of multiple flight objectives. This is considered in several studies that are presented across Sects. 2 and 3.

Fig. 1. Basic control schema of a human-machine system.

2 Influences of Error Visualization

Accurate visual communication of error feedback to the human operator is necessary to support stable closed-loop control performance. This section discusses three aspects of error visualization, namely the issue of instrumentation versus outside-world feedback, the influence of feedback latencies, and how the available field-of-view could have a long-term impact on flight control behavior.

A basic control schema does not specify the communication channel through which error is displayed to the pilot. Given ideal meteorological conditions, a pilot is permitted by visual flight rules to navigate and maintain aircraft stability by monitoring terrain features and the horizon. When outside-world visibility is compromised, a pilot is compelled by instrument flight rules to refer to instruments, such as the heading indicator and the attitude indicator, which serve as proxies for supporting navigation and orientation control.

On the one hand, outside-world feedback contains rich visual cues, such as optic flow, which serves as a visceral cue for both self-heading and self-orientation. Increasing terrain realism by including buildings and hills in a fixed-wing approach display has been shown to improve the accuracy of horizontal, vertical, and altitude perception [4]. More specifically, increasing the density of objects (i.e., the number of trees) in the outside-world visualization of a flight simulator has been shown to support better estimates of altitude, which resulted in less control variability in a low-altitude flight task [5]. On the other hand, instruments that track specific flight variables (e.g., a heading indicator) can be better than outside-world cues (i.e., optic flow) in supporting control behavior, especially under unpredictable, fast-varying conditions [6]. In a recent study, we directly compared participants who relied on either outside-world cues (i.e., terrain features and a visible horizon) or their instrument equivalent (i.e., an attitude indicator) to correct for instabilities in the roll orientation of their aircraft [7]. Although self-reports of workload did not vary significantly between the two types of error visualization, stability control was less variable when it was supported by the instrument visualization. To understand this difference in control performance between the two visualization conditions, control output was separated into two components: one that reflected the perceptual bias of the target function (i.e., a constant signed error off the zero-point) and another that reflected the variance around this perceived (and possibly biased) target function. This separation revealed that participants exhibited substantial perceptual bias when they relied on the realistic outside-world visualization. In other words, relying on visual cues from an outside-world environment can cause pilots to systematically mis-estimate the desired target function. In agreement with these findings, highly experienced rotorcraft pilots have been observed to rely less on looking at the "outside world" and more on their instruments, as compared to their less experienced counterparts [8].
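This decomposition is straightforward to compute. The sketch below illustrates it on synthetic data; the function name and toy numbers are mine for illustration and are not taken from [7]:

```python
import numpy as np

def decompose_error(y, f_t):
    """Split tracking error into a constant perceptual bias and the
    variability around that (possibly biased) perceived target."""
    e = y - f_t                  # signed error relative to the true target
    bias = e.mean()              # constant signed offset: perceptual bias
    variability = e.std(ddof=1)  # spread around the biased set point
    return bias, variability

# Toy example: a pilot holding a steady 2-degree roll offset with jitter
rng = np.random.default_rng(0)
y = 2.0 + 0.5 * rng.standard_normal(1000)   # recorded roll angle
bias, variability = decompose_error(y, np.zeros(1000))  # target: wings level
# bias ~ 2.0 (systematic mis-estimate), variability ~ 0.5 (control noise)
```

Under this reading, the outside-world condition would manifest as a larger bias term, while the variance term captures the residual instability of control.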

Another important aspect of error visualization is the latency in its presentation. Latencies can be introduced into a control system by a lag in the vehicle dynamics or by a transport delay between the system state, \(y(t)\), and the visualization of the error feedback, \(e(t)\). The latter can be especially perceptible in remote visualization systems, such as Helmet Integrated Display Sight Systems (HIDSS), that are intended to compensate for reduced visual flight conditions by presenting error visualizations computed from on-board sensors (e.g., forward-looking infrared) [9]. Pronounced delays can impair control performance and reduce the perceived handling qualities of the aircraft [10, 11]. Furthermore, we found that pilots often increase their stick input activity, \(u(t)\), in the presence of such delays in order to compensate for unintended overshooting. This can result in pilot-induced oscillations, which are generated by stick inputs at higher frequencies than the target function of the task itself requires [12, 13]. With time, these pilot-induced oscillations can grow and destabilize the entire control system. In addition, visualization delays have an impact on perceived as well as physiological workload. Increased visualization delays have been associated with larger self-reported workload [14] and increased electrodermal activity [13], suggesting that the human operator suffers considerable stress under delay conditions.
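One way to make such excess control activity visible is to compare the power spectrum of the stick input against that of the target function. The following sketch uses synthetic signals; the sampling rate, signal frequencies, and 0.5 Hz cutoff are illustrative assumptions, not values from [12, 13]:

```python
import numpy as np
from scipy.signal import welch

fs = 100.0                                   # sampling rate in Hz (assumed)
t = np.arange(0.0, 60.0, 1.0 / fs)
target = np.sin(2 * np.pi * 0.2 * t)         # slow target function (0.2 Hz)
u = np.sin(2 * np.pi * 0.2 * t) \
    + 0.8 * np.sin(2 * np.pi * 1.5 * t)      # stick input with extra 1.5 Hz power

f, p_u = welch(u, fs=fs, nperseg=1024)       # power spectrum of stick input
_, p_target = welch(target, fs=fs, nperseg=1024)

# Fraction of stick-input power above the target's bandwidth: a large value
# indicates control activity faster than the task itself requires, which is
# characteristic of pilot-induced oscillations.
excess = p_u[f > 0.5].sum() / p_u.sum()
```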

A final aspect of error visualization that merits discussion is the size of the field-of-view. Restricting the field-of-view of the outside world can curtail the availability of visual cues. However, its detrimental influence on control performance is surprisingly limited. Although improvements in roll stabilization can be achieved by supplementing a central display with peripheral displays [15], any improvements that can be gained from increasing the field-of-view of the central display to cover peripheral vision tend to be exhausted after \(40^\circ \) [16]. From this, it might appear that an arguably small field-of-view contains sufficient information for the purposes of a roll disturbance task. Nonetheless, it remains worthwhile to question how a small field-of-view could restrict other aspects of flight control besides limiting the availability of visual information. For example, a restrictive field-of-view could impair flight control by inducing non-optimal looking behavior. A basic control schema assumes that error feedback is presented to the pilot just in time for its immediate resolution. However, a pilot can move his eyes to look ahead 'in time' and to predict the consequence of the current system state (e.g., course deviation) on achieving the final objective (e.g., a landing target). Therefore, a restrictive field-of-view of the outside-world environment in a flight training simulator could inhibit a trainee pilot from acquiring look-ahead behavior that is presumably advantageous in the real world. Preliminary findings in my lab support this conjecture. We have found that restricting the horizontal (but not the vertical) field-of-view during flight training can have a long-term influence on acquired eye-movement behavior. When tested in a large \(230^\circ \) field-of-view environment, participants who learned to perform a side-step maneuver in a \(60^\circ \) horizontal field-of-view condition exhibited a greater tendency to restrict their eye movements to near objects than those who were trained in a \(180^\circ \) field-of-view environment.

3 Information-Seeking Behavior

Gazetracking technology allows us to monitor how a pilot seeks out task-relevant information during flight control. This section deals with how this unobtrusive method of pilot observation can inform us about the relevance of instrument scanning to control performance and the pilot's operational state, as well as about how attention might shift between different flight variables during a more complex flight maneuver.

A cockpit environment presents a diverse array of instruments that have to be attended to. Each instrument tends to be looked at as frequently as it delivers relevant information [17]. Thus, tracking the gaze of the human operator during task performance enables us to make inferences about the perceived relevance of the information that is communicated by each instrument.

Controlling a real aircraft requires a pilot to track more than one flight variable. Fixating on a single flight variable will cause the pilot to lose control over other variables, since flight variables tend to be dynamically coupled to one another. For example, increasing the airspeed of a rotorcraft (e.g., a Bo-105) by pitching it downwards brings about the unintended consequence of losing altitude. Therefore, a pilot should adopt a regular and efficient instrument scanning strategy, also referred to as "cross-checking", in order to be proficient. Only by looking periodically between instruments can a pilot ensure that all flight variables continue to be maintained at their desired values, regardless of adjustments to any single variable. In fact, instrument scanning techniques form an integral part of a pilot's training. Different flight maneuvers tend to be associated with recommended instrument scanning techniques, so as to ensure that flight variables relevant to the maneuver are not neglected (for examples, see [18], pp. 6-10–6-12). In some cases, flight instructors rely on playbacks of eye movements to identify inappropriate scanning strategies in trainee pilots [19]. Several studies have further shown that scanning behavior differs across levels of flight experience in tasks ranging from approach and landing [20] to combat engagement [21].

The regularity of scanning instruments for task-relevant error feedback has an impact on control performance. We have shown that scanning regularity distinguishes between good and bad performance, even across naïve participants with no formal training in instrument scanning techniques [22]. In this study, participants were first introduced to the control dynamics of a rotorcraft, namely a Bo-105, without explicit instructions on how they were expected to look at the instruments. They were trained to perform a flight mission that consisted of several consecutive maneuvers, such as straight-and-level flight, altitude adjustments, and airspeed adjustments. We recorded flight control performance and instrument scanning behavior. Flight control performance was measured in terms of the overall stability of airspeed and altitude maintenance, and instrument scanning was characterized by a probability matrix that captured the statistical dependencies of gaze transitions between instruments. Control performance and instrument scanning behavior were found to be closely related. More specifically, participants who were able to maintain stable airspeed and altitude also demonstrated more regular and self-consistent patterns of gaze transitions between instruments. In contrast, unstable flight control behavior tended to be accompanied by unpredictable gaze transitions.
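Such a matrix can be estimated directly from a recorded fixation sequence. A minimal sketch follows; the instrument labels and the scanpath are invented for illustration and are not data from [22]:

```python
import numpy as np

def transition_matrix(fixations, n_instruments):
    """Estimate first-order transition probabilities between instruments
    from a sequence of fixated instrument indices."""
    counts = np.zeros((n_instruments, n_instruments))
    for a, b in zip(fixations[:-1], fixations[1:]):
        if a != b:                    # count only transitions, not refixations
            counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

# Hypothetical labels: 0 = attitude, 1 = airspeed, 2 = altimeter
scanpath = [0, 1, 0, 2, 0, 1, 0, 2, 0, 1]
P = transition_matrix(scanpath, 3)    # P[i, j] = Pr(next = j | current = i)
```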

The statistical dependencies of gaze transitions between specific regions-of-interest (i.e., instruments) can be summarized as a visual scanning entropy score, whereby a low score indicates regular (i.e., predictable) scanning behavior [23]. A recent study demonstrated that pilot anxiety can influence visual scanning entropy [24]. Here, participants were first trained to perform an approach-and-landing maneuver in a fixed-wing aircraft without receiving any specific instrument scanning instructions. The training objective was to ensure that participants could perform this maneuver using only cockpit instruments, under low-visibility conditions. Upon attaining satisfactory performance, participants were subjected to a battery of manipulations designed to induce anxiety; these included monetary incentives for the best performer as well as informing participants that video recordings of subpar attempts could be used as public teaching material. Induced anxiety, as validated by increased heart rate and self-reported anxiety scores, increased visual scanning entropy across the trained flight maneuver. In other words, artificially induced anxiety increased the randomness of instrument scanning behavior.
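Exact formulations of scanning entropy differ across studies. The sketch below computes one common variant, the entropy rate of the transition matrix weighted by its stationary distribution, as an illustration rather than as the specific measure used in [23, 24]:

```python
import numpy as np

def scanning_entropy(P):
    """Entropy rate of a row-stochastic gaze-transition matrix, in bits
    per transition. Low values indicate regular (predictable) scanning."""
    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = pi / pi.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log2(P), 0.0)
    return float(-(pi[:, None] * P * logP).sum())

H = scanning_entropy(P)   # P from the previous sketch; higher H = more random
```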

A complex flight mission, such as an Instrument Landing System (ILS) approach in a fixed-wing aircraft, requires the pilot to juggle multiple task objectives, namely vertical, horizontal, and airspeed tracking. Given that instrument scanning patterns are most likely generated by the pilot's task objectives, an analysis of instrument scanning behavior could allow us to infer how pilots allocate attention between basic task objectives during a complex mission. In a notable example [25], a Hidden Markov Model was proposed that initially assumed that pilots switch between tracking the three above-listed flight variables during an ILS approach. The model further assumed that each of these task objectives would give rise to scanning behavior across its relevant set of instruments, as specified by the Federal Aviation Administration, USA (see Chapter 6, [18]). Fitting instrument scanning behavior to this model resulted in a description of how pilots switched their attention between the three task objectives across the constituent phases of an ILS approach, namely the straight-and-level, intercept, and final descent phases. The fitted model also indicated that the least experienced pilot devoted most of his efforts towards tracking the vertical and horizontal variables at the cost of monitoring airspeed. Unexpectedly, the three pre-specified task objectives were not sufficient to account for all of the instrument scanning behavior observed in the most experienced pilot, especially during the final descent phase. Instead, the most experienced pilot was found to be tracking an additional flight variable, namely the attitude of the aircraft. This resulted in an additional and unexpected instrument scanning behavior that had to be included in the model. Interestingly, post-experimental interviews revealed that this instrument scanning behavior is well known among experienced pilots. However, it is not typically endorsed during training for an approach-and-landing maneuver and was, thus, not included in the initial model. This example highlights how gazetracking can facilitate our understanding of how pilots switch their attention between multiple single-variable control schemata in order to execute a complex flight maneuver. In addition, it allows task objectives that were not originally modeled for a particular flight maneuver to be raised for consideration.
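The decoding step of such a model can be sketched with a generic Viterbi decoder, where hidden states are task objectives and observations are fixated instruments. This is my illustration of the general technique, not the specific model of [25]:

```python
import numpy as np

def viterbi(obs, log_A, log_B, log_pi):
    """Most likely sequence of hidden task objectives given a sequence of
    fixated instruments.

    obs    : observed instrument indices, length T
    log_A  : log transition probabilities between task objectives (S x S)
    log_B  : log probability of fixating each instrument per objective (S x I)
    log_pi : log initial probabilities of the task objectives (S,)
    """
    T, S = len(obs), log_A.shape[0]
    delta = np.zeros((T, S))            # best log-score per state per step
    back = np.zeros((T, S), dtype=int)  # backpointers for path recovery
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # (from-state, to-state)
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]   # inferred task objective at each fixation
```

In [25], the emission structure (which instruments belong to which objective) was fixed in advance from FAA scanning recommendations, and the unexplained fixations of the most experienced pilot motivated the addition of an attitude-tracking objective to the model.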

4 Discussion and Outlook

To summarize, this paper began by introducing a basic control schema, which can serve as a starting point for integrating a human operator in a closed-loop control system. Given good approximations of human and machine behavior, it can be used to simulate their joint performance. Nevertheless, a realistic aviation scenario raises certain challenges to some of the assumptions that are implicitly held by such a control schema.

In this paper, I have addressed several aspects of real flight behavior that run counter to the implicit assumption, held by a basic control schema, that error feedback is perfectly transmitted to a human pilot who is merely reactive. More specifically, I have discussed the various influences of error visualization on the human pilot, as well as how the human pilot acquires relevant information by shifting his attention across multiple sources of error feedback. Less-than-ideal transmission of error feedback can not only result in systematically biased and unstable performance; it can also have non-negligible consequences for the experienced workload and looking habits of the pilot. In addition, I have explained how regular instrument scanning behavior is a critical component of proficient flight control. Gazetracking provides us with the means to observe how the human pilot shifts his attention between tracking multiple, inter-dependent flight variables. With this in hand, we might be able to extend a basic control schema that allows for the tracking of only a single flight variable to one that can accommodate the full complexity of a flight maneuver.

Most of the research presented in this paper was conducted in controlled experimental settings, including certified flight simulators. For safety reasons, it might not be feasible to manipulate visual conditions in real flight scenarios. Nevertheless, gazetracking in real flight conditions could provide us with the opportunity to monitor the specific error feedback that is attended to by the pilot. Recent advances in technology are starting to allow for unencumbered gazetracking. Improved calibration algorithms [26] as well as lightweight hardware (e.g., mobile eyetracking glasses by SensoMotoric Instruments GmbH) contribute toward this possibility.

While it is tempting to treat gaze fixation as a proxy for visual attention, this need not be true. A gazetracker cannot discriminate between a purposeful fixation and a blank stare. In order to determine the extent to which fixated information is actually processed, it might be necessary to evaluate the cortical responses that follow a visual fixation. The lightweight nature and high temporal resolution of electroencephalography (EEG) make it a likely candidate for this purpose. However, several technical issues have to be resolved before simultaneous gazetracking and EEG can be achieved in field conditions [27]. Predominant among these is the fact that eye movements generate large electrical signals that can be confounded with the recording of cortically generated activity. In spite of this, signal processing algorithms that are generally effective in separating these two sources of recorded activity (e.g., independent component analysis) could be employed. Still, reliable methods continue to be computationally expensive and only allow for off-line analysis.

To conclude, the transmission and acquisition of information is a vital aspect of how a human pilot interacts with an aircraft. The advantages of relying on a control schema to describe this interaction can be further expanded upon by a careful consideration of the issues that can impinge on how error feedback is communicated to the human pilot.