1 Background

The origin of Human Factors as a research domain can be traced back to the early days of modern aviation and cockpit design. Aviation is a safety-critical activity in which a human pilot has to interact with a complex technical system, as well as with the surrounding world and its physical restrictions. Traditionally this has meant a focus on safety issues relating to human factors at a low level (i.e. interaction with controls and displays), but there is a need to see the pilot-aircraft system from a more holistic perspective that also includes organizational and systemic issues at a higher level (Harris 2011). Human factors approaches to design and interaction often describe the user and the system, or in this case the pilot and the aircraft, as separate entities exchanging input and output. Improving interface design based on this idea often results in a focus on improving the interaction between user and system, with improvement measured in terms of time, number or frequency of errors, or other performance results based on input or observable user behavior. This approach to design and evaluation has been challenged by more holistic approaches such as cognitive systems engineering (CSE), where the focus lies on the goals of joint systems (i.e. the joint pilot-aircraft system) rather than on the effectiveness of the interaction between different parts of the joint system (Hollnagel and Woods 1983; 2005).

The speed of systems development cycles has increased in recent years as a result of agile and iterative methods. Every new design solution has to be tested, and a decision must be made as to whether the design is good enough or needs to be re-designed and eventually re-evaluated. In the context of developing a modern fighter aircraft cockpit, these evaluations have to be made accurately and efficiently. One of the main reasons for the development of flight simulators was to create environments for training and exercise in order to reduce costs as well as risks. Simulators have, however, also been used extensively to evaluate cockpit design options during development, for instance in simulator-based design (Alm 2007). There is a long tradition of evaluating pilot behavior and performance using simulators, measuring everything from the number of interactions with controls to more abstract concepts such as situation awareness (Endsley 1988). Measures may include physiological measures such as heart rate, as well as questionnaires, observations, and subjective ratings of performance and experience.

However, outside the research community, in the industrial development of fighter aircraft, the time available for extensive simulator research and evaluation may be limited: the demand for short lead times from requirement to delivery through quick development cycles means that the time to actually analyze data obtained through simulators may be constrained. Although the data may be recorded, the main input to the design team often relies on more subjective measures such as questionnaires, interviews, and observations. The data recorded in the simulator, including technical system logs (control stick movements, button presses, menu interactions, and flight data such as speed, pitch, and elevation) and even psycho-physiological data, is more difficult to draw quick conclusions from in order to give meaningful input to the design process.

The main aim of this paper is to describe suggested methods (i.e. work in progress) for analyzing data acquired through simulations, in order to determine which measures are of most relevance to the overall design process of fighter aircraft displays. The approaches described aim at connecting the simulator data to the overall joint goals of the pilot-fighter-aircraft system, in accordance with the CSE approach to systems development.

2 An Approach for Fighter Cockpit Design Evaluation

The design process for the pilot interface and the layout of cockpit features often follows a standard systems development process, which includes a set of phases from requirements and requirements analysis, through design and prototyping, to evaluation, implementation, and verification of the end product (see Fig. 1).

Fig. 1. A schematic view of a generic design process based on standards such as the systems development and life cycle process described in ISO/IEC 15288, and the human-centered design process suggested by ISO 9241-210.

The evaluation phase in the design and prototyping stage often includes several simulator-based trials, in which test pilots fly the simulators in order to evaluate both functional and design-related aspects of the pilot interface. In agile fighter aircraft design processes, simulations are often used to evaluate aspects of HMI (human-machine interface) design in order to determine two things:

  • Is the function under evaluation good enough to be implemented?

  • How should the function be improved?

During these simulations it is possible to record a large amount of different data. The main input used for evaluation, however, is often the results of usability tests, questionnaires, and observations by the design team. The main aim of the research project behind this paper is to find methods to identify and extract relevant input from the recorded simulator data, and to relate this data to the overall goal of creating efficient pilot-aircraft interaction in order to reach the joint pilot-fighter-aircraft system’s mission objectives (i.e. allowing the pilot to perform the mission at hand).

3 Method and Results of Initial Simulation

An initial simulation with a test pilot was conducted, during which a set of data was recorded. The data was analyzed in order to formulate the set of analysis approaches described in this section. The main aim of the analysis was to identify data indicating, for instance, high workload, which would affect the pilot’s ability to conduct his or her mission.

In the days prior to the simulation, the test pilot received documentation regarding the functionalities and design features in focus for the evaluation. On the day of the simulation, the pilot was given a brief instruction before getting into the simulator and was fitted with eye-tracking equipment. Before the simulated mission was initiated, the pilot was allowed to freely interact with the cockpit user interface and explore the menu features in focus for the simulation. After this, the simulated mission commenced and the recording of data began. The simulation lasted approximately 30 min (ca. 10 min of training/exploring and ca. 20 min of recorded data), during which the design team observed and answered questions related to the tasks and interaction.

After the simulation, the design team conducted a debriefing in which the pilot was asked a set of questions about the features in focus and about general issues related to the experience. Before leaving, the pilot filled out a questionnaire focused on rating the readiness of the evaluated functions. The data logged during the simulation included the following metrics: time, airspeed, altitude, pitch, bank, and eye point-of-gaze data (see Fig. 2).

Fig. 2. To the left, the simulator setting; to the right, a test pilot in the simulator.
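To make the analysis examples in Sect. 3.1 concrete, the following is a minimal sketch of loading such a log for analysis, assuming a CSV export with one row per sample. The file name and column names (time_s, airspeed_kt, etc.) are illustrative assumptions, not the simulator’s actual schema.

```python
import pandas as pd

# One row per sample; gaze coordinates are assumed to be
# normalized screen positions (illustrative schema).
log = pd.read_csv(
    "simulation_log.csv",
    usecols=["time_s", "airspeed_kt", "altitude_ft",
             "pitch_deg", "bank_deg", "gaze_x", "gaze_y"],
)

# Resample onto a uniform 10 Hz timeline so that flight data, gaze,
# and interaction events can be aligned in the analyses that follow.
log["time_s"] = pd.to_timedelta(log["time_s"], unit="s")
log = log.set_index("time_s").resample("100ms").mean().interpolate()
```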

3.1 Suggested Approaches

Based on the initial simulation data, four analysis approaches are suggested: event detection through anomalies; frequency analysis; analysis of frequency and channels used; and actual vs. expected behavior. These approaches, described below, are the basis for further development of a general method for analyzing simulator data.

Event detection through anomalies in the data.

This approach studies data from event logs, flight data recordings, pilot interactions, etc. in order to extract relevant points of interest for further analysis. The main focus of investigation is the occurrence of anomalies, for instance in one variable or in a combination of several parameters (see Fig. 3a). In some cases the correlation of data is of interest; in others, the lack of correlation is the event of interest. The aim of this analysis is mainly to support the evaluator in finding relevant sections of the recorded simulation for further analysis.

Fig. 3. a. (left) Example data set from a simulator trial showing points of interest in recorded flight simulator data. b. (right) Example data set showing changes of active display area (selection of display areas) in correlation with recorded flight simulator data.
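As an illustration of this approach, the sketch below flags points of interest using a simple rolling z-score test over a few flight data channels. It assumes the resampled log and column names from the earlier loading sketch; the window size and threshold are placeholder choices, not the project’s actual method.

```python
import pandas as pd

def detect_anomalies(log: pd.DataFrame, channels, window="10s", z_thresh=3.0):
    """Flag timestamps where any channel deviates strongly from its
    recent rolling baseline (a simple rolling z-score test)."""
    flags = pd.DataFrame(index=log.index)
    for ch in channels:
        rolling = log[ch].rolling(window)
        z = (log[ch] - rolling.mean()) / rolling.std()
        flags[ch] = z.abs() > z_thresh
    # A single anomalous channel marks a point of interest here;
    # requiring two or more channels at once would instead target
    # events where several parameters deviate (or fail to correlate).
    return log.index[flags.any(axis=1)]

points_of_interest = detect_anomalies(
    log, channels=["airspeed_kt", "pitch_deg", "bank_deg"])
```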

Frequency analysis.

Analysis of the frequency of certain triggers, for instance display selections, button presses, and menu interactions, can give an indication of the pilot’s general workload. When looking at the data sets from the simulation recordings, points of interest can be identified. An example of such points of interest can be seen in Fig. 3b, which shows the pilot’s selection of display layout during the simulation.
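As an illustration, interaction events can be counted per fixed time window and windows with unusually high counts flagged. The event log format (columns time_s and event_type) is an assumption made for the sketch.

```python
import pandas as pd

# Assumed format: one row per discrete interaction, with columns
# time_s and event_type (button press, display selection, ...).
events = pd.read_csv("interaction_log.csv")
events["time_s"] = pd.to_timedelta(events["time_s"], unit="s")
events = events.set_index("time_s")

# Interactions per 30-second window as a coarse workload indicator;
# sustained peaks point to sections worth closer review.
rate = events["event_type"].resample("30s").count()
busy_windows = rate[rate > rate.mean() + 2 * rate.std()]
```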

Analysis of frequency and channels used.

This analysis focuses on describing the identified possible sections of higher workload in more detail. Data from the simulator can describe which modalities are used by the pilot and in what combinations: is the pilot working with the joystick while simultaneously monitoring the HUD, is the pilot using three displays while also using hand controls, etc. This gives an indication of how well the layout (and design) of instruments and panels in the cockpit supports the pilot’s workflow.
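A rough sketch of such a concurrency analysis is given below: each interaction event is assumed to carry a channel tag (e.g. stick, display, HUD gaze), events are bucketed into fixed windows, and windows where several channels are active at once are flagged. The tags and thresholds are illustrative assumptions.

```python
import pandas as pd

# Assumed columns: time_s plus a channel tag such as "stick",
# "display_1", or "hud_gaze" (illustrative labels).
events = pd.read_csv("interaction_log.csv")
events["time_s"] = pd.to_timedelta(events["time_s"], unit="s")
events["window"] = events["time_s"].dt.floor("10s")

# Events per channel per 10-second window.
per_window = (
    events.groupby(["window", "channel"])
    .size()
    .unstack("channel", fill_value=0)
)

# Number of distinct channels active in each window; several channels
# at once may mark sections where the layout forces divided attention.
concurrency = (per_window > 0).sum(axis=1)
high_load_windows = concurrency[concurrency >= 3]
```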

Actual vs expected behavior.

The final suggested analysis approach focuses on studying what the pilot is actually doing and comparing this to the expected behavior. This analysis can be based on the sequence of interactions: for instance, the number of interactions it takes for the pilot to change a waypoint in one display, compared to the number of interactions the designer expected. The analysis can also be based on time: the number of interactions may be irrelevant as long as the outcome is the same within the expected time interval. Time is also an essential safety aspect in the often time-critical domain of aviation, making it a relevant measure. From a usability point of view, a large number of interactions is most likely not optimal with regard to the pilot’s overall workload.
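The comparison itself can be kept very simple, as in the sketch below, which reports both the interaction-count overrun and the time overrun for a single task against the designer’s expectation. Task names and expected values are placeholders, not figures from the study.

```python
# Designer expectations per task; the numbers are placeholders.
EXPECTED = {"change_waypoint": {"interactions": 4, "time_s": 8.0}}

def compare_to_expected(task: str, event_times: list[float]) -> dict:
    """Compare one observed task execution (interaction timestamps in
    seconds) against the designer's expected count and duration."""
    actual_n = len(event_times)
    actual_t = event_times[-1] - event_times[0] if event_times else 0.0
    exp = EXPECTED[task]
    return {
        "task": task,
        "extra_interactions": actual_n - exp["interactions"],
        # Count overruns may be acceptable if the task still finishes
        # within the expected time interval, so both are reported.
        "time_overrun_s": actual_t - exp["time_s"],
    }

print(compare_to_expected("change_waypoint", [102.1, 104.0, 105.6, 107.9, 109.4]))
```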

4 Discussion and Future Work

The purpose of the work in progress described in this paper was to develop a set of approaches to using simulator data in the design process of fighter cockpit HMI. The underlying research question for the project is how to make the most use of recorded simulator data in order to answer the design team’s main questions: “Are the HMI features good enough for implementation?” and “What should be improved in the next design iteration?”.

Simulator data can be used to measure the performance of the pilot (or rather the efficiency and effectiveness of the HMI). The CSE approach, however, highlights that these often small differences in performance at a detailed level may not be the optimal way to ensure that the pilot-fighter-aircraft system as a whole reaches its goals (often the mission objectives). There is a need to view the detailed measured improvements in relation to how the interaction affects, and is affected by, surrounding interactions as well as the environment. The aim of the described approaches is therefore to determine which measures and data acquired during a simulation generate the most useful input to the design process of fighter aircraft displays. “Most useful” is here defined as measures that can be obtained quickly and easily (with as little intrusion as possible) while still being useful for the evaluation of display and control design in support of a CSE-based development process. The approaches described in this paper will be further explored and tested in forthcoming simulations.