1 Introduction

In the past few decades, the role and tasks of pilots have shifted from flying the aircraft by manual control to an increased role as managers of automated systems. Such changes inevitably bring new challenges in operations and have resulted in a number of unintended consequences. Examples include “automation surprises”, new attention and knowledge demands, unevenly distributed workload, degradation of operators’ manual skills and over-trust in automation (Sarter et al. 1997; Woods and Hollnagel 2006). The Performance-Based Operations Aviation Rulemaking Committee (PARC)/Civil Aviation Safety Team (CAST) working group (FAA 2013) states that a major factor in aircraft incidents and accidents is that pilots are failing to keep up with technological changes, resulting in surprise and confusion. Their report (FAA 2013) suggests that insufficient crew knowledge of the automated systems is a factor in more than a third of the accidents and serious incidents. Example situations include American Airlines flight 965, a controlled flight into terrain near Cali (ACRC 1995), an over-speed incident (AAIB 2004) and a runway excursion in snowy conditions (NTSB 2002). Other recent accidents connected to difficulties with automated systems include Turkish Airlines flight 1951, which crashed during the approach to Amsterdam (Dutch Safety Board 2010), and Air France flight 447, which crashed into the Atlantic (BEA 2012).

Due to complex automated system logic and non-transparent system feedback, pilots’ understanding of all ongoing processes in the cockpit is inherently incomplete (Klein et al. 2004). Incomplete understanding leads to the construction of simplified, and sometimes oversimplified, models of the world (Feltovich et al. 2004; Sarter et al. 1997). In most situations, not having full understanding is not a problem, as procedures and checklists guide pilots in managing system variations and failures. However, as events unfold, such as multiple disturbances and failures, the complexity of the systems may entail difficulties in identifying subtle cues and isolating failures that, over time, may progress into serious accidents (Woods and Sarter 2000). Also, studies show that in many accidents where the automation was a contributing factor, it actually operated as designed (Dekker and Woods 2002), as in the accident of flight TK1951 near Schiphol in 2009 (Dutch Safety Board 2010). These findings suggest that the understanding of pilot-automation coordination deserves further attention.

The advances of technology in modern cockpits have increased the reliability of the technology, which decreases the variations and disturbances pilots are exposed to in normal operations. Training programmes today largely focus on pre-defined skills in context-specific scenarios where pilots know what to expect. In a recent study, pilots were confronted with abnormal events in the context of a training scenario, and they quickly recognised and carried out the solutions (Casner et al. 2013). However, when presented with similar events in a different context, they failed to recognise and recall the appropriate response, implying that responses learned and practised during airline training may not generalise to more naturalistic settings (Casner et al. 2013). The ability to quickly diagnose a problem and the ability to carry out a solution are qualities that define an expert (Klein et al. 2004), and can be contrasted with the novice, who has to work through the problem in a more time-consuming manner to derive a solution. Becoming an expert, however, requires practice in varying contexts with different combinations of problems. Skills to deal with the unexpected and to “be prepared to be unprepared” in cockpit operations are today largely left to mature through experience (Dekker and Lundström 2006).

As implied by the aforementioned studies and investigations, critical issues to improve airline safety are crew-automation coordination and crew ability to make sense of and “frame” the situation following unexpected events. In this study, we examine the pilots’ (re)-framing process to identify challenges and enablers of the sensemaking process.

This study was carried out as part of the EU FP7 “Man4Gen” research project, which aimed to identify factors that affect the ability of flight crew and aircraft to handle unexpected events and maintain control of the aircraft. The research presented in this paper was carried out during the first year of the project to specify research questions for the project, frame core concepts and gather contextual details of surprise situations in cockpit operations for use in upcoming simulator experiments.

1.1 Sensemaking and the re-framing process

In psychology, there has been a tradition of studying how people perceive and understand the world in controlled laboratory settings (Hoffman and Mcneese 2009). However, the transferability of models of human cognition derived from laboratory experiments to complex and dynamic settings has been questioned, as have the assumptions researchers make about human abilities and limitations as a result of them (Hoffman and Woods 2000; Klein et al. 2003). Examples include studies that show how domain experts satisfice rather than optimise when weighing options (Klein and Calderwood 1991; Zsambok and Klein 1997), and how people not only seek to confirm their hypotheses (commonly referred to as confirmation bias) but also seek to disconfirm them (Pliske et al. 2004). Studies in sensemaking, macrocognition and cognitive systems engineering are further examples of research efforts to better understand human-technology work systems in complex and dynamic settings.

Sensemaking is a central function of macrocognition, that is, the study of how people make sense and act in real-world settings (Hoffman and Mcneese 2009; Klein et al. 2003; Malakis and Kontogiannis 2013). Underlying the development of studying macrocognitive functions in human-technical systems is the research field of cognitive systems engineering (CSE), which emerged in the early 1980s (Hollnagel and Woods 1983, 2005; Woods and Hollnagel 2006). CSE is devoted to understanding how complex human-technical systems maintain control in dynamic environments (Hollnagel and Woods 2005). It is a systemic approach for analysing, evaluating and designing systems, with the view that humans and machines cannot be studied as separate units in isolation from their context, but as parts of a joint system. For example, studies may target how operators detect and manage anomalies (Watts-Perotti and Woods 2007; Woods and Hollnagel 2006) and extract relevant information from multiple ongoing processes (Christoffersen et al. 2007).

A central view in CSE is that perception is active rather than passive, and guided by expectation. The control loop of the contextual control model (COCOM) presented by Hollnagel and Woods (2005) demonstrates the cyclic nature of how control is retained in a perception–action cycle, emphasising that we use the past to make sense of the present, and that the context is an integral part of people’s assessments and of how they act. The model, which builds on Neisser’s perceptual cycle (1976), is the basis for analysing the dynamic process of joint systems control and for interpreting how people take action where the context determines the actions. The core constituents of the COCOM control loop are “Events-Frame-Actions-Events-…” (see Fig. 1 in Sect. 4 for two interlinked such loops), which represent the continuous cycle of contextual feedback (events) shaping action through the current understanding (frame). Events are affected externally or by the actions taken, and the current frame guides perception. Central to the ability to control a process and adapt in an appropriate manner is sensemaking (Klein et al. 2006). Sensemaking is the process of structuring the unknown and can be described as the interaction of seeking information, ascribing meaning and acting (Weick et al. 2005). Making sense of a situation is an ongoing process which is constantly (and for the most part unconsciously) revised as the world around changes.
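To make the cyclic structure concrete, the following minimal sketch renders the “Events-Frame-Actions-Events-…” cycle as a simple loop. It is our illustration only, not part of COCOM itself, and all names (`Frame`, `select_action`, `process_feedback`) are invented for the example: events update the frame, the frame selects an action, and the action feeds back into the next events.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """Current understanding of the situation; revised as events arrive."""
    beliefs: dict = field(default_factory=dict)

    def assimilate(self, event: str) -> None:
        # Perception is guided by expectation: events are interpreted in
        # light of the existing frame rather than recorded neutrally.
        self.beliefs[event] = self.beliefs.get(event, 0) + 1

def select_action(frame: Frame) -> str:
    # The current frame determines which action is taken next.
    return "monitor" if not frame.beliefs else "adjust"

def process_feedback(action: str) -> str:
    # Actions change the controlled process, which generates new events.
    return f"feedback({action})"

frame = Frame()
event = "initial_cue"
for _ in range(3):  # Events -> Frame -> Actions -> Events -> ...
    frame.assimilate(event)
    action = select_action(frame)
    event = process_feedback(action)
```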

The notion of sensemaking was introduced by Weick (1995), who described it as a response to experiencing a surprise; the process of sensemaking is initiated when there is a discrepancy between what is observed and what is expected (Klein et al. 2010). Making sense of events thus presupposes a conceptual framework, or a mental model, which is the basis for the expectations that ascribe meaning to observed data. The role of expectations as an important driver in the understanding of ongoing events has been shown in studies of human-technical systems (Christoffersen et al. 2007; Woods and Hollnagel 2006) and in related areas in the social sciences (e.g. Dunbar 1997). In sensemaking terms, the conceptual framework is referred to as a frame (Klein et al. 2006, 2007). To construct a frame means that data are “fitted into a structure that links them to other elements” (Klein et al. 2007, p 118), which allows relationships between events to be identified and placed in a coherent fashion. A person’s background and goals guide the target of the search, and the understanding is modified based on what the search generates. For example, a farmer, a sailor and a meteorologist observing the same weather phenomena will search for cues relevant to their goals and construct an image of the weather pattern based on their individual expertise. When an observation does not fit the current frame, a surprise occurs, requiring an elaboration or a re-framing of the data, an active process to fill the “gap” (Klein et al. 2010). Sensemaking is thus not solely the activity of perceiving and interpreting input from the environment after the fact (retrospective) but the continuous process of fitting what is observed with what is expected (anticipatory), an active process guided by our current understanding, as illustrated in the Data/Frame (D/F) model (Klein et al. 2007). The D/F model differs from the perception–action cycle of COCOM in that it highlights specific activities, or the “strategies” used, rather than the concepts that constitute the continuous cycle. In this sense, the COCOM cycle can be seen as an integral part of each sensemaking activity, such as searching for a new frame or questioning a current frame. Sect. 4 offers a more detailed description of the D/F concepts, with a focus on the sensemaking activities identified in the current study.

Klein et al. (2007) found that key elements in data serve as anchors, that is, certain cues bring out the initial frame and are used to guide the search for more data. In this sense, an expert is not simply someone who can interpret the data, but someone who can recognise the right cues to “break” a frame and to identify a new, useful frame that explains the discrepancies. Malakis and Kontogiannis (2013) identified performance criteria used by air traffic controllers to guide the re-framing process and found that experts were better than novices at applying criteria that enhanced the identification of subtle cues, which, in turn, increased their operational flexibility.

In sensemaking, there is no end-point, no “full comprehension”; making sense is necessarily a continuous and dynamic process to match the changing environment. Weick (1995) argues that people generally do not have “the big picture”, but rely on how plausible a frame is, and use that to search for reasonable explanations. Klein et al. (2007) similarly discuss just-in-time frames, that is, we rely mainly on local cause-effect connections that we detect and not on a comprehensive mental model of an entire system. Similarly, regarding the complex automated systems in a modern airliner, it would be inconceivable for pilots to have full detailed comprehension of all technical systems.

The focus on frame construction differs from, for example, the widely used concept of situation awareness (Endsley 2006), which is a state (of knowledge) attained by an individual based on data, or inferences from data, in the environment and is used to make predictions about the future. Studies of sensemaking, on the other hand, are about the processes used to achieve such states (Klein et al. 2006, 2010; Malakis and Kontogiannis 2013). Also, failing to notice cues and displayed information is commonly described and categorised as “loss of situation awareness” (Endsley 2006). This view suggests that inattentiveness plays a key role, as something is “lost”, and fails to recognise the importance of how context and expectations guide the search for data, and the identification of and inferences drawn from them. Further, the view that expectations guide attention may explain why pilots manage routine training very well but, when faced with the same failures in an unexpected sequence, have trouble identifying the problem, as shown by Casner et al. (2013). In a sensemaking view, the focus shifts from questioning how to get better at being “situationally aware” to understanding the process of constructing frames in different contexts (for example, flight phases and environmental conditions).

The (re)-framing process in a cockpit environment involves multiple processes, such as searching for data from multiple sources, switching attention between tasks and prioritising (see e.g. CAA 2013). Unexpected events often increase the cognitive demands on the pilots in other areas as well, including the number of tasks and checklists, the frequency of communication and coordination, the management of automated systems and the adaptation of plans (Bergström et al. 2011; Billings 1997; Dekker et al. 2008; Pruchnicki and Woods 2013; Woods and Patterson 2000). Often, there is an element of uncertainty, making the retrieval of frames increasingly challenging, particularly in time-critical situations. A previously noted tendency is to rationalise anomalies to make them fit the current frame, also described as fixation error (De Keyser and Woods 1990; Sarter et al. 1997). Getting stuck, or fixating, on a narrow interpretation of the situation makes re-framing a challenge despite contradicting and ambiguous data (Feltovich and Hoffman 2004; Lanir 1986; Sarter et al. 1997).

In this paper, mismatches between the observed and the expected (surprises) are used as the starting point to examine the (re-)framing process of pilots in cockpit operations. Through interviews, cases of surprise in the cockpit have been gathered, and these have been interpreted and analysed through the lens of a CSE and sensemaking perspective, applying concepts from the COCOM model (Hollnagel and Woods 2005) and the D/F model (Klein et al. 2007). In the following section, the method is presented, followed by a results section including example cases. The discussion section brings together the findings by presenting the crew-aircraft sensemaking model, which adapts the original COCOM model to a cockpit environment and emphasises the sensemaking aspects of expectations and anticipatory thinking. Further, an adapted D/F model is presented to capture the sensemaking activities found in the presented examples. The article concludes with suggestions for further research.

2 Method

Eliciting expert knowledge in complex dynamic systems is challenging given the complexity of the tasks and work context and the interviewees’ ability to articulate this. The interviewer’s role requires the ability to uncover the intricacies of the specific situation, know when to ask further questions and feel confident that significant aspects are covered (Miller et al. 2006). The subjectivity of an interviewee further introduces several potential disadvantages, such as memory alterations and biases due to, for example, concerns about their own performance. However, to investigate the re-framing process, interviews may be the best means of gaining in-depth insights into the thought processes of the pilots. Other methodologies that can be considered to investigate re-framing include, for example, a review of incidents/accidents or observations of simulated tasks. Such methodologies may allow the analyst to access more reliable data (if measured) to investigate what circumstances lead to a surprise and to map the actions taken. However, they do not allow insights into the subjective experience of the individual pilot in naturally occurring but unusual situations. To this end, an interview methodology was chosen for the study.

The interview technique used was guided by two methods often applied in naturalistic decision making research, the critical incident technique (CIT) (Flanagan 1954) and the critical decision method (CDM) (Klein et al. 1989). The goal of such methods is to elicit expert knowledge from practitioners working in domains governed by complexity, time pressure and a dynamic environment. The methods offer a structured way to gather information regarding actual cases, using questions that relate to how people work in a natural environment. Further, the interviewee is given the opportunity to narratively describe an incident and the interviewer can guide what information is elicited through follow-up questions (Klein et al. 1989).

Although the focus of CIT and CDM is on critical incidents, it should be noted that for this study there is no requirement or implication that the incidents described were critical in the sense that they jeopardised the safety of the flight. The focus is on capturing knowledge regarding the re-framing process following surprise, irrespective of whether the incident was safety-critical or a routine event with a safe outcome. Due to the disadvantages of the methodology mentioned above (e.g. subjectivity and memory alterations), the case descriptions are seen as approximate accounts of the incidents and are used mainly as a source to investigate challenges and form hypotheses. Further, no attempts were made to draw objective or quantifiable inferences, as the examples vary in type and depth. Instead, the focus was to identify the constraints and cognitive demands in each context described. Although the challenges identified are context- and pilot-specific, patterns of sensemaking can be identified and compared across cases.

2.1 Procedure

The developed interview guide had two main parts. In the first part, the pilots were asked to describe recent situations where they felt surprised. Follow-up questions were used to encourage the respondents to reflect on their experience, provide contextual detail to their narrative and ensure that the most important aspects were covered. In the second part of the interview, pilots were asked to reflect on common sources of surprise in cockpit operations as well as coping strategies. Questions were used to ensure the following areas relating to surprise situations were covered: confusion and problem-solving, automation and system knowledge, manual control (and mode transitions), training, procedure applicability and team work/communication. Each interview lasted between 50 and 90 min, depending on the length and depth of the responses.

2.2 Participants

Semi-structured interviews were carried out with 20 pilots. The invitation to participate in the research was targeted to include pilots with experience on modern, fourth-generation airliners, encouraging a wide variety of backgrounds. The pilots took part voluntarily and were not compensated for their time. The experience of the pilots ranged from low-experience first officers to experienced captains, flight instructors, and training and safety managers from a number of different western European airlines. Overall, experienced crew predominated in the group; 14 of the 20 participants were instructors and examiners. The average number of flight hours of the participants was 10,892 and the average age was 49.

2.3 Analysis

The analysis was carried out in three steps: (1) transcription, (2) data tagging and (3) identifying patterns of re-framing. The interviews were transcribed in full. All identified cases of surprise situations were extracted from the transcribed interview data, a total of 48 cases. The cases were subsequently tagged according to the sensemaking activities and the key areas for investigation (see the list of areas in Sect. 2.1). The final step of the analysis was an iterative cross-case analysis to identify patterns of what enables and disables the re-framing process following surprise. Nine categories of challenges relating to the re-framing process were identified in the analysis. Table 1 presents nine cases, each one representing one of the nine categories. Also included in Table 1 is a brief summary of each case, the main challenge and the re-framing activity taking place. In the results section, each case is presented in more detail, and in the analysis and discussion section the re-framing activities are discussed. It should be noted that several of the identified challenges can be found in a single case (e.g. a majority of the cases involve some degree of uncertainty management).
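As a concrete illustration of steps (2) and (3), the sketch below shows one way the tagging and cross-case counting could be represented. The structure and all tag names are hypothetical, chosen to mirror the categories described in this paper; they do not reproduce the project’s actual analysis tooling.

```python
from collections import Counter

# Each elicited case is tagged with its main re-framing challenge and the
# sensemaking activities observed (example tags only; values illustrative).
cases = [
    {"id": 1, "challenge": "absence of salient cues",
     "activities": ["preserve"]},
    {"id": 3, "challenge": "conflicting data",
     "activities": ["preserve", "question", "elaborate", "abandon"]},
    {"id": 4, "challenge": "narrow interpretation",
     "activities": ["question", "elaborate", "preserve"]},
]

# Cross-case step: count recurring activities across cases to surface
# patterns in the re-framing process following surprise.
activity_counts = Counter(a for case in cases for a in case["activities"])
print(activity_counts.most_common())
```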

Table 1 Overview of cases, challenges and re-framing activities

3 Results

The cases elicited in the interview study offer a broad spectrum of surprise situations in cockpit operations. Nine challenges relating to the re-framing process emerge from the data (Table 1). Each of these challenges is contextualised and discussed from a sensemaking perspective using an example from each category.

3.1 Case 1: Absence of salient cues

Although a fault may be physically visible to the crew (e.g. an autopilot is turned off), it may be hard to detect if there are no salient cues suggesting that something is out of the ordinary, as exemplified in Case 1. A salient cue, in this context, is a cue that catches the attention of the perceiver. The understanding of a situation is guided by the current frame, and if a problem is neither expected nor salient it may be hard to detect. As mentioned in the interviews, this is seen as particularly challenging when there is a lot of “noise”.

Case 1: The crew forgot to switch on the second autopilot during the approach. This is only a problem if they have to make a go-around, as this causes the first autopilot to drop off, which is what happened in this case. After the go-around the crew thought that they were flying on autopilot, but the aircraft was actually trimmed nose up, since the autopilot automatically puts in a nose-up trim bias at the start of the approach. The two situations initially produce the same reaction at the go-around, the aircraft pitching nose up, but the reasons are entirely different. Rather than being actively controlled by the autopilot to pitch nose up, the aircraft is passively pitching while not being actively flown (by either autopilot or crew in this case), until the situation is corrected.

3.2 Case 2: Passive and insidious disturbances

Some pilots in the interview study felt that more passive and insidious disturbances (i.e. disturbances that build up without the crew being aware) are difficult to detect and a major challenge in modern aircraft. Passive disturbances are seen by these pilots as more likely to happen during cruise flight when crews are less actively involved in managing the flight path compared to the terminal flight phase.

Case 2: During cruise flight at high altitude, aircraft can have a vertical spacing of as little as 1000 feet. Thus, if an aircraft in turbulence is flying 200–300 feet above the cleared altitude and the autopilot reacts slowly to the offset, the deviation may grow to 400–600 feet of altitude loss or gain. Within seconds, the crew could therefore be outside the cleared altitude limits.

As in Case 1, there is a difficulty in detecting subtle cues when the problem is not expected. A surprise leading to a critical situation does not have to involve problematic interpretations of conflicting or hidden information. On the contrary, many surprise situations reported in the study are the result of external events, such as a late change from ATC or unexpected weather. It was mentioned that having to change plans, that is, to switch frames, always includes an element of surprise, even when the response is clear, such as a go-around or taking manual control.

3.3 Case 3: Conflicting data

The ability to interpret cues when there is too much, or conflicting, data is the most common challenge reported in the interview study. Case 3 is an example of this.

Case 3: During the descent, a crew received an audio warning that the aircraft’s airspeed sensors were providing different data, which means that the airspeed information from one of the sensors could be unreliable. To the crew’s surprise, the different airspeed indicators in the cockpit were not showing any discrepancies. The airspeed warning caught the crew’s attention; however, there was no visible indication of airspeed discrepancies, contradicting the warning information. Not knowing what the problem was, or even if there was an actual problem, the crew initially thought it must be a malfunction of the system. Next, they got a wind shear failure indication on the primary flight display, which they found odd. They discussed what was happening at the moment, but they could not see any differences. Then the autothrottle system unexpectedly increased power (during the descent) and the aircraft started to climb. At this point the captain disconnected the autothrottle and autopilot systems and flew manually. The altitude on the instrument panel blanked as a result of the disconnect. Further, the crew could not remember which altitude they had been cleared to (as this is usually programmed into the system). After contacting air traffic control again, the crew wrote down the altitudes and clearances to continue the descent. The captain had to fly manually, essentially without flight directors, until they intercepted the instrument landing system, at which point the flight directors came back. They felt lucky that the weather was good.

The Captain describes: “First you are confused by the malfunctions you see, the impressions, the warnings, the speed comparison and because you start looking to airspeed on both indicators and on the standby, and there was no difference in airspeed indication, so you are confused as to why there is a comparison warning and there is no difference in speed. Because if one airspeed is unreliable you switch to the other side. The switch is my side. And that takes a lot of bits to figure out are we missing something because you believe the system first and you can’t see any reason why you have that comparison light. Then the wind shear fail light came on, then [I] thought, this is complicated and the system doesn’t know really what to make of it. So that made a big difference, the wind shear warning failure … and it was early in the morning after a long flight, you try with all your bits you have in your head to figure out what is happening and you can’t figure it out, because it is all the same and still strange things happen and confusion took about 3–4 min and then when [we] said ‘ok, f*** the systems, put flight directors off, and flight indication off and fly by hand’. But you try to figure out, because you want to do something, you are in the cockpit to do something in these situations, but we couldn’t do anything, couldn’t do any selections of anything because there was no checklist, there was no common sense, there was no common sense in the whole system.”

Only once the aircraft deviates from the intended trajectory does the captain intervene to maintain control of the aircraft. As described by the pilot, the inconsistent feedback from the systems made procedures and checklists inadequate to support the crew. The crew are involved in the cyclic process of searching for a frame to map the events, and at one point they adopt a frame (a malfunction of the system), which guides their next actions and interpretations. This frame is, however, discarded as other, inconsistent failures appear. At this point, the crew go back to searching for a new frame. The crew are not able to identify a consistency in the observed symptoms and are thus not able to form a hypothesis of the actual fault(s). This can be contrasted with other examples from the interviews where pilots mention that observing several symptoms (elaborating a frame) increases the chance of identifying the problem.

3.4 Case 4: “Getting stuck” in a narrow interpretation

The example in Case 4 describes the typical structure of “getting stuck in a narrow interpretation”: a pilot searches for the cause of a failure and quickly finds evidence supporting the initial hypothesis. Subsequent actions focus on taking the necessary steps to manage the identified problem, and the crew fail to acknowledge information contradicting their hypothesis. Again, the case illustrates how the crew’s expectations steer their actions and what data they search for, thus missing important cues that would contradict their assessment. Several respondents mention having experienced similar situations. One respondent mentioned that “checklists and procedures encourage you to keep moving, not to understand the problem”.

Case 4: During cruise flight the captain leaves the cockpit for the toilet. After the captain has left, a fuel imbalance warning pops up. After checking the amount of fuel in the wing tanks, the first officer notices a 600 kg difference between the left and right tanks. The first officer decides that the reason has to be a fuel leak and pulls out the corresponding checklist (note: the checklist includes check items to exclude other possible reasons for a fuel imbalance). When the captain returns to the cockpit, the first officer confronts him with the fuel problem and both are convinced that there is a fuel leak. Continuing through the checklist, the crew look for evidence to support their assessment. The continuation of the fuel leak checklist leads to the shutdown of a healthy engine. Because of the shutdown, the crew never notice that, although the fuel in one tank is low, the other is still at its initial value from before take-off. The real failure was a damaged cross-feed valve.

3.5 Case 5: Sudden changes and rapid transition

The dynamic nature of cockpit operations can contribute to a routine approach turning into a potentially critical situation in a matter of seconds (also relevant in Case 2 above). In Case 5, the surprise element of pressing the wrong button creates a delay in response as the pilots try to make sense of what happened. The tempo of events is so high that the crew have difficulties in re-framing and, hence, in taking action. The first officer’s narrative offers insight into the increased cognitive demands experienced as he concurrently monitors the captain, the aircraft status and the surroundings, and projects next steps (preparing to take control if necessary).

Also portrayed in this situation is the challenge of not knowing when to take control. The ambivalence of the decision to take control is mentioned in several of the interviews, commonly referring to pilots monitoring automated systems. In this example, the problem is knowing if and when to take control from another crew member.

Case 5: On approach, the captain, new on the aircraft type, wants to disengage the autothrottle but mistakenly presses the go-around button (which is close to the autothrottle disengage button). The autopilot and autothrottle respond quickly to “go-around mode”, instantly giving full power and raising the aircraft nose 30°. By the time the captain has disengaged the autothrottle and autopilot, the aircraft has gained 600–700 feet of altitude and 30–40 knots of speed, and the captain decides not to continue the approach.

The first officer describes: “It was switching the picture in your mind of being in an approach into the picture of an automatic approach into the picture I have to do a manual go-around. And this took some time and during this time the aircraft accelerated and pitched up and things like this. We never reached any limit, but it was a challenging situation. […] I was a little bit on the reluctant side to give any help or advice [as the supervising captain wanting to see how he reacted]. The only communication I did was I said ‘you hit the go-around button’ and then watched what he did. He acted correct but it took him some time and I was concerned because he didn’t have much experience on the aircraft: should I take over or should I leave it to him? I left it to him and he did it very fine, but he needed his time”.

3.6 Case 6: Coping with insufficient system knowledge

In the next example, Case 6, there is a glideslope warning, but no other indication that the aircraft is not on the intended flight path. Due to insufficient system knowledge, the crew experience an uncomfortable uncertainty at a critical flight phase.

Case 6: A crew flying a Lateral Navigation/Vertical Navigation approach get a glideslope warning, the aural “glideslope” callout, while they are on the calculated path. During an approach, such a callout has to be rectified or lead to a go-around. There was no deviation visible in the cockpit on any part of the flight path, “everything dead centre”. The problem, it turns out, is that the calculated path depends on the QNH (altimeter pressure setting) value received from the airport. The airport-reported QNH was apparently not accurate, and when the captain used this erroneous QNH, the system had a mismatch between the calculated path and the targeted end-point (which is based on QNH). Since company policy is only to use this approach procedure in daylight visual conditions, the crew could visually confirm that they were on the glideslope and that it was a spurious warning, and continue the approach.

3.7 Case 7: Multiple goals and trade-offs

As with all air travel, there are a number of overarching goals for a flight, such as getting from A to B, ensuring safety, fuel efficiency and keeping on schedule. Monitoring and control activities are carried out to fulfil these goals and manage trade-offs based on current conditions. The surprise in Case 7 exemplifies how multiple activities and goals in the trade-off space can lead to confusion. This case is also representative of the challenge described in Case 9, “roles and communication”.

Case 7: On a long-haul flight there were three pilots, which means that the pilots can take a rest during the flight. The captain of the flight was also an instructor, and thus allowed during cruise to sit in the right seat as well as in the left seat. The pilots agreed to rotate clockwise twice, so that during landing everybody would be back in their normal seats. Due to the strong headwind, they decided to fly at high speed to avoid delays. Halfway, they hit more turbulence than expected, moderate to severe. At the time, the captain, who was pilot flying, sat in the right seat. He describes focusing his attention on the cabin, considering how the turbulence would affect the cabin crew with trolleys and nervous passengers. While thinking about the effects in the back of the plane, he told the first officer to reduce the speed (a task that would normally be performed by the pilot flying), because turbulence brings speed variations and a greater risk of entering an overspeed. The captain had mentally resumed his command role of pilot not flying, and was asking the first officer (who was pilot not flying) to make the changes to the airspeed, rather than doing it himself, as he should have done as pilot flying.

The captain describes: “I had the wrong mental state at the moment and the wrong focus of attention and the wrong priorities. I was pilot flying so I was going back in management mode and captain mode but my priority is flying the aircraft so I didn’t remember if I was pilot flying or pilot not flying which is extremely important. I think a lot of incidents they happen because there is confusion ‘do I have to fly the aircraft or do I have to solve the problem’.”

3.8 Case 8: Coping with uncertainty

As a crew departs from the airport and sets out for a long night flight, the autopilot keeps turning off and the crew cannot figure out why. In this case, they have to assess the situation and decide whether they are willing to continue the flight given the uncertain conditions.

Case 8: The flight was a night flight and the weather was fine, with no clouds and visual meteorological conditions. During the initial climb, the crew retracted the gear and the flaps and afterwards tried to engage the autopilot. After about 5 s, the autopilot disengaged automatically, which led to the execution of the autopilot re-engage checklist. This procedure demands that the aircraft be trimmed before the autopilot is re-engaged, but the aircraft was already in trim at that moment. The crew tried to re-engage the autopilot several times, with the same outcome: it always disengaged after a few seconds. At this point, the pilot monitoring noticed that the left gear indication was green and red at the same time, which indicates that the gear is in the transition phase between extended and retracted. Contrary to the indication, there was no audible sound of the gear in transit. The pilot flying levelled the aircraft off at about 5000 ft and flew manually. He then left his seat to check the circuit breaker panel at the rear of the cockpit for faults, while calling for help from the technicians on the ground and at the same time communicating with air traffic control.

The interviewee stated that he had an uncomfortable feeling after having checked the circuit breakers without finding anything wrong. He was tired because of the night flight, and had in mind the long flight time and that the remainder would have to be flown manually if they could not find the autopilot failure. As the ground staff recommended recycling the circuit breakers, the crew simultaneously decided to return to the departure airport and land, because they had no clue what could have been the reason for the failure and did not want to continue to their destination in this condition. The pilot later found out that the probable cause of the failure was a sensor problem that held the aircraft in ground mode, which meant the autopilot would not stay engaged.

3.9 Case 9: Roles and communication

Team sensemaking can facilitate solving a problem, as the knowledge and experience of several people are available for tackling it. However, having multiple opinions may also create problems, as a course of action has to be agreed upon.

Case 9: On this long-haul flight there were four pilots in the cockpit: a captain, two first officers and a second officer. After take-off the crew realise that the nose wheel has not properly retracted; the cockpit indication is that it is not in the safe retracted position. The crew cycle the gear, down and up, which does not resolve the situation. They come to the conclusion that they have to return to the airport. In order to land safely they must dump fuel, which will take about an hour, leaving ample time to discuss and agree on the safest way to land. There was also a maintenance person on the aircraft as a passenger, so the four pilots and the maintenance technician discussed the options. The maintenance technician suggested that he knew exactly what had happened: a hydraulic fault that might cause the nose gear not to retract, but that also means the nose gear may be turned off-centre. Since it is night, it is not possible to ask the control tower or another flight to check the alignment of the nose gear; if it is too far off-centre it may cause problems on landing, driving the aircraft off the runway or causing sparks or fire. There was a lot of discussion between the pilots about their options, making it difficult to decide on a course of action. Further, the captain was not very assertive, contributing to the difficulties in assessing the options. Eventually the two first officers and the second officer agreed on a strategy to land.

4 Analysis and discussion

Investigating the sensemaking process following surprise in cockpit operations means directing the spotlight on the continuous loop of retrospective and prospective processes pilots use to explain an observed mismatch. A key aspect of the sensemaking process is to examine how frames are constructed. Two models are used to illustrate the findings of the analysis. The crew-aircraft sensemaking model outlines and connects the core constructs of the re-framing process (Fig. 1), and the D/F model (Fig. 7) highlights the activities, or “strategies”, used by the pilots as they re-frame.

4.1 The crew-aircraft sensemaking model

The crew-aircraft sensemaking model (Fig. 1) outlines the core concepts of the re-framing process and the sensemaking activities (further developed from Rankin et al. 2013). The model builds on (1) the contextual control model (COCOM) (Hollnagel and Woods 2005) and (2) the Data/Frame (D/F) model (Klein et al. 2007). The COCOM model offers a cyclical account of perception and action, and the D/F model describes the re-framing activities following surprise. Using a cyclical model of perception–action such as COCOM illustrates the core of the sensemaking view: feedback from the environment modifies the current frame, which in turn guides the search for more information (and the actions taken). In this sense, pilots are, in a cyclic fashion, constantly moving forward to further adapt the frame. To illustrate the joint system of the cockpit in the crew-aircraft sensemaking model (Fig. 1), two COCOM loops and an aircraft loop have been interlinked. The interconnected loops in the model represent the dynamics between the two crew members (pilot flying (PF) and pilot monitoring (PM)) and the aircraft. Two main loops for each crew member describe the retrospective and prospective processes of re-framing, which together serve the purpose of ascribing both meaning and action (Weick et al. 2005). Red arrows for both PF and PM (jointly, concurrently and in an iterative and cyclic manner) represent functions and processes (as described by Klein et al. 2003). Yellow arrows describe aircraft processes and external events and disturbances. Events, feedback and cues from the process to be controlled modify and (re)construct the current frame (held by PF and PM) of the situation. This part of the loop focuses on cue seeking, data gathering and problem detection.

Fig. 1 Crew-aircraft contextual control loop. Core concepts of re-framing include the retrospective and prospective processes used to construct a frame and take action (illustrated on the left side). Sensemaking activities in the re-framing process identified in the nine cases are questioning, preserving, elaborating, comparing, switching and abandoning the search for a frame, and rapid frame-switching (illustrated on the right side)

Further, the cases have demonstrated the importance of anticipatory aspects (expectations) of sensemaking, which has been emphasised by adding the outer loops to the original COCOM model. Both pilots engage in a loop of anticipatory thinking, which focuses on functions and processes of (re-)planning and mental simulation, generating expectations based on their current frame. Expectations reciprocally affect the frame based on the pilot’s knowledge and experience. Coordination and maintaining a joint understanding, or common ground (Clark 1996; Klein et al. 2004a, b), is done through communication in the cockpit (blue arrow), which also enables both loops to be a crew effort through comparing and elaborating frames. The central sensemaking processes (prospective and retrospective) are included in the model, and the sensemaking activities from the original D/F model and those found in the cases are mentioned on the right side of the figure and include: question, preserve, elaborate, compare, switch, abandon and rapid frame-switching.

The sensemaking activities are described at a higher degree of abstraction in the D/F model (Fig. 7), which collects the re-framing activities, or “strategies”, described in the nine cases. In the following sections, the re-framing activities are presented in more detail, followed by the resulting D/F model in Sect. 4.7.

4.2 Anticipatory thinking

The anticipatory part of sensemaking (the outer loops on either side of Fig. 1) is in focus in the cases presented, as a main ingredient in frame construction. Figure 2 illustrates how expectations (dotted circle) guide attention as pilots seek to confirm the current frame. Anticipatory thinking is used not only to cope with the unexpected after the fact, but is also a means to avoid surprises. “Staying ahead of the aircraft” (building expectations) was one of the strategies mentioned in the interviews to prevent surprises. Monitoring activities are seen as a tool to shape the mind-set to be alert, even in low workload phases. The strategy was also described as “predicting the outcome of trends by monitoring the aircraft”, which can be seen as the continuous effort to update frames through an intuitive information source where pilots are able to see trends early on and help anticipate problems before they arise.

Fig. 2 Anticipatory thinking. Frames include expectations (dotted circle) and observed data (full circle). Expectations guide attention, and when confirmed they are incorporated into the frame

Anticipatory strategies such as those mentioned above are key to increasing pilots’ abilities to question frames, switch frames rapidly and counteract getting stuck in frames, or even being surprised in the first place, and their importance warrants further insight into what enables such strategies. As discussed by Klein et al. (2007), what distinguishes an expert from a novice is not the reasoning process but the knowledge base they work with. Feltovich et al. (1997) found that novices identify relevant cues but are not sure what to do with the knowledge, and that experts (compared to novices) are better at generating anticipatory actions. Malakis and Kontogiannis (2013) found that more experienced air traffic controllers are better than novices at applying a set of performance criteria that enhance the identification of subtle cues and increase flexibility. These findings imply that it may not be sufficient to rely on simple strategies to cope with unexpected events (e.g. the oft-quoted basic piloting principle of “aviate, navigate, communicate”); it requires experience to connect the dots, discern the important cues, identify anchors and take action. It requires more elaborate frames. Given the reliable and advanced nature of modern airliners, there is a paradox between the pilots’ ability to build expectations, and thereby cope with unexpected system actions, and the design of the aircraft and its operating procedures, which aim to anticipate and prepare for system failures through careful design of technology, procedures and training. Receiving less exposure to variation in aircraft operations may thus be undermining pilots’ abilities to detect anomalies early on, mitigate surprises by “staying ahead of the aircraft” and cope with escalating situations. As demonstrated in the interview cases, adapting to a changing environment in the cockpit requires something different from what has been anticipated through system design: it requires detection of anomalies, framing and re-framing of situations and knowing what actions are applicable.

4.3 Question and preserve frames

Being surprised is a trigger to question the current frame, as shown in Fig. 3. Triggers may be directly visible or audible cues, or cues that require a certain expertise and attention management (Woods and Sarter 2010). Challenges associated with questioning the frame exemplify the very core of the crew-aircraft sensemaking model: in order to detect inconsistent data, the individual or team has to focus their attention on the specific data that deviate from the frame at just the right time. In most of the described cases, there are clear indications such as visual and audio warnings (e.g. Cases 3, 6, 7 and 8) or notable physical changes in the environment (e.g. Case 5). However, detecting that there is a problem, that is, questioning a frame, is not always straightforward.

Fig. 3 Sensemaking activity questioning a frame. When an unexpected event occurs and the data (the cloud) do not fit the frame, this leads to a questioning of the frame

Case 1 exemplifies a crew preserving the frame in a situation where cues are nonexistent or subtle. The crew believe they are flying on autopilot, when in fact the aircraft is flying in a trimmed state, and the difference is difficult to detect. The reason for the mismatch (forgetting to switch on the second autopilot) occurred earlier and is likely to be dissociated from the current status. In Case 3, the crew initially discard the warning and preserve their original frame, assuming it is a system fault that does not warrant any further attention. Only once other unexpected behaviour starts to occur do they question the frame and search for a new one.

Getting stuck in a frame, or preserving a frame, and using this frame to explain the discrepancies observed is further exemplified in Case 4. Initially, the crew question and elaborate the frame, determining that the cause is leaking fuel. The initial hypothesis guides the actions taken, and alternative causes for the failures are left unexplored. The findings are consistent with previous studies showing that crews often do not notice inconsistencies or faulty values, although the data are visible on the system displays (Woods and Sarter 2010). It also demonstrates that once a plausible frame is identified, the search for alternative causes stops (Klein et al. 2007), something that can lead to critical events going undetected. The problem of getting stuck in one frame and failing to see inconsistencies or look for alternative explanations has been described as fixation error (De Keyser and Woods 1990).

4.4 Team sensemaking and seeking a new frame: elaborate and compare

Seeking a new frame is the process of actively searching for anchors to identify a frame that can explain what is being observed. One strategy is to gather additional information, or elaborate a frame (Fig. 4, left image). By widening the already existing frame, the mismatch can be resolved, as exemplified in Cases 2, 4, 5 and 7. The elaboration does not require any change in the understanding of the system and ongoing process, but rather an update, also described as a situational surprise (Lanir 1986). Note that although an elaboration is made and the mismatch is resolved, this does not necessarily imply that the elaborated frame is correct, as in Case 4 (see previous section).

Fig. 4 Sensemaking activities elaborating a frame (left image) and comparing frames (right image). A frame is elaborated when new data are fitted to the current frame. Frames are compared when different hypotheses are tested

Another strategy is to compare and match available frames, that is, to test plausible hypotheses against the observed mismatch (Fig. 4, right image). We hypothesise that this strategy is used more frequently when the degree of uncertainty is high, such as in Cases 3 and 8. The narratives in Cases 3 and 8 describe the search for a new frame and the frustration of not identifying one that can account for all the inconsistencies. Comparing and matching frames can be done by the individual or, as shown in several of the cases (3, 6, 8 and 9), as a team (as illustrated in Fig. 1).

Sensemaking as a team effort involves the co-creation of knowledge, a process which may vary in shape and form, as discussed by Klein et al. (2010a, b). In the cockpit, both pilots are responsible for detecting and acting on issues that come up, and training, procedures and checklists are generally designed to facilitate joint understanding, or common ground (Clark 1996; Klein et al. 2004a, b). Team efforts are demonstrated in several cases where pilots share ideas to elaborate and compare frames (e.g. Cases 3, 6, 8 and 9). Team sensemaking may also be a matter of helping another person identify the right frame by providing an anchor (i.e. a cue that allows identification of the right frame, or “putting the pieces together”), as exemplified in Case 5 when the first officer informs the captain that he mistakenly pushed the “go-around” button. Team sensemaking can, however, also obstruct the re-framing process. In Case 4, it can be argued that the inclusion of additional frames from the captain was obstructed by the first officer initially presenting “the problem” (i.e. the frame). In Case 9, the pilots describe the challenge of deciding on a course of action when there are different opinions and no strong leader. Also playing into team sensemaking abilities in the cockpit is the fact that the roles and responsibilities of the two pilots are different, and therefore so are the frames used to anticipate events and plan actions. This is exemplified in Case 7, where one pilot is confused about the roles and overlooks critical cues connected to the current role.

4.5 Rapid frame-switching

The examples of rapid frame-switching in Cases 2, 5 and 7 show how tightly connected the prospective (expectations) and retrospective processes of sensemaking are, in the sense that the current frame includes a plan for upcoming actions, and if this frame has to be updated, this may take precious seconds. As shown in Fig. 5, new data require a new frame, and the short arrow illustrates the time pressure. The challenge of this task is further increased as pilots have to switch between multiple tasks, which may result in deferring or omitting tasks, as described in detail in the studies conducted by Loukopoulos et al. (2009). In Case 2, the automation compensates for turbulence up to a point and then shuts off, leaving the pilots to rapidly take over. The challenges of having to take over and fly manually when the automation can no longer handle a situation have been discussed in more detail in other research (e.g. Woods and Branlat 2011; Woods and Sarter 2000).

Fig. 5 Sensemaking activity rapid frame-switching. Changing the course of action may take longer than desirable due to the short time frame pilots have to evaluate, select and perform actions

4.6 Abandon the search for a frame and take action

In cockpit operations, swift actions may be required regardless of situational ambiguities. Although most system failures and contextual variations have a prepared response through checklists and procedures, as demonstrated in the cases, a major problem for the pilots is making sense of what is going on, and the inability to do so may create a challenge in identifying an appropriate response (e.g. Cases 1, 3, 7 and 8).

Deciding on a course of action in an ambiguous situation often involves deciding when (and if) to go from being a monitor of automated (and highly reliable) systems to being a more direct controller of the aircraft. Case 3 exemplifies this process: the crew initially let the automation handle the situation, but subsequently decide to take over when “enough” inconsistencies occur and the aircraft deviates from the intended flight path. The first warning the crew get is disregarded, as they assume it is a system failure (preserving the frame). As more faults start to appear, they question the frame again and seek to elaborate their frame to explain the discrepancies. The crew are not able to identify a plausible frame, and as the aircraft deviates from its intended flight path they have to act. The crew abandon the search for a frame and take control of the aircraft through manual flying, that is, adopting a known frame (Fig. 6). Rather than continue troubleshooting to fulfil the goal of explaining the discrepancies, the crew re-structure the joint system in a way that allows them to build a new, coherent frame, take action and thus fulfil the goal of controlling the aircraft. Accepting a lack of understanding of the situation while knowing enough not to trust the automation was described by several interviewees.

Fig. 6 Sensemaking activity abandoning the search for a frame. When unable to identify a plausible frame, the search may be abandoned

Although in some cases the decision to take manual control seems clear, the respondents mention that “knowing when to take over” can be a challenge. On the one hand, it was felt that manual control should be resumed when confused about what the automation is doing. This strategy enables the pilots to reduce complexity, that is, to “identify what you have, not what you don’t have”, and apply a known frame. It is seen as important to “aviate first”, referring back to the basic piloting principle of doing things in the order “aviate, navigate, communicate”. However, several interviewees also felt that there are many situations when it is better to “let the automation handle the situation”, as this minimises the workload and may be of assistance in a confusing situation. Such a situation may be, for example, sudden changes caused by a mountain wave or a squall line. A rule of thumb mentioned during the interviews was to “sit on your hands” and evaluate the situation. That some situations require rapid intervention (e.g. rapid frame-switching) makes it increasingly difficult to know when to stop and evaluate the situation first.

Another aspect of switching the goal from “understanding the problem” to controlling the aircraft is that the pilots’ frame is still incomplete, in the sense that the crew are confused about the state of the aircraft and systems and about the further consequences this may have. A crew prioritisation strategy such as “aviate first” inevitably means that certain tasks and goals are deferred to prioritise others. Although this coping strategy is necessary, it makes the crew-aircraft system more susceptible to performance breakdowns such as “falling behind the curve” (Woods and Branlat 2011), as the prioritised activities are time-consuming and the consequences of actions are uncertain.

The importance and challenges of knowing when to take manual control can be seen in recent accidents. For example, the crew of Qantas flight QF72 were praised in the accident report for their manual handling of the aircraft in response to an in-flight upset caused by faults in the aircraft systems (ATSB 2013). In contrast, the manual intervention of the flight crew of Air Canada flight AC190 in response to a wake turbulence-induced upset was suggested as a contributory factor in the incident (TSB 2008). The interviewees in the study mentioned that taking control too quickly may be an overreaction and exacerbate the situation, a problem that unfortunately has been an aspect of the investigations of recent accidents such as flights AF447 (BEA 2012) and QZ8501 (KNKT 2015).

The decision to take control is not only troublesome when it comes to the automated systems, but may also concern another crew member’s ability to cope with the current situation, as exemplified in Cases 5 and 7. A decision to take over as pilot flying (PF) may also include additional considerations, such as social and organisational factors. In Case 5, for example, the high level of experience of the first officer and the low level of experience of the captain on the particular aircraft type were important factors, as was the fact that this was a supervision flight. The concern expressed by the first officer illustrates the multiple goals being juggled: keeping the aircraft on the intended flight path, supporting the PF without interruption and avoiding any potential future problems for the captain (e.g. consequences of making mistakes during a supervised flight).

4.7 Linking the sensemaking activities

Figure 7 shows the relations between the sensemaking activities described in the cases above. Two sensemaking activities have been identified that are not part of the original D/F model (Klein et al. 2007): rapid frame-switching and abandoning the search for a frame. Both activities are closely connected with the ability to stay in control following unexpected events. The surprise factor may in some situations lead to discarding the data and preserving the frame, if the data are not seen as relevant. However, in most cases it will lead to a questioning of the frame. Following questioning of the frame, several activities have been identified: elaborating the frame to include the new data, rapid frame-switching to re-gain control of the situation and the iterative process of seeking a new frame through comparing frames. If a plausible frame is identified, a switch to this frame is made. If a frame cannot be identified in a timely manner, the search for a plausible frame may be abandoned, and a new goal will be prioritised using a known frame (e.g. turning off the automation).

Fig. 7 Data/Frame (D/F) model showing the sensemaking activities in the re-framing process following surprise found in the nine cases. Two sensemaking activities have been added that are not part of the original D/F model (Klein et al. 2007): rapid frame-switching and abandoning the search for a frame
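To make the flow in Fig. 7 concrete, the following minimal sketch renders the re-framing activities as a simple decision routine. It is purely illustrative: the function, its boolean conditions and the activity names are our own simplification of the model, not part of the D/F theory or of any flight-deck system.

```python
from enum import Enum, auto

class Activity(Enum):
    PRESERVE_FRAME = auto()      # discard data, keep the current frame
    ELABORATE_FRAME = auto()     # extend the current frame with new data
    COMPARE_FRAMES = auto()      # seek/compare candidate frames, switch if plausible
    RAPID_FRAME_SWITCH = auto()  # switch frames at once to re-gain control
    ABANDON_SEARCH = auto()      # adopt a known frame (e.g. automation off)

def reframe(data_relevant: bool, immediate_action_needed: bool,
            candidate_frame_available: bool, out_of_time: bool) -> Activity:
    """Hypothetical walk through the re-framing flow following surprise."""
    if not data_relevant:
        return Activity.PRESERVE_FRAME
    # Surprising, relevant data lead to questioning of the current frame.
    if immediate_action_needed:
        return Activity.RAPID_FRAME_SWITCH
    if candidate_frame_available:
        return Activity.COMPARE_FRAMES
    if out_of_time:
        return Activity.ABANDON_SEARCH
    return Activity.ELABORATE_FRAME

# Example: no plausible frame has been found and time has run out.
print(reframe(True, False, False, True))  # Activity.ABANDON_SEARCH
```

The sketch necessarily flattens the iterative, “messy” character of re-framing described in the cases into a single pass; its purpose is only to show how the activities relate as alternative outcomes of questioning a frame.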

4.8 Implications for sensemaking theory

The results from this study reveal several new insights regarding the D/F theory of sensemaking (Klein et al. 2007). First, the crew-aircraft sensemaking model offers a description of the D/F theory as part of a joint system, including the connections between the two pilots and the aircraft systems, as well as between a perception–action cycle and sensemaking concepts. Although many studies target pilots’ understanding at particular points in time, that is, their situation awareness (Endsley 2006), the processes that explain how they got there are relatively understudied. A key contribution of this study is thus the application of a sensemaking perspective to cockpit surprise. Further, the crew-aircraft sensemaking model emphasises the retrospective and prospective processes of sensemaking, demonstrating the relationship between frame construction, expectations (anticipatory thinking) and actions. These connections underline the significance of expectations and actions as part of the decision process to test hypotheses and seek anchors to identify new frames. Similarly, the output from the automated systems and cues in the environment (e.g. aircraft behaviour, weather) affect the crews’ ability to identify anchors and decide on a course of action.

Due to the sample size and the previously mentioned drawbacks of individual accounts provided in retrospect, the generalised findings await further validation. However, the relevance of the key finding that expectations guide problem detection, situation assessments and the ability to take action should also be viewed in the light of related studies of joint cognitive systems, including the original studies of the D/F model (Klein et al. 2007), the study of sensemaking in air traffic control (Malakis and Kontogiannis 2013), studies of anomaly response (Watts-Perotti and Woods 2007) and of how expert practitioners extract dynamic data from ongoing events (Christoffersen et al. 2007). Although studies of sensemaking have commonly been case based, the research conducted by Christoffersen et al. (2007) offers empirical evidence of the importance of expectations as domain experts in human-technical systems identify key pieces of information and make sense of multiple ongoing processes. Research in the social sciences further substantiates the importance of the prospective processes in detecting that which is unexpected (e.g. Dunbar 1997).

Second, several of the sensemaking activities identified in the original D/F model (Klein et al. 2007) are also found in the cases described in this study (and Fig. 7), including questioning, elaborating, preserving and comparing frames. Noteworthy from the cases (in particular Cases 3, 8 and 9) is the complexity and the “messiness” of the re-framing process. A re-framing process may include a complex sequence of questioning, seeking, preserving, re-questioning, elaborating and comparing frames within a short time frame. Elaborating and comparing frames appear in some cases to be performed concurrently, suggesting that the activities should not always be studied individually but as part of several ongoing activities in the re-framing process. Also important to note is that the cases show that although the crew elaborate, compare and identify a frame that matches what they observe, this does not necessarily mean that the frame can account for what is actually going on. Likewise, preserving a frame does not necessarily require discarding data (as described in the original D/F model); it could also mean that no cues are available to the crew, or that the crew initially had the correct frame and that whatever caused the surprise can be understood within the available frame.

Third, sensemaking activities in addition to the ones identified in the original D/F model (Klein et al. 2007) have been included: rapid frame-switching and abandoning the search for a frame (Fig. 7). Both activities are closely connected with the ability to stay in control following unexpected events. The examples of rapid frame-switching show how tightly connected the prospective (expectations) and retrospective processes of sensemaking are, in the sense that the current frame includes a plan for upcoming actions, and if this frame has to be updated, the recovery may take precious seconds as an aircraft suddenly deviates from the intended flight path. Abandoning the search for a frame and building a known frame is a common strategy mentioned to cope with uncertainty. Knowing that there is not enough knowledge, enough data or enough time to troubleshoot and identify the problem is a critical ability. The strategy requires knowing when it is appropriate to abandon the goal of trying to resolve the ambiguities of the current frame and instead focus on the goal of controlling the aircraft, thus switching the frame altogether.

4.9 Implications for training

The basic philosophy in training programmes today is to emphasise the procedures for particular failure situations, as recommended by the regulators or manufacturers. Training scenarios largely focus on pre-defined skills in context-specific scenarios where pilots know what to expect, leaving exposure to variations, multiple system failures and unexpected events to mature through experience. The shortcomings of current training are being recognised in industry today, and attempts to improve it include international industry initiatives such as evidence-based training (EBT) (ICAO 2013). EBT is a data-driven approach in which the needs of the operation are identified through analysis of operational flight data, allowing training programmes to be more flexible rather than regulatory-prescribed. While existing airline pilot training requirements “are largely based on evidence from hull losses from early generation jets, and on a simple view that, in order to mitigate risk, simply repeating an event in training programmes was sufficient” (ICAO 2013, p. I-1-1), EBT aims not only to use more current events but also to assess crew performance based on a number of key competencies. EBT can be implemented under current regulations for alternative training concepts (see Footnote 7) and is, along with other recent training initiatives, a sign of the aviation industry aiming to make training more relevant and effective (see also Learmount 2011, 2014; Varney 2012).
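As a loose illustration of the data-driven idea behind EBT, the sketch below counts recurring event types in fictional flight-data-monitoring output to rank candidate training topics. The event labels and the ranking function are hypothetical inventions for this example; real EBT programmes draw on far richer operational data and structured competency frameworks.

```python
from collections import Counter

# Fictional flight-data-monitoring events; a real EBT programme would derive
# these from an airline's operational flight data analysis.
events = [
    "unstable_approach", "go_around", "unstable_approach",
    "gpws_warning", "unstable_approach", "manual_handling_exceedance",
]

def rank_training_needs(events: list[str], top_n: int = 3) -> list[tuple[str, int]]:
    """Rank recurring operational event types to prioritise training topics."""
    return Counter(events).most_common(top_n)

print(rank_training_needs(events))
# [('unstable_approach', 3), ('go_around', 1), ('gpws_warning', 1)]
```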

An issue to consider regarding the emerging training programmes is the characterisation of the identified evidence-based problems (see also Klein et al. 2016), that is, answering the question of which evidence should be collected and how it should be characterised in order to improve training. Although a shift has been made to target current issues (rather than evidence from early jets), many training programmes still focus on aiding pilots to tackle specific known problems. This approach does not generally train the processes involved in problem detection and problem identification, or the interacting variables and conditions. Findings in this study suggest the need for training programmes and pilot examiners to support pilots in identifying the connections between system parts and in identifying critical cues, rather than the specific procedures for specific incidents that have occurred recently. This could mean, for example, emphasising the underlying sensemaking aspects that may be common to multiple situations, using the categorisation of these processes as presented in this article. This research suggests that training to cope with surprise (a concept also mentioned in the ICAO (2013) report) should be preceded by a thorough understanding of how crews (re-)construct their frames in response to unexpected events. This type of scenario could lead to consideration of what the critical frame-breakers are, that is, the relevant cues that help crews interpret and assess an ongoing situation.

Again, the focus in training should be on the process by which pilots cope with surprise and uncertainty, rather than on identifying a lack of competencies or a “loss of Situation Awareness (SA)”. A recent study of the application of a competency assessment tool shows large inconsistencies in how flight examiners assess the different competencies (Weber et al. 2014), suggesting that broad categories pose validity issues for assessment. The inconsistencies were found to be particularly evident for the category of “situation awareness”, and Weber et al. (2014) hypothesise that this result is due to the difficulty examiners face in theorising about what is going on in a pilot’s head. The same study further found that the assessment of SA was highly coupled to the ratings of other competencies. In the light of this article, these results are not surprising. The analysis of the cases shows that the complexity of sensemaking (i.e. the processes “leading up to SA”) involves multiple intertwined processes and is a highly contextualised activity, and is thus difficult to summarise in one over-arching category. The abilities to identify cues, diagnose problems and take action are all based on sensemaking, which is why it is necessarily coupled to other skills. The findings of the current study suggest that the underlying training issue in coping with surprise is to understand and support the process by which pilots frame and re-frame data based on their knowledge and available cues. By shifting focus to understanding the process by which pilots search for data, identify relevant cues, manage uncertainties, make trade-offs, re-frame and decide on a course of action, a more in-depth understanding of breakdowns in crew-aircraft coordination can be gained to inform training design.

5 Summary of findings and future research

The findings of this study reveal several important issues regarding the challenges and possibilities for pilots to maintain control in surprise situations. The cases described suggest that pilots have difficulties in making sense of events and re-framing following surprise, sometimes leading to difficulties in identifying an appropriate response. Crews struggle to elaborate and build coherent frames within the limited time available.

The main contributions of this study are:

  • Analysis of surprise situations demonstrating the difficulties pilots have in re-framing following surprise. To cope with most disturbances and failures, there is a prepared response in the form of procedures and checklists. However, the cases show that the difficulty lies in understanding the situation and, as a result, in identifying which response is appropriate.

  • Anticipatory strategies to “stay ahead of the aircraft” are used to keep frames updated and avoid surprise. As expectations guide attention, anticipatory strategies are key to increasing pilots’ abilities to question frames, to switch frames rapidly and to counteract getting stuck in frames or even being surprised in the first place. The findings imply that it may not be sufficient to rely on simple strategies to cope with unexpected events; coping requires more elaborate frames that are built through experience.

  • The crew-aircraft sensemaking model offers a description of the Data/Frame (D/F) theory (Klein et al. 2007) as part of a joint system, including the interactions between the two pilots and the aircraft systems. Further, the model highlights the retrospective and prospective processes of sensemaking by illustrating the relationships between frame construction, expectations (anticipatory thinking) and taking action.

  • Identification of previously found sensemaking activities, as presented in the D/F model. The cases presented in this paper further demonstrate the complexity and the “messiness” of the re-framing process, as it involves sensemaking activities performed concurrently or within a very short time frame, suggesting that activities should be investigated as joint activities in the re-framing process.

  • Sensemaking activities have been identified that are not part of the original D/F model: rapid frame-switching and abandoning the search for a frame. Both activities are closely connected with the ability to stay in control following unexpected events. Rapid frame-switching requires an action within a very short time frame as a response to an external event and represents the critical ability of pilots to quickly switch frames to manage surprise in the cockpit. Abandoning the search for a frame is a strategy to cope with uncertainty: it is the decision to stop an active search for a coherent frame. The strategy used by the pilots is to simplify the system configuration by turning the automation off so that a different, known frame can be applied, thus switching the goal from making sense of the situation to controlling the aircraft.

  • The findings raise important issues regarding pilot training programmes. Training programmes today often focus on aiding pilots to tackle specific known problems through procedures, and do not generally train the processes involved in tackling problems following unexpected events, such as problem detection, problem identification and deciding on a course of action. Findings in this study suggest a need for training programmes and examiners to support pilots in better understanding the re-framing process and the factors which may facilitate or hinder it.

Based on these findings, we suggest three main areas to improve pilot abilities to cope with unexpected events. For each area, we have outlined critical research questions:

  1. Further investigation into what enables and obstructs the re-framing process.

    Frame-breakers: What are critical frame-breakers (i.e. relevant cues or data that allow questioning of the current frame) that allow pilots to update their frames? What factors (patterns) obstruct and enable the detection of abnormalities and subtle cues? This includes, for example, the coupling between sensors and symptoms, what information is (not) trusted, and differences in how experts and novices detect symptoms.

    Re-framing strategies: What strategies do pilots use to make sense of situations where data elements are not clearly specified (see, e.g., Rankin et al. 2014)? Are there particular performance criteria that are used? How can the joint crew-automation system facilitate the elaboration and matching of frames? How can display design facilitate frame construction? What are key indicators (anchors) of particular problems that can help pilots identify the right frame?

    Level of system knowledge: What are the frames of system knowledge today? What are the patterns of breakdown due to oversimplifications (i.e. simplified models of the automated systems)? How do they manifest in surprise situations? What frames do experts have that novices lack? Are frames similar or very different between pilots? Do some (interactions between) systems create more difficulties than others (implying that frames are not well connected)? A distinction made by the interview participants is that it is necessary to have a model of why the systems act in a particular way, but not necessarily of all the intricacies of how. The data provided by automated systems must facilitate frame construction as well as guide pilots in breaking frames when failures occur, to allow a well-functioning interplay between crew and automation.

  2. Exposure to surprise situations in training to develop re-framing strategies.

    Training programmes: How can training programmes help prepare crews for the unexpected? Industry focus today is on anticipating and mitigating variations, faults and failures at a system level, and crews have little exposure to unexpected events. Training programmes are focused on handling known problems. Studies have shown that building elaborate frames through experience is a critical part of being able to separate important cues from “noise”, identify anchors to retrieve frames, and use frames to generate appropriate action (in a timely manner). By including more challenging and unexpected situations in pilot training, frames and re-framing strategies can be developed. More elaborate frames allow pilots to anticipate (i.e. build expectations) and thus mitigate surprises. Industry should acknowledge that not all situations can be prepared for through system design and procedures, and should prepare pilots to cope with situations that fall outside of textbook scenarios.

  3. Identification of control strategies as part of the re-framing process when aspects of the situation are not clearly specified.

    Control strategies: What strategies can be used to regain control in situations governed by uncertainty? When constraints such as time or expertise do not allow the necessary elaboration of a frame, what are the key factors in constructing new frames (and abandoning the search for frames) to control the aircraft? What are the strengths and vulnerabilities of such strategies? In following up the results of this study, an initial examination has been carried out into the potential benefit of training flight crews in basic anticipatory strategies and providing a strategy to assist them in responding to an unexpected event (Field et al. 2015; Woltjer et al. 2015).