
1 Introduction

In our increasingly technological world, the influence of automation can be felt in every aspect of everyday life. At work or at home, human beings are accustomed to interacting with sophisticated computer systems designed to assist them in their activities. Automation certainly makes some aspects of life easier (by allowing people with disabilities to move and communicate), faster (with the generalization of computerized devices and the resulting gains in productivity) and safer (the accident rate in aviation and high-risk industries has dropped thanks to the implementation of automated systems [1]). This trend is far from over; no one would be surprised, in the coming years, to see a driverless car stop to let a pedestrian cross the road.

Automation is often a suitable solution for functions that humans cannot perform safely or reliably. Previous studies have demonstrated that high levels of automation reduce human operator workload and increase productivity [2]. Nonetheless, the interposition of automated systems between human operators and processes transforms the nature of human work. As a matter of fact, the role of human actors tends to shift from direct control to supervision. This change is far from trivial and creates new burdens and complexities for the individuals and teams of practitioners responsible for operating, troubleshooting and managing high-consequence systems.

2 Automation and OOL Performance Problem

When new automation is introduced into a system, or when the autonomy of automated systems increases, developers often assume that adding “automation” is a simple substitution of machine activity for human activity (the substitution myth, see [3]). Empirical data on the relationship between people and technology suggest that this is not the case and that traditional automation has many negative outcomes and safety consequences stemming from the human out-of-the-loop (OOL) performance problem [4, 5].

The OOL performance problem has been attributed to a number of underlying factors, including human vigilance decrements [6, 7], complacency [8, 9] and loss of operator situation awareness (SA) [10, 11]. The cognitive engineering literature has discussed at length the origins of vigilance decrements (e.g., low signal rates, lack of operator sensitivity to signals), complacency (e.g., overtrust in highly reliable computer control) and decreased SA (e.g., more passive rather than active processing, and differences in the type of feedback provided) in automated system supervision, and has established associations between these human information processing shortcomings and performance problems.

As a major consequence, the OOL performance problem leaves operators of automated systems handicapped in their ability to take over manual operations in the case of automation failure. In particular, the OOL performance problem creates a set of difficulties, including a longer latency to determine what has failed, to decide whether an intervention is necessary and to find the adequate course of action [6]. The three following incidents, from the aviation, nuclear power and finance domains, illustrate such difficulties.

  • Situation 1: Aviation

    The first example concerns Air France Flight 447. On May 31, 2009, the Airbus A330 took off from Rio de Janeiro bound for Paris. Four hours after departure, and due to weather conditions, ice crystals obstructed the Pitot probes. As a result, speed indications were incorrect and led to a disconnection of the autopilot. Following this disconnection, the crew was unable to diagnose the situation and apply the appropriate procedure. The alternating appearance and disappearance of indicators and alarms, coupled with high stress, probably prevented the crew from correctly evaluating the state of the system and acting appropriately (for the official report, see [12]).

  • Situation 2: Nuclear Power Plant

    The second example concerns the 1979 incident at the Three Mile Island nuclear power plant (Pennsylvania, USA). A valve used to regulate the water inlet of the nuclear core was stuck open, although a light on the control interface indicated that the valve was closed. In fact, this light did not indicate the actual position of the valve but only that the closure command had been issued. Because of the ambiguous information provided by the control interface, the operators were unable to correctly diagnose the problem for several hours [13]. During this period, a sequence of further failures and inappropriate actions led to a partial meltdown of the nuclear core. Fortunately, the radiation releases were not large enough to cause health or environmental damage, and a major nuclear disaster was avoided.

  • Situation 3: Stock Market

    In a completely different domain, we can mention one of the costliest computer bugs on record. Knight Capital is a firm specialized in high-frequency trading (an automated technique used to buy and sell stocks in fractions of a second). On August 1, 2012, the firm tested a new version of its trading algorithm. Due to a bug, the algorithm started pushing erratic trades. Because supervisors were not aware of the system’s actual behavior, it took them about an hour to understand that the problem came from the algorithm, which cost Knight Capital about 400 million dollars [14].

Although these cases come from different domains, they all highlight that when automatic equipment fails, supervisors seem dramatically helpless in diagnosing the situation and determining the appropriate solution, because they were not aware of the system state prior to the failure. Numerous experimental results confirm such difficulties. For example, Endsley and Kiris [4] provided evidence that performance in a failure mode following a fully automated period was significantly degraded compared to a failure mode following fully manual control. Merat and Jamson [15] reported similar conclusions: in a driving simulation task, they demonstrated that drivers’ responses to critical events were slower in the automatic driving condition than in the manual condition. Because automation is not powerful enough to handle all abnormalities, this difficulty in taking over is a central problem in automation design. Moreover, with the development of autonomous cars, which should reach our roads within a few years, everyone (not only expert operators) could be affected by such difficulties, and the issue becomes universal.

These difficulties in takeover situations have been identified for a long time [1, 16], and different solutions have been proposed by the human factors community. Some of them consist in training human operators to produce efficient behavior in case of system failure. However, recent dramatic events indicate that such training does not ensure efficient takeover even in trained situations, while unexpected failures are not covered by this approach at all. Other solutions propose to manipulate the level of system automation, sharing authority between the automation and the human operator (for example, MABA-MABA methods or adaptive function allocation). This approach rests on the hypothesis that new technologies can be introduced as a simple substitution of machines for people, preserving the basic system while improving some output measures. Unfortunately, this assumption is only a vague and distorted reflection of the real impact of automation: automation technology transforms human practice and forces people to adapt their skills and routines [17].

While these traditional approaches have the virtue of partially decreasing the negative consequences of automation technology, clear solutions to overcome these takeover difficulties are still missing [18, 19]. We argue that the key for designers is to focus on how automation technology transforms the human operator’s activity and on the mechanisms of control involved in a supervisory task. We assume that answering these questions remains a crucial challenge for the successful design of new automated systems. In this paper, we propose a theoretical framework to explain this transformation.

3 OOL Performance Problem and System Predictability

As assumed by Norman [1], the lack of system predictability is certainly a central point in the comprehension of the OOL phenomenon and the associated takeover difficulties. With advances in technology, current man-made complex systems tend to develop cascades and runaway chains of automatic reactions that decrease, or even eliminate, predictability and cause outsized events [20]. This is what we will call system opacity: the difficulty for a human operator to trace the link from the system’s intentions to its actual state and to predict the sequence of events that will occur.

However, such opacity is far from inevitable. As pointed out by Norman [1], the problem with automation lies more in its inappropriate design and application than in overautomation per se. In particular, the lack of continual feedback and interaction appears to be the central problem. Over the last years, computational and experimental evidence has demonstrated the central place of feedback in the mechanisms that govern the control of our actions [21–25]. When people perform actions, feedback is essential for the appropriate monitoring of those actions. However, adequate feedback to the human operator is most of the time absent when the system is automated. Without appropriate feedback about the state of the system, people may not know whether actions are being performed properly or whether problems are occurring. As a result, when automatic equipment fails, people are not able to detect the symptoms of trouble early enough to overcome them.

Interestingly, system engineers are blind to the paradox that we have never had more data than we have now, and yet less predictability than ever. To overcome such opacity, they have to provide adequate feedback about the state of the system. However, how to design a predictable system remains a difficult problem. A possible approach is to focus on how humans understand and control their own actions. Indeed, we can assume that operators interpret the intentions and the action outcomes of a system with their own “cognitive toolkit”. Thus, understanding how this “cognitive toolkit” works could be relevant for proposing design principles for potentially controllable systems.

4 Science of Agency as a Relevant Framework

When we act, we usually have a clear feeling that we control our own actions and can thus produce effects in the external environment. This feeling has been described as “the sense of agency” [26] and is recognized as an important part of normal human consciousness. Most people can readily sort many events in the world into those they have authored and those they have not. This observation suggests that each person has a system for authorship processing [27], a set of mental processes that monitors indications of authorship to judge whether an event, action, or thought should be ascribed to the self as a causal agent [28]. Laboratory studies have attempted to shed more light on this mechanism, and empirical data have accumulated in recent psychology [29, 30], psychopathology [31, 32] and neuroscience [33, 34]. Interestingly, a variety of sources of information (e.g., one’s own thoughts, interoceptive sensations, external feedback, etc.) may be involved in authorship processing. Several indicators have already been proposed, including body and environment orientation cues [35], direct bodily feedback [36, 37], direct bodily feedforward [38, 39], visual and other indirect sensory feedback [40], social cues [41], agent goal information [42] and own-behavior-relevant thought [43–45]. Although the mental processes contributing to the sense of agency are not fully understood at this time, the different approaches propose that we derive a sense of being the agent of our own actions from a cognitive mechanism that computes the discrepancies between the predicted consequences of our actions and their actual consequences, similarly to action control models [22, 24, 43]. Thus, predictability appears as a key notion in the mechanism of agency, and researchers have demonstrated that efferent signals, re-afferent signals, higher-order knowledge and beliefs all influence it.
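
To make this comparator account more concrete, the sketch below is a minimal illustration in Python; the forward model, the state representation and the tolerance value are our own assumptions for the example, not elements of the cited models. It computes an agency signal as the discrepancy between the consequences predicted for an intended action and the consequences actually observed.

```python
import numpy as np

def predicted_outcome(state, action, forward_model):
    """Predict the sensory consequences of an intended action
    using an internal forward model (here, any callable)."""
    return forward_model(state, action)

def agency_signal(predicted, observed, tolerance=0.1):
    """Comparator: the smaller the prediction error between predicted
    and observed consequences, the stronger the (modelled) sense of agency."""
    error = np.linalg.norm(np.asarray(predicted) - np.asarray(observed))
    return float(np.exp(-error / tolerance))  # 1.0 = perfect match, near 0.0 = mismatch

# Illustrative forward model: the action simply displaces the current state.
forward_model = lambda s, a: np.asarray(s) + np.asarray(a)

state, action = [0.0, 0.0], [1.0, 0.5]
prediction = predicted_outcome(state, action, forward_model)

self_generated    = agency_signal(prediction, [1.0, 0.5])  # outcome matches prediction
machine_generated = agency_signal(prediction, [0.2, 1.4])  # the system did something else

print(self_generated)     # close to 1: high modelled agency
print(machine_generated)  # close to 0: low modelled agency
```

In such a scheme, a large prediction error, as would be produced by an opaque automated system acting on its own, directly translates into a weak agency signal.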

Interestingly, Pacherie [46] argued that the mechanisms underlying the sense of agency for individual actions are similar to those underlying the sense of agency one experiences when engaged in joint action. That is, the sense of agency in joint action is based on the same principle of congruence between predicted and actual outcomes. Sebanz, Bekkering and Knoblich [47] defined a joint action as “any form of social interaction whereby two or more individuals coordinate their actions in space and time to bring about change in the environment”. Moreover, one of the criteria for the accomplishment of a joint action is the ability to predict others’ actions and their outcomes. Predicting others’ actions recruits areas involved in the human self-action control system [48]. This capability of prediction is crucial for achieving efficient coordination in a joint action [49–51]. Vesper et al. [51] demonstrated that participants involved in a joint task made themselves more predictable by reducing the temporal variability of their actions in order to facilitate coordination.

If we consider that predictability in human-system interactions and in human-human interactions operates in the same way, we can assume that system opacity could dramatically change our experience of agency. This hypothesis echoes Baron’s claim:

“Perhaps the major human factors concern of pilots in regard to introduction of automation is that, in some circumstances, operations with such aids may leave the critical question, who is in control now, the human or the machine?” [52].

Recent empirical data have confirmed such a degradation of our experience of agency in the presence of automation [53]. In particular, by manipulating the level of automation in an aircraft supervision task, we have demonstrated a decrease in agency (for both implicit and explicit measures) concomitant with the increase in automation. Consequently, we assume that one way to design a more controllable interface is to consider supervision as a joint action between a human operator and an artificial co-agent following the same principles as a biological co-agent. This proposition echoes that of Norman [1], who argued that what is needed is continual feedback about the state of the system, in a normal, natural way, much in the manner that human participants in a joint problem-solving activity discuss the issues among themselves. The theoretical background of agency will make it easier to achieve this objective. This is why we argue, in this paper, for a mediated agency: an approach to human-machine interaction that takes into account how the information provided by an automated system influences how much an operator feels in control.

5 Agency Offers Tools and Measures

As argued above, system opacity is certainly a major cause of OOL performance problems. To overcome such opacity, interaction designers could use the tools and measures provided by the framework of agency (and, by extension, that of joint agency). Examples of such tools can be derived from Daniel Wegner’s theory of apparent mental causation [43]. His theory provides clues to determine the nature, form and timing of appropriate feedback. In particular, Wegner proposed that when a thought occurs prior to an action, is consistent with the action, and the action has no plausible alternative cause, we experience the feeling of consciously willing the action. These are what he called the priority, consistency and exclusivity principles. System engineers could use these principles to shape adequate feedback and make automation more predictable; we already know that this capability of prediction is crucial for achieving efficient coordination in a joint action [49–51]. Recently, Berberian and colleagues provided evidence that these principles could be used to design human-machine interfaces capable of compensating for the negative effects of latency on action control [54]. We assume that automated systems following such principles would make operators more of an “agent”, and thus less affected by the OOL performance problem; they would then be faster and more reliable when taking over an automated system in case of failure. A simple illustration of how these principles could be checked at design time is sketched below.
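
As a purely illustrative sketch (the event fields, the 250 ms window and the function names are our own assumptions, not values taken from Wegner’s work or from [54]), the three principles could be operationalized as checks on a candidate feedback event, so that a designer can see at a glance which principle a given interface design would violate.

```python
from dataclasses import dataclass

@dataclass
class FeedbackEvent:
    command: str                # action the operator issued (e.g. "heading 270")
    displayed_outcome: str      # what the interface reports the system did
    delay_ms: float             # time elapsed between command and feedback
    other_agents_active: bool   # did the automation act on its own in the meantime?

def supports_agency(event: FeedbackEvent, max_delay_ms: float = 250.0) -> dict:
    """Check a candidate feedback design against the three principles.

    Returns one boolean per principle so a designer can see which one a
    given design would violate. The 250 ms window is an illustrative
    assumption, not an empirical value.
    """
    return {
        "priority":    event.delay_ms <= max_delay_ms,             # feedback promptly follows the command
        "consistency": event.command == event.displayed_outcome,   # feedback matches the command
        "exclusivity": not event.other_agents_active,              # no competing cause for the outcome
    }

# Example: prompt, matching feedback with no competing automated action.
ok = FeedbackEvent("heading 270", "heading 270", delay_ms=120.0, other_agents_active=False)
print(supports_agency(ok))  # {'priority': True, 'consistency': True, 'exclusivity': True}
```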

Another contribution of this framework to the human-machine domain is the use of measures of agency. Although quantifying the degree of agency remains difficult, several measures have been proposed. Interestingly, we can distinguish two kinds of measures (explicit vs. implicit), potentially referring to two separable agency processing systems [26, 55]. The explicit level refers to the “judgment of agency”, that is, the capability to attribute agency to oneself or to another. This judgment is influenced by beliefs and external cues [56]. Classic declarative methods, such as surveys and self-reports, can be used to evaluate this aspect of agency. The implicit level refers to the “feeling of agency”, a low-level feeling of being an agent, mainly based on sensory-motor cues. An implicit marker of agency has been proposed by Haggard and colleagues, namely the intentional binding (IB) effect. In a key study, they noticed that human intentional actions produce systematic changes in time perception: the interval between a voluntary action and an outcome is perceived as shorter than the interval between a physically similar involuntary movement and the same outcome event [57]. This phenomenon has been widely reported (for a review, see [58]). Although our understanding of the underlying mechanisms is still incomplete, IB may provide an implicit window into human agency. Over the last decade, many studies of agency have been published; some used one method or the other, and some used both, but always in simple, easy-to-control paradigms (for example, a visual signal appears and the participant has to push a button). Regarding the applicability of these methods to design processes, future work will have to determine their efficiency in the more complex situations encountered in the human-system interaction field. Berberian et al. [53] used both measures to evaluate the variation of agency as a function of the level of automation in an aircraft supervision task with different autopilot settings. They provided evidence that intentional binding is sensitive to graded variations in control associated with automation and is related to explicit judgments. This study demonstrates that IB can be used in richer and more complex paradigms. These measures should therefore be combined to develop a framework for evaluating the OOL performance problem. Hence, ergonomists should bear in mind the different components of the human sense of agency, and the different ways of measuring them, in order to correctly evaluate whether an interface is sufficiently acceptable and controllable.
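
For illustration only, intentional binding can be quantified as the difference between the mean interval judged after an involuntary (e.g., machine-triggered) movement and the mean interval judged after a voluntary action. The sketch below uses hypothetical interval estimates in milliseconds; the data, the fixed 250 ms delay and the function name are assumptions for the example, not values from [53] or [57].

```python
from statistics import mean

def intentional_binding(voluntary_ms, involuntary_ms):
    """Intentional binding score: how much shorter the action-outcome interval
    is judged after a voluntary action than after an involuntary movement.
    Positive values indicate binding, a signature of implicit agency."""
    return mean(involuntary_ms) - mean(voluntary_ms)

# Hypothetical interval estimates (ms) for a fixed 250 ms action-outcome delay.
voluntary   = [180, 200, 190, 210, 175]   # the operator triggered the outcome
involuntary = [240, 255, 235, 260, 250]   # the outcome followed a machine-triggered movement

print(intentional_binding(voluntary, involuntary))  # 57 ms of binding
```

Used in a supervision paradigm, such a score could be computed for each automation level and compared with explicit judgments, in line with the approach of Berberian et al. [53].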

6 Conclusion

We have seen that, despite all the benefits of automation, there are still issues to be addressed (loss of control, less efficient monitoring, etc.). This clearly establishes that some problems in human-machine interaction stem from decreased predictability, due to increased complexity. Designing relevant feedback and system-operator communication is clearly the key to avoiding these problems, but it remains a challenge. We have proposed that using principles, tools and measures from the science of agency should lead to a new methodology. “Being an agent” is a notion widely studied in neuroscience, psychology and philosophy. It would also be relevant, for the human-machine domain, to use a framework that takes into account the difference between self-generated actions and those generated by other sources. This is why we argue for a mediated agency framework: an approach to human-machine interaction that takes into account how the information provided by an automated system influences how much an operator feels in control. We suggest that applying the science of agency to the field of human-machine interaction may be fruitful for elaborating concrete recommendations for designing automated systems supervised by operators “in the loop”, abating the negative consequences of the OOL problem while maintaining performance in the normal range. Measuring (explicitly or implicitly) the feeling of control may be important for evaluating different automated devices, and may also be relevant for evaluating operators’ performance in supervisory tasks. In the end, a better understanding of how the sense of agency evolves in interactions between humans and automated systems would certainly refine the different models of agency and, more generally, models of control.