3.1 Introduction

The purpose of the previous chapter and the present one is to take inspiration from the military about the impact of autonomous systems (high level of automation, AI) on safety issues in civilian organisations. The military has long been a source of inspiration for civilian organisations, providing many reflections and advances in terms of safety. Regarding the impact of autonomous systems on safety, the military can be taken as an extreme case in many respects, given (a) the huge risks involved in military operations, (b) the complex, advanced sociotechnical systems that modern armies have become and (c) the speed and extent to which they are currently investing in the development and implementation of autonomous systems.

Of course, we are not advocating that civil organisations imitate the military in these matters. Taking the military evolutions analysed in the previous chapter as a starting point, our purpose is to discuss the extent to which similar evolutions will apply to industrial organisations and to what extent they will have implications for safety.

3.2 Six Key Points

We extracted six key points from Gérard de Boisboissel’s contribution in Chap. 2 (five from the chapter and one from later discussions with the author). For each of these points, we add comments from two angles: (1) human–system interaction (a micro-perspective) and (2) organisational reliability (a meso-perspective).

3.2.1 Inevitability

Migration from remote control of machines to highly autonomous systems is inevitable.

Though the magnitude, pervasiveness and speed of such a migration may vary greatly from one industry to another, we agree with this observation. This is why at-risk civilian organisations should address the following points.

3.2.2 Responsibility and Control

Humans should remain masters of the action because, unlike the machine, they can give meaning to the action and take responsibility for it. The military leader must be able to regain control of a robotic system at any time and potentially cause it to leave the “autonomous mode” that humans themselves had authorised it to enter.

This is the eternal question of task allocation between humans and machines. Humans can stay in control of a situation even though they delegate information-processing or execution tasks to an autonomous machine, but only under specific conditions, such as remaining fully engaged in monitoring how the task is being executed. However, if the manager or operator temporarily delegates the complete task, including the decision-making, it will be difficult to regain full control of the situation immediately when an event occurs. Even if the machine can deliver detailed information, the manager or operator will face an abundance of “cold” data to integrate. Regaining mental and emotional control of the situation will be very demanding in terms of cognitive and sensory resources.

Another question is: will humans always be willing to retake control and thus engage their own responsibility? The answer might not be straightforward, especially in civilian work contexts. Some autonomous systems leave no choice to humans (e.g. the autopilot disengaging because of incoherent data inputs). Such systems are able to diagnose their own malfunctioning. When this capacity is lacking or severely restrained, the initiative belongs to the human: regaining control will thus be a decision made by the human agent. We should not assume that all humans will decide to regain full control, at once, in all cases. The decision-makers may have (good or bad) reasons for not doing so. Among them is the anticipation of blame or, more generally, the expected attribution of responsibility between the system and the human. Recent research on this matter has established that failing humans and machines are not judged in the same way [7]: machines are judged according to the magnitude of the harm, while humans are judged according to their intentions. This important finding does not provide a clear answer to our question, but it certainly suggests that, beyond the general human-responsibility principle, regaining control might involve complex human decision-making processes.

3.2.3 Trustability

Autonomous systems should be trustable, explainable and predictable. For humans to trust autonomous systems, adaptive and self-learning systems must be able to explain their reasoning and decisions to human operators in a transparent and understandable manner and must behave consistently over time and circumstances.

Explaining may look like a straightforward notion, but it is quite the reverse. An explanation is not a property of texts, narratives, figures or diagrams. Rather, it stems from a relationship between (1) the text, narrative, figure or diagram that is presented to the recipient, (2) the recipient's cognitive processes, (3) the context and its immediate demands, and (4) the recipient's goals in this context [8, p. 86]. This implies that explainability has to be designed, and such a design has to incorporate user diversity and situation variability. For example, accuracy and thoroughness are not necessarily needed to provide good explanations, that is, explanations that the persons involved will find good. In fact, in many cases providing biased or incomplete explanations might be more effective in making the system explainable [8, p. 89]. Given the resulting challenge, and given that today’s knowledge about explainability and its design is very limited, explainability failures will likely occur, with significant consequences for human trust in systems. More generally, we can expect more failures involving some form of knowledge issue (epistemic accidents, as coined by Downer [4]; see also Antonsen, this volume).

Beyond explanatory information, requirements for synchronising humans and autonomous systems need to be developed to foster efficient human–machine teaming. Inspired by human–human cooperation principles [2], human and machine should “share a common context and background” in order to elaborate a common strategic action plan and adjust or redirect its execution in a coordinated way.

To overcome the potential ambiguity of a one-way communication line from human to machine and enable the machine to support the human efficiently, a human–machine dialogue about information, rules or ethics could be necessary [5]. The machine has to match human intentions to reduce the risk of violating ethical rules or heading in wrong directions because of design errors.

3.2.4 Self-Learning Machines and Human Training

As military leaders will be responsible for the proper use of a self-learning machine in the field, they must supervise the learning process prior to its regular use and ensure that the machine remains under control over time. To be resilient, leaders and operators need to be trained with the technical equipment AND without it (or in degraded mode). This is very costly.

On the one hand, self-learning in the field can be a way for the machine to capture the variability of operations, the diversity of situations and operators’ experiences. On the other hand, a machine trained by humans can be risky if human perception and representation biases are integrated into it.

More generally, it should be recalled that, though widely advocated, the need for extensive human training is unfortunately still largely ignored by industrial organisations. Autonomous systems only reinforce the need to question the usual forms of safety training and to go far beyond mandatory training [3].

3.2.5 Cognitive Overload

Delegating is one way of avoiding the military leader’s cognitive overload. One possible solution is to create a “digital assistant” that can support the leader in the information-processing steps.

Machines are becoming more and more sophisticated. Human–machine interaction either requires specific skills or is so complex that it can divert operators’ attention away from their main task, reducing the added value of the human operator. Cognitive overload is much less common in civilian organisations than in combat. Yet the idea of a specialised technician assisting the decision-maker still makes sense and is in fact rather common: think of the radiologist and the technician operating the scanner, for instance. In real life, however, what will happen between the decision-maker and the assistant? Barley’s classic work reminds us that (1) the actual division of labour may depart significantly from the designed work organisation and (2) the actual division of labour results from complex social processes that lead to different, largely unpredictable outcomes in different contexts [1]. Alleviating the cognitive load is thus traded against sociopolitical processes. Better mastery of action is not a guaranteed outcome.

3.2.6 Empowerment Paradox

Autonomous systems generally imply an empowerment of operators relative to leaders, because operators benefit from increased capacities and/or may dedicate themselves to higher-level tasks. However, autonomous systems can also be designed so that leaders are able to recover full control at any moment, taking it from the hands of the operator. Empowered operators can thus be made powerless at any moment.

Autonomous systems displace the division of control and transform its dynamics. When and why could leaders be tempted to take direct control? With what consequences? The empowerment paradox reminds us that, if human trust in a system is certainly an issue, human trust in fellow humans will remain a major issue too. What would have happened at Fukushima Daiichi if Tepco managers and Prime Minister Kan had had the possibility to take direct and full control of the onsite operations and push aside Yoshida, the local director? They probably would have taken control and made major mistakes. Managers, just like military leaders, are prone to illusions of control (i.e. the tendency to overestimate one’s degree of control over a course of action). Autonomous systems reinforce this bias, if only because, by design, increased control is one of their strongest promises, as advertised by those who manufacture and sell these systems to top executives. The issue is not intractable, though. Some professional experts are granted a degree of autonomy that can never be overridden by any hierarchy. Similar designs can be implemented for operators or lower-level managers working with an autonomous system. Even if top managers were still technically able to take full control, doing so would then constitute a violation and expose them to dire consequences in case of failure.

The articulation of autonomous systems and human beings should be considered in the context of hierarchy and power in organisations. Autonomous systems could very well revive the old dream that was fuelled by the development of the first computers: humans would be in control at the very top of the organisation (overall objectives and strategy) and at the very bottom (execution of tasks), while all the intermediate levels would be under the power of automatic systems. To a certain extent, this was already Taylor’s ambition, with a workforce of engineers equipped with methods for designing work processes instead of computers or autonomous systems. The sociology of organisations has taught us that employees will fight to avoid being replaced by autonomous systems, or to avoid the loss of status and independence that autonomous systems might imply. Many will probably lose this fight. However, skilled workers and experts working through or with autonomous systems will hold the practical knowledge required to make these systems efficient and safe. They will use this knowledge to their own advantage in power struggles with their bosses and other constituencies. Just as front-line operators quickly learn to game the rules and procedures they are supposed to follow, knowledge workers will likely engage in various covert manoeuvres to game the autonomous systems and retain control over key stakes for their occupational groups [6]. Wherever autonomous systems are expected to improve safety in comparison with humans, this political struggle is likely to bend their expected functioning, just as it does today. Humans will game autonomous systems just as they game the rules of bureaucracy, thereby creating safety issues. Conversely, in the same way that gaming procedures or systems can be good for safety in some situations, gaming autonomous systems will sometimes prove good for safety. The overall outcome is unpredictable.

3.3 Final Comments

Organisations are already, in some way, autonomous systems. Military leaders, managers and operators are already working with (or within) automated, autonomous systems. The machine stops, the operator calls maintenance, maintenance is provided, and the machine works again. Maintenance is an autonomous system for the operator, even though they do not think of maintenance in these terms, but rather as the individuals providing the service. There is no difference of nature between working with an autonomous system and relying on an operational unit, a functional department, a piece of software or a combination of all these—which is what everybody does every day in today’s organisations. The digital autonomous systems that will be implemented in the next 10 or 20 years will not change the nature of actions produced in and by organisations. Consequently, we will encounter the same types of safety issues that we have encountered to date. So nothing will change. And yet, of course, everything will change because these systems will profoundly affect the division of work, the distribution of knowledge, the coordination processes, the political balances and many other key variables in the functioning of organisations and in the achievement of safety. Same game, same rules, different players and maybe—that is the question—different outcomes. As K. E. Weick wrote: “Planes don’t fly. Organisations fly airplanes”. Similarly, organisations run autonomous systems that run autonomous systems. Maybe one day autonomous systems will run organisations, but this is not likely in the near future.

One key implication of this view is that the upcoming invasion of work organisations by autonomous systems should be viewed (also) through the lens of the social sciences. Indeed, the industrial world seems to be calling on the social sciences more than in the past to question the digital transformation and to identify the issues it raises for human operators and organisations. It would be helpful to consider carefully whether and how these evolutions can be accepted, or not, and embodied by operators.