
13.1 Changes in the World and Changes in Minds

Like many others, this book starts from the premise that the world is undergoing a radical, global change and that, on the scale of the history of societies, this change is occurring at an extremely fast pace. In fact, everything is changing. The long-announced global warming is confirming its inexorable nature and beginning to show its power to destabilise habitats, the economy, ways of life and relationships to the environment. The world population will continue to grow strongly for decades, particularly in Africa, while stagnating and ageing considerably in the most economically developed regions, and it will inevitably redistribute itself through mass migrations. The value production chain continues to be relocated offshore to countries with cheap labour. Its organisations are becoming globalised, financialised, fragmented and complexified into increasingly interdependent networks. Galloping digitalisation, virtualisation and machine/deep learning, along with mass connectivity and data processing, are transforming attitudes towards the company, production and work. The financial stakes are so huge, and take such a high priority, that for the 737 Max the top management of Boeing imposed on its legendary engineering team technical choices that defied common sense and the basic rules of design. The Big Five are becoming the leaders of the global economy; their stock market value far exceeds the GDP of France. That of Tesla has passed the trillion-dollar mark, making it worth one hundred times more than Renault and more than all the other car manufacturers in the world combined. Elon has become a popular first name.

The list does not end there, and the deeper meaning of these transformations is a source of debate. But while the trajectory is not yet clear, one thing is certain: the scale and momentum of the changes underway are those of the major ‘revolutions’ in history. In other words, this is a metamorphosis of society. Based on this observation, this book poses the question: “What will be the impact of this metamorphosis on industrial risk management strategies and, more precisely, on the place of human actors in these strategies?” And incidentally: “What would need to be done, today or tomorrow, so that in the future the industrial sector continues to manage its risks to the standards expected by society?”

These questions are difficult for two reasons: because prediction is a difficult art, particularly in a period of radical change, and because the very notion of a safety strategy remains unclear, or even contradictory in some respects. The introduction to this book highlighted this by putting in perspective the official safety model, based on predetermination and total control, and a practical model which more realistically integrates uncertainties and necessary adaptations. And the ‘objective’ world is not the only one changing. Our mental representations of it are changing too. The models are changing; the theories themselves are undergoing major ‘revolutions’. And in this dual process, the changes in ‘reality’ and the changes in ‘theory’ interfere with one another. As seen with COVID-19, certain unshakeable certainties regarding ‘budgetary orthodoxy’ or the benefits of globalisation were… shaken. What was ‘impossible’ became necessary. Conversely, this blow to the theory will also generate or facilitate changes to the economic ‘reality’ post-COVID-19.

The same is true in the field of safety. More particularly, as regards the place of the human operator in safety, industrial views and practices have changed a great deal over the last three decades. The integration of knowledge from the human sciences into the safety model has profoundly transformed it. The view of the operator and of ‘human reliability’ has changed. The initial equation (safety = technical reliability + obedient operator) has become more complex. The operator has become a ‘fallible reliability agent’ in an environment that is itself recognised as less predictable, requiring some adaptations in real time. Hence the call for an ‘intelligently obedient’ operator, and for more open, cooperative leaders who are willing to listen.

13.2 The Future of the ‘Compliant yet Intelligent Operator’ Injunction

One can conjecture about how this model will be affected by the transformations underway, since we already know most of the processes at work: automation, virtualisation of activities by putting distance between the human operator and the real-time physical process, replacement of the human operator with robots, transformation of the actor into a supervisor. These changes have already occurred in several industries, notably in aviation. They will spread to other industries and, within each of these, to occupations higher up in the pecking order (e.g. doctors rather than nurses). This will mobilise new, more disruptive technologies and will involve human–machine interactions complexified by artificial intelligence (AI) and machine learning. Overall, the remaining operators will find it even more difficult to construct a ‘mental model’ of the machine and to predict and understand what it is doing. We already know the associated negative effects: overconfidence in the machine; loss of comprehension; issues with alertness; loss of basic know-how, which remains crucial in degraded mode. We also know that these effects can be partly controlled and that the final outcome is most often favourable, or even highly favourable, to safety.

The most reasonable conjecture is thus that the number of safety issues associated with the reliability of frontline operators will continue to drop significantly when it comes to designed-for and normal operation. One corollary is that the onus should shift from the users of the systems to their designers unless, like Tesla and their colleagues, these designers manage to convince everyone that the operators are always ‘in charge’ of said systems. However, the capacity of operators to intervene and take over control in ‘beyond design basis’ situations will also diminish drastically. And it will become less and less possible for operators to offset this tendency with a better knowledge of the systems they are handling, as these will have become too complex and inexorably ‘esoteric’ in degraded mode. Should we still feel the need to maintain some level of control over ‘beyond design basis’ situations, the ‘sense-making’ ability of the operational team will need to be improved by giving them real-time access to the necessary network of expertise, and the systems will have to be designed to include a mode of operation which does not require access to causality, similar to the ‘state-based control’ used in nuclear plants when proper understanding of the event is lost, in contrast with event-based control. This will imply training designers in complexity, its consequences and its management, a great deal more than they are trained today.
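To make that last distinction concrete, here is a minimal, purely illustrative sketch in Python. The names (PlantState, the thresholds, the responses) are invented for illustration and do not come from this chapter or from any real plant procedure: an event-based controller must first diagnose which initiating event occurred before it can choose a response, whereas a state-based controller derives its actions solely from the currently measured state of the process, which remains possible even when the causal chain is no longer understood.

```python
from dataclasses import dataclass

@dataclass
class PlantState:
    # Hypothetical, simplified process measurements
    pressure: float      # bar
    temperature: float   # deg C
    coolant_flow: float  # kg/s

# --- Event-based control: the response depends on diagnosing the cause ---
def event_based_response(diagnosed_event: str) -> str:
    # Requires a correct diagnosis of the initiating event;
    # an unanticipated or misdiagnosed event leaves the operator without guidance.
    responses = {
        "loss_of_coolant": "isolate leak, start safety injection",
        "loss_of_feedwater": "start auxiliary feedwater",
    }
    return responses.get(diagnosed_event, "no predefined procedure: diagnosis needed")

# --- State-based control: the response depends only on the measured state ---
def state_based_response(state: PlantState) -> str:
    # Acts on symptoms (the current physical state), whatever the cause,
    # so it still yields an action when understanding of the causal chain is lost.
    if state.pressure > 160.0:
        return "reduce pressure towards safe envelope"
    if state.coolant_flow < 50.0:
        return "restore coolant flow by any available means"
    if state.temperature > 330.0:
        return "increase heat removal"
    return "hold: state within safe envelope"

if __name__ == "__main__":
    degraded = PlantState(pressure=172.0, temperature=315.0, coolant_flow=80.0)
    print(event_based_response("unknown_transient"))  # stuck without a diagnosis
    print(state_based_response(degraded))             # still produces an action
```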

But at the level of the organisation, the company, and even more so at the level of society as a whole (public, media, justice, political arena), the safety strategy is still largely perceived as resulting from the capacity to anticipate all situations, to predetermine the right technical and human solutions and to ensure conformity with what was anticipated. All of this through an increasingly detailed formalisation, a ‘rationalisation’ of the system, processes and activities, and through ensuring their quality (“we write down what we do, we do what is written, and this is what protects us against lawsuits if an accident does occur”). In short, a model of a programmed, deterministic and linear machine, controlled by an all-powerful ‘command and control’ system, which knows nothing of ‘beyond design basis’.

13.3 Rise and Fall of a Paradigm Shift

Yet, for the past three decades at least, some scientific schools of thought¹ have proposed other visions of safety, built on the recognition of the dynamic complexity of the sociotechnical systems that constitute the industrial world. This ‘complexity’ means constant and irreducible variability, turbulence, circular causality, nonlinearity between causes and effects, interference, resonance, long-range coupling, the ‘butterfly effect’, etc. In that world, variation is part of the normal state and is the irreducible background noise to the ‘life’ of the system. The underlying metaphor is no longer that of a programmed machine, but that of a living system. Its survival does not imply the absence of deviations (on the contrary, these are an integral part of its evolution), but rather managing them: constantly compensating for them, blocking those that are unfavourable and selecting those that are favourable to adaptation and adaptability. And in this vision, safety is inseparable from the other vital objectives: it is not possible to get food or water without exposing oneself to predators. Resilience can only be thought of in terms of compromise between the different survival needs.
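As a purely illustrative aside (nothing in the chapter calls for it), the short Python sketch below shows the ‘butterfly effect’ mentioned above: in a simple nonlinear system such as the logistic map, two initial conditions differing by one part in a million diverge completely within a few dozen steps, which is the kind of sensitivity that limits any hope of exhaustively modelling complex systems.

```python
# A minimal numerical illustration (not from the chapter) of the 'butterfly effect':
# in a nonlinear system, two almost identical starting points diverge completely.

def logistic_map(x: float, r: float = 4.0) -> float:
    """One step of the logistic map, a textbook example of chaotic dynamics."""
    return r * x * (1.0 - x)

def trajectory(x0: float, steps: int) -> list[float]:
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic_map(xs[-1]))
    return xs

if __name__ == "__main__":
    a = trajectory(0.400000, 40)   # nominal initial condition
    b = trajectory(0.400001, 40)   # perturbed by one part in a million
    for step in (0, 10, 20, 30, 40):
        print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}  "
              f"gap = {abs(a[step] - b[step]):.6f}")
```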

Unlike HOF (human and organisational factors), this systemic vision was not ‘bought’ by industrial safety. At least, not in its entirety, though certain aspects have been partially borrowed. The COVID-19 crisis has trivialised the word “resilience” and sharpened our awareness that the only certainty is uncertainty, i.e. that unexpected and unpredictable things will happen. But as already mentioned, the safety ‘paradigm’ remains essentially tied to anticipation and predetermination. One could even say that it is being reinforced, with an increase in standards and compliance efforts. Incidentally, these are largely validated by the undeniable, and at times considerable, progress achieved in safety over the last decades. And today, a majority of the designers and strategists in the industrial sector are awaiting a new breakthrough in this strategy, thanks to the spectacular advances made in digitalisation, AI, big data, digital twins and deep learning, which they believe should bring a leap forward in modelling, prediction and monitoring capacities.

Yet, the analyses reported in this book clearly indicate that the current socio-technological revolution will generate the intense stress characteristic of great societal transformations and of the adaptability challenges they bring. Europe will probably experience a severe shortage of specialised skills, due to a gap between those produced by its universities and the needs of its industrial sector. The ageing of its workforce will be at odds with the need to have several successive occupations over the span of a career. Its cultural conversion, increasingly in favour of respecting the environment and living more frugally or even ascetically, will undoubtedly not catch on in the rest of the world, at least not initially. Europe thus risks a transfer of the nerve centres of its industries (design, standardisation, financing) to Asia, prolonging the migration of the value production chains. With the fragmentation and globalisation of the industrial sector, the technological innovations underway and those to come will surpass the monitoring and certification capacity of the regulatory bodies meant to guarantee safety, as already illustrated by the challenges of certifying self-learning systems. The divide will widen between those (designers, major players) who understand the algorithms and have access to the data and those who ‘consume’ without understanding. Self-regulation and third parties (insurers, standards bodies, professional alliances) will increasingly replace official authorities. Systemic computer network failures and cybersecurity will replace operator reliability as the central concern of ‘safety’. With the loss of intelligibility and the growing dissociation between benefit and risk, the risk aversion of the public, users and local residents will continue to grow, and with it distrust and suspicion, amplified by social media, in a world that is increasingly esoteric or even magical and has shifted into ‘emergency’ mode, destabilised by climate change and the boomerang effects of ecosystem destruction. More and more risks will become uninsurable.

Beyond this forecast, which will inevitably prove wrong in places and which, it is hoped, errs on the side of pessimism, the socio-technological revolution underway could thus generate a real paradox. By extending the industrial fabric to a global scale while fragmenting it and multiplying the interconnections, the revolution inexorably increases its complexity and thus, by definition, the limits to its modelling, and the uncertainty and ‘fundamental surprise’ potential associated with it. At the same time, there is a growing feeling that computing power is developing faster than the object of the modelling: that with digital twins a future which can be entirely pre-calculated is now within reach, that exhaustive predetermination and total control could be just around the corner, and that there might finally be some kind of end point to safety risk management. In this there is, no doubt, an illusion of the same nature as the belief, at the end of the last century, that the end of history was coming. This does not mean that all the promises of digital technology are false. It is certain that we will have factories, trains, aircraft and nuclear reactors controlled at levels an order of magnitude higher than our current best. But, banal as it is to say, this control will never be total. And most importantly, it will concern local processes, not the entire system. Accidents will become rarer, but increasingly of the ‘black swan’ type.

13.4 The Risk of a Late and Stale Evolution of Safety Management

Thus, everything is happening as though the story of safety were that of a slow ascent through the levels of the organisation, beginning with the machine, the operator and their workstation, continuing at the level of the teams, the workshops, the procedures, then the processes, the departments, the production sites. But everything is also happening as though the safety strategy were always one level behind. The more complex the system becomes, the more the root factors which ‘produce’ the risk and allow its modulation shift to the next level up. We think about the workstation level when it is already the design and the processes that are the issue, about the process level when it is already the company’s strategy that is the issue, and about the strategic decision-making level of a company when it is the global value production chain that is generating instabilities… And while safety studies are carried out for internal and local changes within the company, this is not the case for the major changes affecting the world. Safety is leaving the confines of the company, and its systemic dimension is not being dealt with at the right level. If we are not careful, the future of safety will resemble that of antibiotics: given our sterilisation strategies, the threat of the future is not so much the original pathogen as the fact that our defences are always one mutation behind.

It is therefore important to get out of safety-focused circles in order to talk about safety. The challenges to safety lie elsewhere, in places that safety thinking has barely penetrated. One of the findings of the ‘Strategic Analysis’² that underpins this book is that the major changes in the world are questioned by researchers and specialists from a variety of fields and examined from numerous angles, but not from the safety angle, at least not as a priority. Safety appears as an orphan dimension of this reflection. The major climatic, environmental, economic and geopolitical challenges are also discussed in influential circles, in think-tanks gathering world leaders, at COP summits, at Davos and the like. Conversely, although systemic safety now extends beyond its traditional places of discussion, namely the company and its interactions with public requirements via regulatory bodies, there are few if any places where it is discussed at this broader level. There are few if any publications, meetings or organisations that discuss the impact of the major changes on safety.

Within companies, those in charge continue to think in terms of an industrial safety associated with the ‘internal’ state of the company, even though the boundaries of the latter have burst. Thus, since the major changes are played out in circles where safety is not a topic of discussion, the challenge is to raise safety issues in these places of influence, to create new places of influence where they will be discussed and to reinforce the few that already exist. This will not be easy. The agenda in such places is dictated by the scale and urgency of the issues at stake. It is said that Stalin responded to Pierre Laval, then the French Foreign Minister visiting Moscow, who asked him to make a favourable gesture towards the Vatican: “The Pope? How many divisions?” Our strategists will ask: “Safety? How many billions?” Compared with the consequences of pandemics, global warming, more frequent extreme weather events and cyberattacks, it will not be much. And as safety comes to focus more and more on a few rare black swans, its voice may carry even less through the megaphone of social media. Safety experts are going to have to take serious lessons in mass influence and lobbying. In other words, in politics.