Recognizing Complexity in Risk Management: The Challenge of the Improbable
In the prevailing safety management paradigm, uncertainty is the enemy, and we seek to eradicate it through anticipation and predetermination. But this strategy generates “robust yet brittle” systems, unable to handle disturbances outside their envelope of designed-for contingencies. A paradigm shift is needed. We are immersed in uncertainty. We live with it, we have evolved with it as living beings, our cognitive and social skills have developed to handle the associated unpredictability. Managing uncertainty is the way we deal with the world’s complexity with our limited resources. We need to better understand these abilities and augment their power in order to better engineer resilience into our systems.
Keywords: Safety paradigm shift · Uncertainty management · Resilience
Are we on the eve of a “paradigm shift”1 in the safety domain? Do we recognize significant limitations to the ability of the prevailing paradigm to take into account observed facts? Do we have a credible alternative? Can a different social organization and risk management consensus be built around it? In this chapter, I will argue that the prevailing safety management paradigm generates “robust yet brittle” systems, unable to handle disturbances outside their envelope of designed-for contingencies, and that a paradigm shift is needed. We need to change our vision of uncertainty. We live with it, we have evolved with it as living beings, developing cognitive and social skills to handle the associated unpredictability. We need to better understand these abilities and augment their power in order to engineer resilience into our systems.
4.2 Revisiting the Concepts
4.2.1 Limitations of the Current Paradigm
There are at least three ways for a safety paradigm to reveal its limits: (i) the safety objectives announced in its name prove largely unfulfilled; (ii) accidents deemed impossible under this paradigm do occur; (iii) the paradigm fails to reassure civil society. Since the Fukushima Daiichi accident, these three paths are wide open in the world of nuclear safety. The observed safety level does not match the announced objectives: the observed frequency of an uncontained core meltdown in the Western technology world is about twice the target. In spite of frequent suspicions of negligence, Tepco had made reasonable use of the available scientific knowledge, methodology and expertise to predict the magnitude of potential tsunamis off the coast of Fukushima. What we have here is not a poor implementation of an effective paradigm, but a reasonable application of a paradigm of limited effectiveness. In other domains, such as aviation or offshore oil operations, several recent major accidents (Air France 447, Qantas 32, Deepwater Horizon) have also illustrated this vulnerability to the ‘surprises’ triggered by unexpected, unthought-of, or simply rare events. The usual response to this kind of vulnerability sounds like: “we have learned the lesson: let’s add this scenario to the current threat list, and we won’t be fooled next time”. In other words, the usual response is a commitment to do more of what is already done: anticipating threats and responses; extending the domain of predetermination.
4.2.2 The Total Predetermination Fallacy
However, what this vulnerability clearly challenges is the safety paradigm itself, i.e. the idea that safety can be based on the supposedly exhaustive anticipation of all potential threats, and the predetermination of all the expected (safe) responses. This safety paradigm dates from the 1950s, and from the efforts made in the post-war period, within the framework of the development of strategic nuclear forces, the aerospace industry and nuclear electricity, to better control system reliability and contingencies. The enemy was the emergence of unexpected behavior, including technical failures and “human error”. Methods for the systematic anticipation of hazards, failures and their effects (fault trees, FMEA), and methods for a posteriori analysis of unexpected events (root cause analysis, causal trees) were developed. And while it was quickly realized that human operators were difficult to incorporate into predictive models, the attempt to treat them like technical equipment by assigning them a calculable reliability coefficient (e.g. THERP) has long been in vogue and continues today (e.g. HEART). Ideas started to evolve in the 1980s, when the ‘soft’ social sciences were introduced into safety thinking to help better understand the role of individuals, teams, organizations and cultures in accidents. But while the focus of interest slowly shifted from technical to human failures, then from front-line operators’ errors to latent, organizational failures, the core of the ‘safety model’ has remained the same. Deviations are the modern figure of risk; retrospectively seen as the causes of incidents and accidents, they are systematically chased in search of the modern Grail: a world where nothing goes wrong, a perfect world.
Within this paradigm, the goal is to reduce uncertainty, to extend a deterministic, or failing that at least a probabilistic, model of the world. Uncertainty is quantified and analytic rationality is applied to demonstrate that the system is fully controllable under predetermined operating strategies, tactics and procedures. This quantification is obtained through the concept of risk: a combination (usually a multiplication) of the probability and the magnitude of the predictable damage. This has allowed a considerable development of safety management. However, risk quantification methods do more than facilitate decisions: unnoticed by decision makers, they make decisions themselves. Indeed, their multiplicative nature postulates an equivalence relationship between all kinds of risks: a “distant elephant” (a remote catastrophe) weighs no more than a close mouse (a probable small discomfort). Furthermore, not all uncertainties are calculable, and there is an epistemological break between probabilistic and non-probabilistic randomness. And risk is always an interpretation, in a multi-dimensional, social and complex interpretive field. Risk-related decisions are necessarily the result of a political process. Risk quantification methods hence do a political job: like magicians diverting the public’s attention from their trick, they focus on anticipated contingencies, on calculable uncertainties, and crush long-term uncertainty into the exponential discount of the thin Gaussian distribution tail [4, 39, 40]. They contribute to the development of a social acceptability of risk coherent with a technocratic society, in which a knowledgeable elite tells the ignorant public: “we have the scientific knowledge, we master the only valuable rationality, so we’d rather decide for you; don’t be worried, we have full control.” We think, hence you are2... safe!
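The equivalence imposed by the multiplication can be made concrete. A minimal sketch (all figures invented for illustration) shows how an expected-loss score makes a remote catastrophe and a near-certain nuisance interchangeable:

```python
# Toy illustration: expected-loss (probability x magnitude) risk scoring
# treats a remote catastrophe and a frequent nuisance as equivalent.
# All figures below are invented for illustration.

def expected_loss(probability, magnitude):
    """Classical risk score: probability multiplied by damage magnitude."""
    return probability * magnitude

# A "distant elephant": a catastrophe with odds of 1 in a million per year
# and a damage magnitude of 10 million (arbitrary units).
elephant = expected_loss(1e-6, 10_000_000)

# A "close mouse": a near-certain small discomfort of magnitude about 10.
mouse = expected_loss(0.999, 10.01)

# Both scores come out at roughly 10: the formula cannot tell them apart.
print(elephant, mouse)
```

The point is not that the arithmetic is wrong, but that the single scalar erases exactly the distinction (catastrophic versus benign) that a political debate about risk would care about.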
Unfortunately, as Scott Sagan nicely put it, “things that have never happened before happen all the time”. And when they happen, the superior vigilance and conformity capacities developed for the anticipated events will not help: they will often do the opposite. Ironically, the capabilities needed to cope with the unexpected are those which are eroded in the continuous attempt to prepare for the expected. Consequently, ‘unknown unknowns’, ‘fundamental surprises’ and ‘black swans’ occasionally sweep aside the Cartesian rational construction. When the ‘extremely low probability’ is suddenly equal to 1, when the ‘defense in depth’ barriers are all submerged by the same wave, the predetermination paradigm collapses and reveals its fallacy.
4.2.3 What Is Uncertainty?
Uncertainty is sometimes differentiated from ambiguity, described as ‘second order uncertainty’, where there is uncertainty even about the definitions of uncertain states or outcomes. The difference is that this kind of uncertainty is located within human concepts, rather than being an objective fact of nature. Similarly, the reference to uncertainty in risk management has also recently witnessed a clarification of the difference between stochastic uncertainty and epistemic uncertainty. Stochastic (or random) uncertainty arises from the intrinsic variability of processes, such as the size distribution of a population or the fluctuations of rainfall with time. Epistemic uncertainty arises from the incomplete or imprecise nature of available information and/or human knowledge. When uncertainty results from a lack of obtainable knowledge, it can be reduced by gaining more knowledge, for example through learning, database review, research, further analysis or experimentation. But uncertainty can also result from a more fundamental limitation of potential knowledge. Such limitations may apply to observation, even in the ‘hard sciences’: in quantum mechanics, Heisenberg’s Uncertainty Principle states that an observer cannot simultaneously know both the exact position and the momentum of a particle. They can also apply to the understanding process itself. The dominant scientific explanatory strategy is reductionism, which consists of decomposing phenomena, systems and matter into interacting parts, explaining properties at one level with laws which describe the interaction of component properties at a lower level. But can we ‘explain’ all the properties of the world (physical, biological, psychological, social...) through such a reduction process? Perhaps we could, in principle.3
There are some things that you know to be true, and others that you know to be false; yet, despite this extensive knowledge that you have, there remain many things whose truth or falsity is not known to you.
But the least one can say is that the reductionist strategy does not have the same usefulness for all aspects of the world. Even apparently simple systems with very few components can actually be “complex systems” and exhibit behaviors that cannot be predicted. One of the simplest of these systems is the double pendulum. Its behavior is called ‘deterministic chaos’: it is deterministic (the future is fully determined by initial conditions) yet unpredictable in the long term, because tiny differences in these initial conditions generate widely diverging outcomes. Lorenz defined chaos as arising “when the present determines the future, but the approximate present does not approximately determine the future”. Systems with a large number of components can also exhibit unpredictable behaviors. In simple, large-scale systems, microscopic complexity (individual variety, in other words noise) disappears at larger scales because local mutual interactions annihilate individual variations (e.g. in a fluid at the macro level, the Brownian motion of particles is “submerged” by statistical mean values, and the liquid is still). By contrast, in large-scale complex systems, local variety (small differences) is combined and amplified. Nearby states diverge from each other at an exponential rate, which makes these systems highly sensitive to initial conditions. Microscopic complexity hence creates properties at larger scales: the “noise” determines the system’s large-scale behavior. Sand piles, weather, and insect colonies are examples of such chaotic behavior. This broader form of relationship between micro and macro levels, in which properties at a higher level are both dependent on, and autonomous from, underlying processes at lower levels, is covered by the notion of emergence. Living systems, but also societies, economies, ecosystems and organizations, have such a high degree of autonomy from their parts. “No atoms of my body are living, yet I am living”.
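The double pendulum itself needs some numerical machinery, but the same sensitive dependence on initial conditions can be demonstrated with the logistic map, one of the simplest chaotic systems. In this sketch the starting value and the size of the perturbation are arbitrary:

```python
# Sensitive dependence on initial conditions in the chaotic logistic map
# x -> 4x(1-x). Two trajectories starting a billionth apart end up
# bearing no resemblance to each other.

def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the logistic map from x0, returning the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000000, 50)
b = logistic_trajectory(0.200000001, 50)  # tiny perturbation: 1e-9

# Early on, the two trajectories are indistinguishable...
print(abs(a[3] - b[3]))   # still tiny
# ...but the gap grows roughly exponentially, and within a few dozen
# iterations the trajectories have fully decorrelated.
print(max(abs(x - y) for x, y in zip(a, b)))
```

Both trajectories are perfectly deterministic; it is only the *approximate* present that fails to determine the approximate future, exactly as in Lorenz’s formulation.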
Another important factor of complexity is recursion (and self-reference). A common application of recursion is defining a function or procedure in terms of itself, using the function or procedure within its own definition. Language is recursive: a dictionary defines all the words of a language using the words of that language. Consciousness is a recursive concept, and probably one of the most complex to comprehend. Classical linear causality loses all its meaning within recursive systems.
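Recursion is easy to make concrete in code. A small sketch (the nested-list example is illustrative, not from the text): the depth of a nested structure is defined in terms of the depth of its own parts, i.e. the definition refers to itself:

```python
# Recursion: a definition that refers to itself.
# The depth of a nested list is defined via the depth of its children.

def depth(item):
    """Depth of a nested list: 0 for a bare value, 1 + max child depth."""
    if not isinstance(item, list):
        return 0
    if not item:          # an empty list still adds one level of nesting
        return 1
    return 1 + max(depth(child) for child in item)

print(depth(42))             # 0: a bare value
print(depth([1, [2, [3]]]))  # 3: three levels of nesting
```

Note that the function is well defined even though it calls itself: each recursive call works on a strictly smaller structure, so the self-reference bottoms out. It is unbounded or circular self-reference, as in consciousness or a dictionary, that defeats classical linear causality.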
4.2.4 Environment Ontologies: A Taxonomy of Complexity
Level 1: Quasi deterministic: only one future, with uncertainty on variants that do not change the strategy
Level 2: A limited number of well identified possible future scenarios, each of them having a probability difficult to assess; best strategy depends on which one will actually occur
Level 3: A continuous set of potential futures, defined by a limited number of key variables, but with large intervals of uncertainty and no natural scenario; as in Level 2, the best strategy would change if the outcome were predictable
Level 4: Total ambiguity: the future environment is impossible to forecast; there is no means to identify the set of possible events, even less to identify specific scenarios within this set; it may even be impossible to identify the relevant variables defining the future.
Similarly, the ‘Cynefin framework’ also provides a typology of contexts based on the level of complexity of the situations and problems that may be encountered. That framework intends to provide guidance about what sort of explanations, decisions or policies might apply. It defines five “ontologies4”, in other words five different types of worlds, considering their properties and level of complexity: known, knowable, complex, chaotic, and disorder.
A third classification, based on the domain of validity of statistical methodologies, has been suggested by Nassim Taleb, and is illustrated in Fig. 4.1.
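Taleb’s contrast between thin and fat tails can be sketched numerically. Assuming a standard normal distribution for the thin-tailed case and a Pareto distribution with an arbitrary tail index of 1.5 for the fat-tailed one (both choices are illustrative, not from the text):

```python
# Tail weight under a thin-tailed (Gaussian) versus a fat-tailed (Pareto)
# model. Parameters are arbitrary; the point is the orders-of-magnitude gap.
import math

def gaussian_tail(k):
    """P(X > k) for a standard normal variable, via the complementary
    error function."""
    return 0.5 * math.erfc(k / math.sqrt(2.0))

def pareto_tail(x, alpha=1.5, xmin=1.0):
    """P(X > x) for a Pareto distribution with tail index alpha."""
    return (xmin / x) ** alpha

print(gaussian_tail(6))  # ~1e-9: a "six sigma" event is essentially ruled out
print(pareto_tail(6))    # ~0.07: the same deviation remains quite plausible
```

This is the “exponential discount of the thin Gaussian tail” at work: under the Gaussian model the extreme event is treated as practically impossible, while under the fat-tailed model it stays firmly on the table.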
4.2.5 Uncertainty and Cognitive Control
Recent findings in neuroscience suggest that the reasoning system has evolved as an extension of the emotional system and is still interwoven with it, with emotion playing diverse and essential roles in the reasoning process. According to ,
Expectancies form the basis for virtually all deliberate actions because expectancies about how the world operates serve as implicit assumptions that guide behavioral choices.
Of particular interest for our discussion is Damasio’s idea that the contribution of the emotional system, far from being an archaic disruptor that would degrade the performance of the reasoning process, is an essential contributor to global cognitive performance when it comes to managing uncertainty and complexity. “Even if our reasoning strategies were perfectly tuned, it appears [from, say, Tversky and Kahneman], they would not cope well with the uncertainty and complexity of personal and social problems. The fragile instruments of rationality need special assistance”. This assistance is provided by emotions through “somatic markers”, which “mark certain aspects of a situation, or certain outcomes of possible actions” below the radar of our awareness. Consequently, emotions provide instant risk assessment and selection criteria (pleasure/pain) that enable decisions and action, particularly in the presence of uncertainty. “When emotion is entirely left out of the reasoning picture, as happens in certain neurological conditions, reason turns out to be even more flawed than when emotion plays bad tricks on our decisions”. So cognition does not only produce knowledge, predictability, and (provisional) certainties; it also produces an operative assessment of uncertainties, of our ignorance, of contingencies and threats that may impact our coupling to reality.
Feelings are the sensors for the match or lack thereof between nature and circumstance...Feelings, along with the emotions they come from, are not a luxury. They serve as internal guides, and they help us communicate to others signals that can also guide them. And feelings are neither intangible nor elusive. Contrary to traditional scientific opinion, feelings are just as cognitive as other percepts.
And at the same time, we simply need uncertainty. We need to keep some uncertainty to limit the amount of computational resources needed to act and survive in a world whose complexity is virtually infinite. Through this “bounded rationality”, humans do not seek any “optimum” in either understanding or acting. They seek what is “satisficing”: what is sufficient to achieve their goals under the prevailing conditions. And they constantly adjust the “sufficiency” of their behavior, hence the level of investment of their mental resources, by using heuristics rather than comprehensive analytical reasoning, and by adjusting trade-offs (for example, the thoroughness-efficiency trade-off). In other words, they constantly manage a “cognitive trade-off” [1, 2] in order to save their mental resources. They keep as much uncertainty as possible, while remaining sufficiently effective and reliable. In order to achieve this, they “perceive” their ongoing level of control over the situation and their current and anticipated margins of manoeuvre. They feel5 when they ‘control’ the situation and are likely to continue doing so in the foreseeable future. Otherwise, they readjust their efforts, change tactics or strategy, or even higher-level objectives. This ongoing perception and prediction of control is at the heart of the concept of confidence: it is the correct setting of confidence that allows the efficient allocation of available mental resources to the relevant issues, and thus mainly determines performance. Much of the ability to control an everyday dynamic situation lies not so much in knowledge and skills as in the strategies, tactics, and anticipations that allow operators to ensure that the requirements of the situation are not going to extend beyond their expertise. The talent of “superior drivers” lies in their ability to control the level of complexity and dynamics of the situation itself, so that they do not have to use their (superior) skills.
In brief, our cognitive system is actually a manager of uncertainty: our cognitive system regulates the uncertainty level to handle its limitations.
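The satisficing strategy described above can be sketched as a search rule: stop at the first option that is good enough rather than evaluating every option. The candidate plans, scores and threshold below are invented for illustration:

```python
# Satisficing versus optimizing: a bounded agent stops at the first option
# that clears a "good enough" threshold instead of examining everything.
# Plans, scores and threshold are invented for illustration.

def optimize(options, score):
    """Exhaustive search: evaluate every option, pay the full cost."""
    return max(options, key=score)

def satisfice(options, score, threshold):
    """Bounded search: take the first option that clears the threshold."""
    for option in options:
        if score(option) >= threshold:
            return option
    return options[-1]  # nothing was good enough: keep the last one examined

plans = ["A", "B", "C", "D"]
quality = {"A": 0.4, "B": 0.75, "C": 0.9, "D": 0.6}.get

print(optimize(plans, quality))        # "C": the best, after 4 evaluations
print(satisfice(plans, quality, 0.7))  # "B": good enough, after 2 evaluations
```

The satisficer accepts a slightly worse outcome in exchange for a large saving in evaluation effort; this is the cognitive trade-off in miniature, with the threshold playing the role of the “sufficiency” setting the text describes.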
4.2.6 Uncertainty and Risk Management
Far beyond that, uncertainty is also essential for evolution. Life could not exist in a totally ordered and repetitive world, because such a world would leave no room for anything new to occur. Hence randomness and uncertainty are essential for life, and a deep psychological and sociological need. We could not manage relationships with others if we knew everything about their feelings and intentions. Rules would not work without some dose of ambiguity and hypocrisy. It is the uncertainty about the timing of our inevitable death that makes the idea of it bearable.
And paradoxically, uncertainty is essential for safety. The reason is that human behavior is self-represented through consciousness, hence recursive, and social behavior is more and more self-represented through the development of mass media, hence partially recursive. This opens the door to some form of downward causation, or more accurately, to a kind of retroactive causation, so to speak: a representation of the future can change the present. For example, polls before a vote change the results of the vote. The paradoxical effect of the prevailing safety paradigm is to develop a collective feeling of total control, hence triggering behaviors that reintroduce or reinforce threats. Many authors have insisted on the idea that a system cannot be safe without some residual fear, a “reasonable dose of paranoia” to keep people wary. Hence, by analogy with self-fulfilling prophecies,6 safety can be seen as a self-defeating prophecy. The challenge is indeed symmetrical to that of the prophets: the key question is how to keep concern for risk alive when things look totally safe. What we need to do, in Hans Jonas’s terms, is to introduce “heuristics of fear” in order to “build a vision of future such that it triggers in the present time a behavior preventing that vision from becoming real”.
This raises the issue of “whistle-blowers’” credibility and the issue of “weak signals”. In brief, a “weak signal” is a precursor of a serious problem with a low intensity or low salience, so that it is not detected, and the anticipatory benefits it could have provided are missed. Many efforts have been made recently to try to amplify systems’ sensitivity to weak signals. However, this notion of weak signal is still fully embedded in the paradigm of anticipation, and therefore meets the same contradictions and limitations when confronted with a complex world. Hypersensitivity to initial conditions and intrinsic divergence precisely multiply “weak signals”, which are undetectable and uninterpretable in real time, while crystal clear with the benefit of hindsight. Furthermore, on many occasions the signals were strong, the threat well identified, and yet the response inadequate. The problem was not the intensity of the signal but the inability of the main actors to listen to the “whistle-blowers”. According to some, the main challenge may not be to know, but to believe what we know, deeply enough to act upon it and accept the usually high destabilization price, before a crisis forces us to do so.
4.3 Is There a ‘Credible Alternative’?
4.3.1 Nature and Scope of Necessary Changes
As we have seen previously, the issue of uncertainty mainly boils down to the management of complexity. Basically, the contemporary prevailing safety paradigm is a simplification of the world, through modeling, anticipation, and predetermination. “Obviously we can only deal with it by engaging in the vicious cycle of programming and vulnerability: more programming generating more vulnerability, which requires and legitimates increased programming. And thus a concentration of power. The self-organizing capacity of society is paralyzed, leaving a straight-growing organization”. There is a key idea in this last sentence: the idea that our societies are more and more controlled, with less and less autonomy left to individual actors; but also the idea that they are less and less the result of a self-organizing evolution, and more and more the outcome of a design process.
But designed systems and evolved systems have very different properties. One of these differences has to do with the role of emergence: evolved systems ‘rely’ on emergent structures to generate new system functionalities, while designers generally work hard to suppress emergent features. Socio-technical systems are hybrids between designed and evolved systems. They exhibit emergent properties (which, incidentally, do not necessarily add desirable or exploitable system functionalities). A second difference between designed and evolved systems is the role of diversity. For evolved systems, diversity is one of the main mechanisms that allow adaptation to changing conditions, and conversely, adaptation is one of the main processes that create diversity. For designed systems, diversity is a source of complexity, and usually seen as a source of unreliability. A third difference between designed and evolved systems is the means used to achieve functional robustness. Edelman and Gally argue that designed systems rely on redundancy, while evolved systems are characterized by a degeneracy of the structure-function mapping: biological structures typically have more than one functionality, and biological functionality is achieved by more than one structure, in very different ways. Evolved systems also develop functional vicariance, which means a flexible, adaptable structure-function mapping: a structure dedicated to one functionality can be reconverted to achieve a different functionality if need be.
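The redundancy/degeneracy distinction can be sketched with a toy reliability calculation. The failure probabilities are invented and the model deliberately crude: identical redundant copies share a common-mode failure cause, while structurally different mechanisms achieving the same function are assumed (in this sketch) to fail independently:

```python
# Redundancy versus degeneracy: identical backups share failure modes,
# while dissimilar mechanisms for the same function tend to fail
# independently. All probabilities are invented for illustration.

p_fail = 0.01          # failure probability of any single unit
p_common_mode = 0.005  # chance a single cause disables all identical copies

# Redundancy: two identical units. Independent failures multiply, but the
# common-mode term dominates because both copies share it.
p_redundant = p_fail ** 2 + p_common_mode

# Degeneracy: two structurally different means to the same end, with no
# shared failure mode in this toy model.
p_degenerate = p_fail ** 2

print(p_redundant)   # 0.0051: limited by the common mode
print(p_degenerate)  # 0.0001: fifty times better in this sketch
```

The “same wave submerging all the defense-in-depth barriers” at Fukushima is precisely a common-mode failure of redundant, similar barriers; degenerate, dissimilar mechanisms are one way evolved systems escape that trap.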
Hence a challenger paradigm should propose both a way to get out of the simplification-vulnerability circle, hence “outmaneuver complexity”, and a way to organize the socio-technical systems differently, reintroducing a proper account of the self-organizing forces.
4.3.2 Suggesting New Trails
4.3.2.1 Resilience Engineering
Resilience has been defined as “the intrinsic ability of a system to adjust its functioning prior to, during, or following changes and disturbances, so that it can sustain required operations under both expected and unexpected conditions” [17, 18]. This notion is therefore particularly relevant for thinking about the management of uncertainty. There are two main ways to understand resilience: the first is to think in terms of control, the second in terms of adaptation.
In the first perspective, socio-technical systems are regarded as (self-)controlled systems, with a combination of open and closed control loops. Hollnagel proposes a list of four “cornerstones” underlying resilience, with a view that is close to the robustness of the system’s control function with respect to disturbances:
The ability to respond: the system must “know what to do” to respond appropriately in real time, including to the unexpected;
The ability to monitor: the system must know “what to watch” to detect potential threats; it must monitor its own internal state, the state of its processes, and its environment, in order to maintain the regulation needed against fluctuations, and to detect destabilizations that require a change of functioning mode;
The ability to anticipate: the system must have a sufficient phase advance to anticipate what will happen, predict the development of potential threats or opportunities, and more generally predict changes and their consequences, in order to maintain sufficient leeway over different time horizons;
The ability to learn: the system must be able to expand its repertoire of responses based on its experience, but also to adapt its anticipatory strategies on the basis of the success or failure of past strategies.
Several complementary principles for building resilience have also been suggested:
Learning to live with uncertainty. The existence of unpredictable surprises should be accepted as normal. Management should not seek to systematically eradicate disturbances, but rather to deal with their effects by spreading risk through diversification of resources, operating modes and activities.
Maintaining internal diversity (including varieties with low yields) to facilitate reorganization and renewal, investing in emergency stocks and reserves, selecting components tolerant to disturbance.
Combining several types of knowledge for learning, which enables social actors to work together, even in an environment of uncertainty and limited information. Scientific understanding can be enriched by explorations of local and traditional knowledge.
Creating opportunities for self-organization. Self-organization connects the previous three factors. It increases the capacity for interaction between diversity and disturbances. It nourishes the learning process, and roots it in a continuous churn of trial and error over institutional arrangements and knowledge, enabling them to be tested and revised.
If a self-organizing system is capable of generating novelties, it is because it has the ability to adapt to the random events that disturb it, to assimilate them by changing its structure. [...] But for this [...] a necessary condition [...] is a degree of indeterminacy: it is, among other conditions, because the system is partially undetermined that it can integrate the disturbances that affect it, and transform them into meaningful experiences.
All complex adaptive systems strive to manage a balance between their degree of specialization and short-term optimization, and the robustness of their performance outside their adaptation envelope; that is to say, a compromise between optimality and fragility. The more a system is “fit” (optimized for a given equilibrium), the more sensitive it will be to disturbances that may disrupt this balance. Resilience engineering has to do with the proper management of this optimality-brittleness trade-off, in other words, with the maintenance of the adaptive capacity of a system, whether first-order adaptation (the ability to maintain a balance) or second-order adaptation (the ability to change and develop coping mechanisms to find a new balance). Hollnagel et al. consequently describe a resilient system as being able to:
Recognize the signs that its adaptability is falling or inadequate given the current level of uncertainty and constraints, and given future bottlenecks;
Recognize the threat of depletion of its reserves or buffers;
Identify when it is necessary to change priorities in the management of trade-offs, and to adopt a higher-order logic;
Change perspective, contrast perspectives beyond nominal system states;
Navigate the interdependencies between roles, activities, levels, and objectives;
Recognize the need to learn new ways to adapt;
Analyze its modes of adaptation and risk assessment;
Maintain its ability to adapt, generate and constantly regenerate its potential to adapt to a spectrum of situations as broad as possible.
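The optimality-brittleness trade-off described above can be illustrated with a toy payoff model (all numbers invented): a specialist strategy tuned to nominal conditions outperforms a hedged generalist until the probability of disturbance rises.

```python
# Optimality versus brittleness: a specialist tuned to nominal conditions
# beats a generalist there, but collapses off-nominal. Payoffs are invented.

def performance(strategy, condition):
    """Payoff of a strategy under a given condition (arbitrary units)."""
    payoffs = {
        ("specialist", "nominal"): 100, ("specialist", "disturbed"): 5,
        ("generalist", "nominal"): 70,  ("generalist", "disturbed"): 60,
    }
    return payoffs[(strategy, condition)]

def expected_performance(strategy, p_disturbed):
    """Expected payoff given the probability of a disturbed condition."""
    return ((1 - p_disturbed) * performance(strategy, "nominal")
            + p_disturbed * performance(strategy, "disturbed"))

# With rare disturbances, the "fit" specialist wins comfortably...
print(expected_performance("specialist", 0.05))  # 95.25
print(expected_performance("generalist", 0.05))  # 69.5
# ...but a modest rise in disturbance probability reverses the ranking.
print(expected_performance("specialist", 0.40))  # 62.0
print(expected_performance("generalist", 0.40))  # 66.0
```

The crossover point is exactly what resilience engineering is about managing: how much nominal performance to sacrifice in order to keep adaptive capacity outside the envelope the system was optimized for.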
4.3.2.2 Organizational Aspects
The High Reliability Organizations (HRO) community (e.g. [19, 20, 21, 33]) has tried to define the features shared by organizations that seem to be “highly reliable” in their management of safety. The classical model of the pyramidal organization is a homeostatic hierarchical control model, where the control centre (at the top) regulates the conditions of process coupling to the real environment (at the bottom) to reach its objectives, by compensating for the variations it is subjected to. HRO research has shown that organizations capable of maintaining their reliability levels do not follow such a bureaucratic hierarchical structure, but are rather characterized by both a powerful centralized, strategic decision-making process (i.e. consistent with the classical hierarchical model) and a powerful decentralized operational decision-making process, which gives front-line operators strong empowerment, for safety issues in particular.
Along the same lines, the collibrationist7 movement considers that, in risk management, the idea that societies formulate objectives based on acceptable levels of risk and seek to achieve them through rational management is a fiction, especially because there is no social rationality to define an ‘acceptable level of risk’. Therefore risk regulation is not a homeostatic process regulating toward a target, but the outcome of the game between many antagonistic forces representing the interests of different stakeholders, through a process of ‘coopetition’, that is to say, simultaneous cooperation and competition. Rather than trying to introduce an illusory teleology, collibrationists advocate institutionalizing a game of ‘tug of war’ between the opposing tensions, so that these tensions are built into the institutions, all parties are identified and represented, the conflicting values involved are publicly debated, and all interests can defend themselves, so that the final balance is not found by crushing one of them. Control is exercised by the regulatory power by changing the constraints acting in the balancing mechanisms, for example by adjusting regulatory constraints, taxes, access to information, etc., in order to maintain an equilibrium between powers.
Such ‘polycentric structures’8 have a number of characteristics favorable to resilience. They generate more cross-checks, a better balance of forces, better discussion and a more open competition of ideas, and more alternatives in case of failure of the ongoing policy. In the long term, this improves the management of transactions and compromises, the distribution of responsibilities, and the coordination of behaviors. In particular, as institutions are established by those familiar with local conditions, they have a better fit and a greater adaptability to these conditions, a greater legitimacy and a higher level of participation.
At the price of redundancy and of some apparent short-term waste, polycentric governance is seen by its supporters as globally more efficient, especially in the long term, at managing conflicts of interest over common resources (the ‘tragedy of the commons’), or between conflicting goals, especially when the characteristic time horizons of the different interests are not the same. Organizations usually have multiple and partially contradictory objectives, which can vary depending on their components (e.g. various job profiles) and also have different time-frames. Organizations attempt to balance their performance to achieve these various objectives, and thus, in a bounded world, trade-offs are necessarily at stake. Resilience somehow measures the quality and robustness of these trade-offs, i.e. their stability in the presence of disturbances. In this respect, another important resilience characteristic relates in particular to the ability to make “sacrificing” decisions, such as accepting the failure to reach an objective in the short term to ensure another, long-term objective, or ‘cutting one’s losses’ by giving up initial ambitions to save what is essential. A ‘sacrificing’ decision is an acceptable solution, found at a higher level of the means-goal abstraction hierarchy, to a conflict that could not be solved at the initial level of that hierarchy.
In safety management, uncertainty is seen as the enemy, and the prevailing paradigm tries to eradicate it through anticipation of all situations and predetermination of the corresponding responses. But uncertainty is everywhere, and the current safety strategy generates a vicious circle of predetermination and vulnerability: more predetermination generates more vulnerability, which calls for yet more predetermination, hence “robust yet brittle” systems, less and less able to handle disturbances outside their envelope of designed-for contingencies. This will inexorably lead to more ‘fundamental surprise’ accidents. A paradigm shift is needed. Another approach to safety is possible. Uncertainty is not necessarily bad. We are in fact immersed in uncertainty; we live with it, and we need it to deal with the world’s complexity with our limited resources. We have inherited cognitive and social tools to manage it and to deal with the associated unexpected variability. We need to better understand these tools and augment their efficiency in order to engineer resilience into our socio-technical systems.
In his seminal book The Structure of Scientific Revolutions, Thomas Kuhn challenged the then prevailing view of science as a continuous accretion of facts and concepts. He advocated a vision in which episodes of conceptual continuity are interrupted by ‘scientific revolutions’, like the Copernican one, during which the failure of current theories to properly account for some observed facts is recognized and leads to a “paradigm shift”.
“We think, hence you are” is the title of Nicolas Bouleau’s blog.
The French mathematician and astronomer Pierre Laplace once nicely captured this vision. His contention was that the states of a macro system are completely fixed once the laws and the initial/boundary conditions are specified at the microscopic level, whether or not we (limited) humans can actually predict these states through computation. This is one possible form of relationship between micro and macro phenomena, in which the causal dynamics at one level are entirely determined by the causal dynamics at lower levels of organization.
In artificial intelligence and information science, ‘ontologies’ are structural frameworks used to organize information and to reason about it: an ontology formally represents knowledge as a set of concepts within a domain, together with the relationships between those concepts. Ontologies provide a shared semantic structure for a domain, as perceived by its actors; they can serve as a basis for the construction of formal reasoning methods, and can support the design of organizations and IT tools.
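To make this concrete, a minimal ontology can be sketched as a set of concepts linked by typed relation triples, over which simple formal reasoning (here, transitive ‘is_a’ subsumption) becomes possible. The safety-domain concept names below are invented for illustration only:

```python
# Minimal sketch of an ontology: concepts linked by (subject, relation, object)
# triples. The safety-domain names are invented for illustration.
triples = {
    ("Tsunami", "is_a", "NaturalHazard"),
    ("NaturalHazard", "is_a", "Hazard"),
    ("Hazard", "threatens", "Plant"),
    ("SeaWall", "mitigates", "Tsunami"),
}

def ancestors(concept, kb):
    """All concepts reachable from `concept` through transitive is_a links."""
    found, frontier = set(), {concept}
    while frontier:
        step = {o for (s, r, o) in kb if r == "is_a" and s in frontier}
        frontier = step - found
        found |= frontier
    return found

# A query a reasoner could answer: a tsunami is a kind of hazard,
# hence whatever applies to hazards (e.g. "threatens Plant") applies to it.
print(sorted(ancestors("Tsunami", triples)))  # ['Hazard', 'NaturalHazard']
```

It is this shared, explicit structure that lets different actors and tools exchange and reason over the same domain knowledge.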
This perception of control is emotional in nature; it is accompanied by a feeling of satisfaction or pleasure. Conversely, the perception of a loss of control would trigger the stress response, with the associated adrenaline. The pleasure of feeling in control is greater when the situation is inherently risky and difficult to control.
A ‘self-fulfilling prophecy’ is a prediction (a representation of the future) that triggers, in the present time, actions and reactions which make that future happen. Such prophecies are a central element of ancient Greek tragedies, designed to illustrate an inexorable fate. Because an oracle warned him that his son would eventually kill him and marry his mother, Oedipus’s father abandoned his infant son on a mountain, thereby creating the conditions for the prophecy to be fulfilled. Oedipus survived, grew up as a stranger and, not knowing who they were, killed his father and married his mother while trying to unravel the mystery of his birth.
The term co-libration comes from the English verb “to librate” meaning to give a little tap on the pan of a balance to check that it has reached equilibrium.
The notion of ‘polycentric governance’ was generalized by Elinor Ostrom, who earned the Nobel Prize in 2009 for her work on the governance of complex ‘socio-ecological’ systems that share common resources (such as water).
- 1. Amalberti R (1996) La conduite de systèmes à risques, 2nd edn. Coll. Le Travail Humain. PUF, Paris
- 2. Amalberti R (2001) La maîtrise des situations dynamiques. Psychologie Française 46(2):105–117
- 3. Bieder C, Bourrier M (eds) (2013) Trapping safety into rules: how desirable or avoidable is proceduralization? Ashgate, Farnham
- 4. Bouleau N (2009) Mathématiques et risques financiers. Odile Jacob, Paris
- 5. Brandenburger AM, Nalebuff BJ (1997) Co-opetition. Currency Doubleday, New York
- 7. Courtney H, Kirkland J, Viguerie P (1997) Strategy under uncertainty. Harv Bus Rev 75(6):66–79
- 8. Damasio AR (1994) Descartes’ error: emotion, reason, and the human brain. Putnam Publishing, New York
- 9. Dunsire A (1993) Manipulating social tensions: collibration as an alternative mode of government. MPIfG discussion paper 93/7, Max-Planck-Institut für Gesellschaftsforschung, Köln
- 10. Dupuy J-P (1982) Ordres et désordres: enquête sur un nouveau paradigme. Seuil, Paris
- 11. Dupuy J-P (2004) Pour un catastrophisme éclairé: quand l’impossible est certain. Seuil, Paris
- 14. Gilbert C (2013) Entendre aussi les signaux forts. Prévenir les crises. Armand Colin, Paris
- 15. Hollnagel E (2009) The ETTO principle: efficiency-thoroughness trade-off. Why things that go right sometimes go wrong. Ashgate, Farnham
- 16. Hollnagel E (2009) The four cornerstones of resilience engineering. In: Resilience engineering perspectives, vol 2: preparation and restoration. Ashgate, Farnham, pp 117–134
- 17. Hollnagel E, Woods DD, Leveson N (2006) Resilience engineering: concepts and precepts. Ashgate, Aldershot
- 18. Hollnagel E, Pariès J, Wreathall J (2011) Resilience engineering in practice: a guidebook. Ashgate, Farnham
- 20. La Porte TR, Consolini P (1991) Working in practice but not in theory: theoretical challenges of high-reliability organizations. J Public Admin Res Theory 1:19–47
- 22. Lanir Z (1986) Fundamental surprises. Decision Research, Eugene
- 23. Le Moigne J-L (1990) La modélisation des systèmes complexes. Dunod, Paris
- 26. Mintzberg H (1996) The rise and fall of strategic planning. Free Press, New York
- 28. Hoffman FO, Hammonds JS (1994) Propagation of uncertainty in risk assessments: the need to distinguish between uncertainty due to lack of knowledge and uncertainty due to variability. Risk Anal 14(5):707–712
- 29. Pariès J (2011) Lessons from the Hudson. In: Resilience engineering in practice: a guidebook. Ashgate, Farnham
- 30. Pariès J (2006) Complexity, emergence, resilience. In: Resilience engineering: concepts and precepts. Ashgate, Aldershot
- 31. Pariès J (2012) Resilience in aviation: the challenge of the unexpected. Presentation at the IAEA technical meeting on managing the unexpected from the perspective of the interaction between individuals, technology and organization
- 32. Reason J (1997) Managing the risks of organizational accidents. Ashgate, Farnham
- 33. Rochlin GI, La Porte TR, Roberts KH (1987) The self-designing high-reliability organization: aircraft carrier flight operations at sea. Naval War College Rev 40(4):76–90
- 34. Simon HA (1982) Models of bounded rationality: behavioral economics and business organization. MIT Press, Cambridge
- 36. Snowden DJ (2005) Multi-ontology sense-making: a new simplicity in decision making. Inf Prim Care 13:45–53
- 37. Snowden DJ, Boone ME (2007) A leader’s framework for decision making. Harv Bus Rev 85(11):68–76
- 38. Swain AD (1972) Design techniques for improving human performance in production. Industrial and Commercial Techniques Ltd, London
- 39. Taleb NN (2007) The black swan: the impact of the highly improbable. Random House, New York
- 40. Taleb NN (2008) The fourth quadrant: a map of the limits of statistics. Edge
- 41. Weick KE, Sutcliffe KM (2001) Managing the unexpected: assuring high performance in an age of uncertainty. Jossey-Bass, San Francisco
- 42. Williams JC (1985) HEART: a proposed method for achieving high reliability in process operation by means of human factors engineering technology. In: Proceedings of a symposium on the achievement of reliability in operating plant, Safety and Reliability Society (SaRS), Birmingham
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this book are included in the book’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.