1 Introduction

As a socio-anthropologist of organizations, I have conducted ethnographic studies comparing similar organizations (nuclear power plants) in the same sector (the civil nuclear industry), confronted with similar problems in different countries, regions and contexts (France, the U.S.). I have also compared organizations in different sectors (the nuclear industry, global health, health-providing institutions, and more recently a police department), dealing with sensitive activities, in different countries (France, Switzerland, Japan, the U.S.). This cross-national and cross-sectoral ethnographic perspective led me to focus on generic organizational dynamics, which concern a wide spectrum of high-risk organizations, where safety is paramount.

Rather than proposing a cultural model of high-risk organizations, highly dependent on cultural influence, difficult to measure and empirically unfounded (Bourrier, 2005), I chose to consider the challenges that these peculiar organizations face daily and to compare their organizational responses and mitigation strategies. As Rochlin observed:

The challenge is to gain a better understanding of the interactive dynamics of action and agency in these and similar organizations and the means by which they are created and maintained. These arise as much from interpersonal and intergroup interaction as from more commonly studied interactions with external designers and regulators. The interaction is a social construction anchored in cultural dynamics, therefore there are wide variances in its manifestation even for similar plants in roughly similar settings. (Rochlin, 1999: 1558)

The same approach applies to safety models, often presented as gold standards. Current theories in use, from “HRO”, “Resilience Engineering” to “Sensemaking” and “Ultra Safe Systems”, have been understood too often as prescriptive labels. Deviations from their theoretical principles would signal difficulties in achieving consistent delivery of safe performance. However, most of these theories are descriptive by nature. Even though they are different in their propositions and should not be considered as simple substitutes, they still belong to a certain category of theoretical attempts, post-TMI, aiming at making sense of “surprises in the field” (i.e. the complex and largely unpredictable interactions between technology and humans in very demanding systems). Despite their differences and debates, these theories all seek to move away from a stereotyped view of daily operations and, more essentially, to produce a much more complex and rich view of the social production of safety (for a nuanced presentation of these theories and their respective traditions, see Le Coze, 2016).

These schools of thought propose a useful categorization that combines a number of organizational features, promising to improve the management of high-risk organizations and to allow continuous improvement (Bourrier, 2011; Le Coze, 2016). However, their respective lists of properties were never envisioned by their creators as definitive. Understood as a process, safety is essentially a never-ending organizational learning process, a “dynamic non-event” in Weick’s own words. Paradoxically, and for reasons that remain to be uncovered, these theories, however well-known, have not easily travelled to the shop floor level. Concrete examples of operationalization at the plant level are still lacking (Le Coze, 2016).

To add some complexity to this already dense picture, it is also my observation that, most of the time, each high-risk organization offers a unique response to its intrinsic challenges and displays only parts of the so-called “safety models”. Hybridization, borrowings, innovative mitigation strategies, and local adaptations are also central to the social production of safety. This is where the notion of “safety regime” offers an alternative for documenting the organizational trade-offs that high-risk organizations develop, including the examination of their consequences (direct or indirect) for working teams and for the social production of safety.

2 “Safety Culture”, “Safety Cultures”, “Cultures for Safety”

“Safety culture” has been and still is a powerful catchphrase to introduce a wide variety of issues that do not fit easily into many models of risk reduction, risk management or risk governance, either those in place or those currently being developed. “Safety culture” gave a voice to the issue of variance, to the “that’s the way we do business here” philosophy and to the irreducible levels of variation, when confronted with the large spectrum of organizations in society today (Perrow, 1991; Scott & Davis, 2015). High-risk organizations constitute one subset, and share characteristics with large technical bureaucracies, or large socio-technical systems.

However, safety culture is largely an “after-the-fact” concept. When reading investigation reports after an accident, one often finds paragraphs stressing the “lack of safety culture”, commenting on a “broken safety culture” (CAIB, 2003), or a “silent safety culture” (Rogers, 1986). Nevertheless, the concept has some merit (Guldenmund, 2000), even if it was unrealistic to expect such a wide scope of services from one single concept. One way to overcome the limitations often expressed about the concept is to move from “safety culture”, as a set of norms and programs, to “cultures for safety”, as a set of practices. More than a cosmetic vocabulary twist, this expression might offer space to embrace a diversity of models, responses and options, viable on the ground and debatable in practice and in theory.

Behaviors, actions, practices, beliefs, decisions, opinions, innovations, designs, hypotheses and perceptions are influenced by a variety of factors. They all constitute the fabric of a corporate culture, which is simultaneously influenced by all of the above. Corporate cultures, especially in industries where risk and uncertainty are paramount, include a safety culture component and produce cultures for safety. Within high-risk organizations, it is highly probable that the “culture for safety” component is a key component that impacts the rest of the corporate culture. Simultaneously, these companies and their plants are also in business to deliver services and products to clients. Safety cannot absorb and monopolize their entire business. Safety is a pre-condition, not a goal in itself. As scholars pointed out long ago: without safety, there is no socially acceptable business. The “Social License” that these companies are granted explains why they tend to be heavily regulated and sometimes even over-compliant (Gunningham, Kagan, & Thornton, 2004; La Porte & Thomas, 1995).

The safety culture concept masks the fact that professions, trades and units inside a high-risk organization develop different sets of safety knowledge according to their specific needs and positions within the organization. These sets are a mix of rules (prescribed, rehearsed, shared), norms and gold standards (of the trade), customs (“the way we do things around here”), innovations and brilliant improvisations, when the situation demands it. Some rules are taught during the apprenticeship, some are discovered in situ, on the job, and stabilized collectively, and others rely on personal and intimate know-how, which an expert masters over time.

Hence, different safety cultures/cultures for safety coexist in an organization. Safety culture/culture for safety is therefore a synthesis of generic convictions and the management’s visions, intertwined with professional norms and each professional’s experience. It presents itself more as a dialectic than as a set of principles to be followed from A to Z. The key message here is that in various parts of the organization, safety is produced, thought through, and sometimes bargained about. Safety is the object of intense trade-offs between concurrent objectives: regulatory demands, budget constraints, production pressures, planning constraints, technical innovations, workforce fatigue, to name but a few.

Rochlin (1993) and Moricot (2001) once argued that “friction” between trades, professions and units had something to offer for the sake of safety. The conditions under which such a debate, at times controversial, can be sustained cannot be prescribed and decided upon once and for all. The diversity of safety cultures is often seen as an obstacle, whereas it could well be interpreted as a sign of a mature and dynamic organization.

3 On the Limited Usage of “Safety Models” at the Shop Floor Level

A lot of insight and inspiration has been provided by theories originating in the eighties and nineties, such as “HRO”, “Resilience”, “Sensemaking”, and “Ultra-Safe Systems”. Their creators (La Porte, Roberts, Rochlin, Schulman, Hollnagel, Leveson, Weick, Sutcliffe, Amalberti…) sought to explain how and why some organizations did better, were safer, and learned better than others. How to explain the variability? What could account for the differences? What was transferable from one industry to another?

Their detailed descriptions, empirical fieldwork, and conceptual propositions nourished and still nourish scholars across disciplines and industries (Grabowski & Roberts, 2016; Haavik, Antonsen, Rosness, & Hale, 2016; Le Coze, 2016). However, one is compelled to observe that little has filtered back to the industry level. Much of what has been discovered and put on the table has mainly served as a labelling opportunity for some companies (“We, at such and such company, are an HRO”). There has been a lack of traction at the workplace level to implement some of the insights present in the pioneering works.

We offer one possible (partial) explanation for this delay in the practical application of concepts from these safety models (Bourrier, 2017): using them as gold standards failed to address the complex situations encountered on the ground. Their powerful inspirational capacity has been misused, by freezing the options and reducing the complexity, rather than revealing and opening up the subtle trade-offs that may deserve reinforcement or constructive criticism. Paradoxically, though their initial intention was to make sense of a much more diverse picture than what “traditional” safety science studies were offering (for example, a much more complex understanding of human error and decision-making or cognition), their contemporary usage has led to a reduced set of properties. Their level of abstraction does not always help to understand the pressing dilemmas that workers and teams face daily, nor help interpret their corresponding local adaptations.

In the next section, we examine how we came to think of “safety regimes” as a way to escape from the current stalemate.

4 Introducing “Safety Regimes”

The social production of safety is the result of complex transactions and is influenced by several factors, ranging from the demographics of the workforce, the type of labor relations, the economic and financial resources at hand, and regulatory frameworks, to technical options and innovation. In this approach, daily work activities, for example, are central to the production of safety, but they evolve and develop in a precise institutional context and a specific hierarchical structure, and they follow an official division of tasks and responsibilities, which in turn creates incentives as well as deterrents.

Evidently, organization theory has long established (Scott & Davis, 2015) that this formal set of structural constraints, no matter how carefully designed, does not tell the whole story of organizational life: rules and formal structure are by nature incomplete and subject to local adaptations willingly worked out by concerned actors. Hence, safety is the product of diverse forces converging sometimes in stable equilibria and sometimes in more vulnerable ones.

Safety can also be approached through the lenses of the “regime” concept, which we have freely borrowed from other fields. In International Relations, regimes are commonly defined as

sets of implicit or explicit principles, norms, rules, and decision-making procedures around which actors’ expectations converge in a given area of international relations (Krasner, 1985, p. 2).

This concept taken from political science is fruitful for our discussion. It should not be confused with a regime in the political sense like a fascist regime or a police regime. Our intent is to capture some features of safety management that depend on the alignment and the hybridization of key organizational characteristics.

The regime concept is used by political economists when they characterize modern Capitalism and its models (Amable, 2003), and by public health experts to characterize the driving forces in the field of Global Health. For example, Lakoff (2010) distinguishes between a “global health security” regime and a “humanitarian biomedicine” regime. They represent two distinct faces, philosophies and rationales for action in the field of global health, sometimes in opposition and sometimes in coordination. He argues that these two regimes give different responses to a set of essential and common issues for global health actors, pertaining to:

  1. type of threat;

  2. source of pathogenicity;

  3. type of organizations and actors;

  4. type of techno-political interventions;

  5. target of intervention;

  6. ethical stance.

Closer to our subject, scholars in Law and Safety science like Baram, Lindøe, and Braut (2013) use the term “Risk Regulatory Regime” to distinguish Norway and the US with respect to their way of regulating off-shore oil and gas companies. They have systematically compared the Norwegian Continental Shelf and US Outer Continental Shelf through five features:

  1. legal framework;

  2. cost-benefit analysis methodologies;

  3. legal standards;

  4. inspections and sanctions, and

  5. involvement of the workforce.

Inspired by these attempts, our intention is first to identify the key dimensions—which we also call “dilemmas”—lying at the core of how a high-risk organization works, and to show how the organizational responses given to these dilemmas largely shape the “safety regimes”. A given “safety regime” produces a certain organizational culture, which in our view encompasses a safety culture. These “safety regimes” have a certain quality of consistency and stability, but they are not perfect constructions designed to stay immutable. They are dynamic, and their evolution remains partly uncertain and undetermined.

Evolutions affecting these regimes sometimes go unnoticed. They may even lead to misalignment, which might pave the way for “latent human failure conditions” (Reason, 1990), “normalization of deviance” (Vaughan, 1997), or “drift into failure” (Dekker, 2011). In this view, safety culture is a sort of limited proxy which helps understand the type of safety regime one is dealing with. However, studying safety culture by itself and for itself is unlikely to be fruitful. We argue that it needs to be complemented by a thorough investigation of the safety regime in place. This cannot be successfully done without an understanding of the organizational and inter-organizational dynamics at play.

The next section focuses on presenting six categories of problems that structure the type of safety regime which will eventually emerge. Our intention in the rest of this chapter is to clarify the key dimensions that are fundamental to safety regimes.

5 Six Crucial Dimensions

As far as high-risk organizations are concerned, the following issues have to be managed and constantly re-assessed (La Porte, 1996). They constitute complex organizational challenges in all sectors, including the nuclear industry, healthcare, public health interventions and policing. Contingent organizational responses to these dilemmas structure the nature of safety regimes.

First issue: Working with rules, procedures and expanding bodies of regulations is a given in most contemporary organizations (Graeber, 2015), and even more so within high-risk organizations. Yet what are the implications for daily operations of having to document each and every stage of the process?

At a certain level, one could consider that one of the key (cultural) features of these organizations, where safety is paramount, is a strong and ever-increasing reliance on procedures. The ever-growing scope of proceduralization seems endless and unavoidable (Bieder & Bourrier, 2013). As a consequence, the following crucial questions are constantly on the table: Which groups are in charge of creating and updating rules and procedures, and how is it done? How is the “classic” tension between “prescription” and “autonomy” organized? Ergonomists and human factors specialists, along with sociologists of work, have long demonstrated that procedures are important, but that they are imperfect and sometimes counterproductive when the situation calls for the creation of ad hoc solutions. Also of utmost relevance is how best to maintain a questioning attitude towards rules and procedures when compliant behaviors are most of the time rewarded and sought after. Furthermore, how is the coherence of diverse sets of rules both maintained and challenged?

To these crucial questions, there is no single answer (Bourrier, 1999a, 1999b). For example, HRO theorists, along with Weick and Sutcliffe (2007), suggest that one way of mitigating these challenges is to defer to expertise. This is called “migrating decision-making”: the expert on the ground, closest to the problem, decides on the proper and immediate course of action. At the same time, leaders remain responsible for these decisions, even if in retrospect they are found to be less than adequate. Along with a preoccupation with failure and a sensitivity to operations, deference to expertise ensures that the people who are directly concerned with a problem end up in a position where they can offer their views, their opinions and, more importantly, their own solutions. But as we know, many organizations favor a different perspective and prefer strict compliance and hierarchical decision-making.

Second issue: How to best plan, schedule, and anticipate activities and at the same time stay alert in order to avoid the complacency that such a planning culture inevitably produces?

This second feature leads us to the core of the culture of these industries: a culture of preparedness, through anticipation, planning and scheduling. Sennett (1998) once brilliantly explained why routine was important in the workplace. Preparation is key, especially in hostile work environments, where risks are present and should be reduced to the minimum possible. Yet this very culture of preparedness, also present in other professional sectors such as public health (Nelson, Lurie, & Wasserman, 2007) is also an identified obstacle to learning how to face the unexpected. To tackle this limitation, HRO theorists and their followers suggest resisting the tendency to simplify.

Often, safety culture programs tend to oversimplify the complex interactions that workers face in their daily jobs, leaving them dubious about the real benefit of such high-level programs, which do not capture the rich details of their daily efforts. The false promise of scenario planning has already been documented by Clarke (1999) and is constantly re-assessed in the light of particular crises and catastrophes, ranging from pandemic influenza A (H1N1) to Fukushima. It should be recalled here that preparing, scheduling and planning are important components of any organization dealing with complex operational conditions. However, these tasks should not be understood as distinct phases, but rather as integral to the job, while their limitations are constantly taken into account.

Weick argued in a seminal article (1987) that storytelling inside high-risk industries was crucial to allow for the circulation and sharing not only of problematic events and how they unfolded, but also of inventive and resourceful options to solve tricky problems. This commitment to resilience is often quoted as an HRO principle. Hollnagel and his colleagues (Hollnagel, Woods, & Leveson, 2006) take the same position when they argue that not enough is shared about resilient strategies. Sharing not only what went wrong, but also what was correctly mitigated, is of crucial importance to restoring stability. The tendency of these types of organizations to focus on problems masks the fact that they also have important and unrecognized resources to relieve some tensions. The situation encountered by Fukushima-Daiichi operators and their management should not be forgotten: sometimes, nothing holds, and capabilities to improvise have to be mobilized by terrified actors left in the dark, resorting to their own meager devices (Guarnieri, Travadel, Martin, Portelli, & Afrouss, 2015; Kadota, 2014).

Third issue: How to best cope with the uncertainties and risks inherent to their process and their institutional environment and, at the same time, maintain products and services at a socially and economically reasonable cost?

Because risks and uncertainties are not known once and for all (they can pile up: e.g. Fukushima), cultivating vigilance for unforeseen combinations of events is important. This can only be done through the constant challenge of rules and procedures in order to interrogate their intrinsic limitations. HRO theorists noticed that a balance has to be established between relying too much on very obedient people, with no inclination to challenge their working environment, and encouraging hotheads, who are always ready to break rules and procedures to set their own norms of performance: HROs do not look for heroes, but for questioning minds.

Despite decades of emphasis on the importance of a “questioning attitude” (INSAG-4) in any safety culture program, striking a balance between compliance and improvisation remains probably one of the most difficult issues in people management today.

Fourth issue: How to best enable cooperation among various units, crafts, trades and the intervention of many contractors in order to avoid a silo culture and the formation of clans? How best to depend on highly skilled employees, among whom almost no substitution is possible (hence no rotation option), while not getting trapped in their worldviews, entrenched wars and corporatist interests?

As Roberts and Rousseau (1989: 132) explain, with the example of aircraft-carriers in mind, these work environments display a

hyper-complexity (due to) an extreme variety of components, systems, and levels and tight coupling (due to) reciprocal interdependence across many units and levels.

Anyone interested in organizational design is challenged by this characteristic: the degree of specialization and expertise among the different units, departments, crafts and contractors inevitably makes it difficult to communicate each other’s concerns. Work is done in silos (Perin, 2005), “structural secrecy” prevails, and knowledge does not travel easily throughout the hierarchical structure (Vaughan, 1997). Pockets of crucial knowledge (tacit, formal or informal) can stay hidden for a long time and do not percolate easily to the relevant teams or people. Vaughan has eloquently demonstrated how the culture at NASA, which relies so much on hard data supported by mathematical modelling, did not allow other types of evidence to be shared and worked on. As a result, pending issues, deemed illegitimate and hence unresolved, contributed strongly to the Challenger and Columbia accidents. Attention to the circulation of events, stories and narratives is of crucial importance to counter the traps of “structural secrecy”.

Therefore, constantly battling against clans, entrenchment and conventional wisdom is vital for maintaining and improving safety performance. A constant effort has to be made to navigate between trades and reconcile their worldviews in order to hold on to the big picture. This is typical of the role of management in these types of organizations. The same is true when dealing with the widespread subcontracting practices in these industries. Subcontracting requires a lot of reorganization within the client company. Organizing a degree of leadership and “follow-ship” (i.e. the capacity to accept being led by others when required) is a task in itself. Finally, maintaining conditions that allow distributed cognition is a constant challenge that needs to be monitored.

Roberts and Rousseau (1989) coined the expression “having the bubble” to best describe this extraordinary capability to be continually aware of events occurring at different levels. Weick calls it sensemaking activities. In the middle of the 1990s he argued that organizational failure and catastrophic events are best understood as the collapse of collective sensemaking (Weick, 1993). “Mindfulness” and “heedful interrelations” are key properties to maintain collective dedication to safe performance (Weick, Sutcliffe, & Obstfeld, 2008). The quality of the organizational attention on organizing processes is central to their discussion.

Fifth issue: How to best organize the control and supervision of each and every work activity and still count on workers’ willingness to adapt the rules when they are not applicable? At least three levels of control can be identified in these organizations:

  1. workers themselves must perform their first level of control;

  2. first-line controllers check that work activities meet the correct specifications;

  3. a third line of control, based on documents, ensures that both workers and first-line controllers correctly documented the controls they performed on the job.

To ensure these various duties, variations exist, depending on a country’s legislation, but also on concrete organizational options. Some companies hire contractors to perform first-line controls, whereas others entrust teams of in-house, dedicated, rotating workers to perform such controls on their own colleagues. This heavy presence of controls, no matter which organizational configuration is in place, generates risks of its own that deserve more study: too much supervision and control potentially creates lack of autonomy, complacency and resentment, lack of confidence, lack of ownership, and dilution of responsibilities. Obviously, too little, or inadequate, control also has its drawbacks. The question remains: how carefully are these risks and the associated equilibria studied?

Sixth issue: How to best deal with the strict scrutiny of authorities and regulators?

After Chernobyl, it appeared clearly that the impact of regulators, through their demands and recommendations (or the absence thereof), is of crucial importance in the life of heavily regulated, complex socio-technical systems. When reading ex post reports, it is now common to examine the role of regulators. The nature and quality of the relationships between regulatees and regulators is often identified as a contributing factor leading to the accident. How does the “culture of oversight” affect safety in the end (Wilpert, 2008)? Here again, the heavy hand that some regulators might impose could be detrimental to safety by reducing room to maneuver. However, their absence from the scene and their remote access to plants could produce a frustrating and unsatisfactory “paper safety”.

6 Conclusion: Regime Change

These questions cannot be solved definitively. They require constant reflection and need to be regularly re-assessed. I once advocated for the creation of organizational observatories to be able to follow the concrete trade-offs that are being made during daily operations (Bourrier, 2002). The concrete and contingent organizational answers given to these problems significantly affect the social construction of safety.

There is no uniform response, yet most high-risk organizations face the same challenges (though they do not have the same resources to deal with them). What design options can these organizations, and networks of organizations, take; which have they taken; and which are they projecting to take in the future? The production of safety is a direct by-product of the organizational regime in place, which is meant to evolve and is currently becoming more and more distributed and fragmented among networks of organizations and partners. It colors the kind of safety that is contingently produced in the end.

This is why, instead of looking for generic “safety culture” programs and models, it is probably time to accept and welcome the fact that there are different “safety regimes” that ultimately produce safety, none of which is perfect. Some regimes are costly, not only financially but in human terms; others favor strict compliance at the expense of ownership; others promote ownership at the expense of problem sharing and disclosure; some regimes promote compliance and ultimately create apathy among the teams; some regimes promote autonomy and lack transparency, favoring deep pockets of informal, undocumented knowledge; by-the-book compliance can stifle innovation and lead to complacency; favoring ad hoc solutions and bricolages, however innovative, can also favor corner-cutting strategies, and so on. Depending on the strengths and weaknesses of each trade-off, their final combination will, in the end, impact the production of safety and shape the intrinsic qualities of the safety regime in place (cf. Fig. 1).

Fig. 1: Trade-offs within safety regimes

Finally, each regime carries its own possibilities and limitations. Mitigating risks does not mean suppressing them, because this is not possible. However, regularly assessing the concrete responses to the six dilemmas detailed above offers a space to accommodate and welcome the variance one is witnessing throughout industries, companies and plants.

The Harvard Business Review, in its April 2016 edition, ran the headline: “You can’t fix culture: just focus on your business and the rest will follow”. After decades of articles on the importance of building a strong corporate culture as the key to a thriving business, this blunt statement is somewhat refreshing. The same is probably true of “safety culture”: you can’t fix safety culture; just focus on your organization and its risks, and the rest will follow.