
1 A Brief Historical Perspective on Culture and Safety

The impact of culture on the performance of organizations has become a growing concern in western industries as companies have globalized. This has led to the development of a whole line of research, particularly well illustrated by the seminal cross-cultural work of Hofstede (1980, 1991). Defining a culture as

the collective programming of the mind which distinguishes the members of one group from another,

Hofstede tried to identify the impact of national (ethnographic) cultures on organizational (corporate) cultures. His conclusions laid the foundation for a considerable body of work that has examined the role of national cultures in relation to safety, particularly in aviation. A Boeing study (Weener & Russel, 1993) showed that for the years 1959–1992, the proportion of accidents in which the crew was considered a causal factor varied by a factor of about five depending on the region of origin of the airline. Merritt (1993, 1996) replicated Hofstede’s work to explore cross-cultural similarities and differences with respect to attitudes toward flight management and the link to safe operations. Her findings paralleled Hofstede’s in revealing significant differences in attitudes toward authority and in the extent to which people preferred to make decisions individually or by consensus. Helmreich and Merritt (1998) explicitly searched for correlations between national scores on Hofstede’s cultural dimensions and airline accident rates. They found that (only) two of these dimensions were correlated with safety performance: power distance and collectivism/individualism.

Many then concluded that differences in national culture caused pilots from Asia, Africa, or South America to be less safe than those from the USA or Europe. However, the same research also included findings at odds with this view. The 1993 Boeing study also showed that non-western operators did not suffer a higher accident rate than western ones when compared on the same routes. Helmreich and Merritt (1998) clearly rejected the link between national culture and accident rates:

Some authors have correlated national culture with accident rates and concluded that pilots in certain countries are safer than others. We take umbrage with the simplicity of this statement. The resources allocated to the aviation infrastructure vary widely around the globe. {…} Accident rates are a function of the entire aviation environment, including government regulation and oversight, and the allocation of resources for infrastructure and support, not just pilot proficiency (p. 104–5).

To test the relationship between accident rates and infrastructure, Hutchins, Holder, and Pérez (2002) performed a correlational analysis across major regions of the world on common measures of infrastructure quality and a measure of flight safety. They found that flight safety correlates at the 0.97 level with daily caloric intake:

If a nation does not have the wealth required to create and distribute food, it is unlikely to be able to invest in modern radar systems, ground-based navigation and approach aids, runway lighting, weather prediction services, or the myriad other institutions on which safe civil aviation operations depend.

As noted by the authors themselves, this does not mean that culture plays no role in the organization of (safety-related) behavior on the flight deck. But it means that culture is only one of a large number of interacting behavior drivers, so that its relative effects on behavior are unknown and may remain so.

Furthermore, there is a chicken-and-egg issue between culture, infrastructure, and behavior. Culture is commonly seen as a set of shared behavioral attractors (values, beliefs, attitudes) inscribed, as it were, in people’s minds. In this vision, culture shapes behavior. But conversely, people also behave in certain ways because they make sense of their situations, define their own goals to serve their interests, and act accordingly. When environments, goals and interests are similar, behaviors tend to be similar. They reinforce each other through imitation, and crystallize into binding stereotypes that become values and attitudes. In this vision, environments and behaviors generate culture. So culture shapes behaviors, which mold infrastructures, which in turn influence behaviors, which crystallize into culture. They are linked as the ingredients of an autopoietic system (Maturana & Varela, 1980). A forest does not last as a forest merely because its trees reproduce. It permanently regenerates itself through the transformation, destruction and interaction of its components, the network of processes that produce those components, and the environmental conditions needed for its regeneration.

The focus later shifted from national to organizational culture. The idea that organizational or corporate culture—defined as the reflection of shared behaviors, beliefs, attitudes and values regarding organizational goals, functions and procedures (Furnham & Gunter, 1993)—can by itself shape safety behavior, hence safety performance within an organization, is indeed an attractive assumption. However, this definition suffers from the same fundamental ambiguities as ethnographic culture. First, it does not resolve the circularity between culture, behavior, and environments: some definitions include only what people think (beliefs, attitudes and values), while others also include how people act (behaviors). A reference in the latter category is Schein (1990, 1992), who suggested a three-layered model including (i) core underlying assumptions, (ii) espoused beliefs and values, and (iii) behaviors and artefacts. Second, the notion of corporate culture postulates by definition some autonomy from national cultures, but does not define the extent of this independence. Through the reference to shared beliefs and values, it also assumes a certain level of internal cultural consistency within a given organization, but it is not clear how this postulated ‘cultural color’ accommodates obvious internal sub-cultures, i.e. the differences between trades and groups within the same organization. Last but not least, the assumption about its impact on safety is unproven, although plausible.

2 The Birth of “Safety Culture”: Not Rocket Science but a Useful Concept

The reference to the term “safety culture” by the IAEA in the aftermath of the 1986 Chernobyl disaster (INSAG-1, 1986; INSAG-4, 1991; INSAG-7, 1992) can be seen as a further attempt to clarify the link between culture and safety. Safety culture was defined as

that assembly of characteristics and attitudes in organizations and individuals which establishes that, as an overriding priority, nuclear plant safety issues receive the attention warranted by their significance.

The number of available definitions in the academic and corporate literature shows both the success of the concept and its ambiguities. The common point of these definitions is that safety culture is the sub-set of corporate culture that influences safety (this establishes a link to safety, at least in theory). As with corporate culture, these definitions mainly differ according to whether or not they include behavioral patterns. The IAEA definition belongs to the “non-inclusion” family, and mainly reflects the concern for and commitment to safety (usually called safety climate). The other family of definitions includes behavioral patterns and reflects both the commitment and the competence to manage safety. The definition given by the UK Health and Safety Commission is a prominent representative of this family:

(Safety culture is) the product of individual and group values, attitudes, competencies, and patterns of behavior that determine the commitment to, and the style and proficiency of, an organisation’s health & safety programmes. (HSC, 1993)

The difficulty with the former family is that (safety) behaviors do not result from cultural influence only. The difficulty with the latter is that cultural influence does not determine (safety) behaviors in a straightforward, deterministic way. Managers tend to prefer the latter, because it explicitly refers to what they seek to influence: behaviors.

It is difficult to manage something if it is not assessable, but the assessment of safety culture poses further challenges. As noted by Hutchins et al. (2002):

{…} in order to assess the value of a culture to {…} safety, one would have to cross all available cultural behavior patterns with all conceivable {…} circumstances. In every case, one would have to measure or predict the desirability of the outcome produced by that cultural trait in that particular operational circumstance. Constructing such a matrix is clearly impossible.

Instead, use is made of surveys that measure attitudes or self-reported behaviors against attitudes or behaviors that safety experts have considered to lead to desirable (or undesirable) safety outcomes. As the industry lacks the observational data to match attitudes with real behaviors in operational contexts, and even more to match behaviors with safety outcomes, this kind of assessment grid is merely a mirror of the questionnaire designers’ current vision of the influence of attitudes and behaviors on safety in their own culture. In other words, safety culture can hardly be regarded as a scientific concept, and when it comes to assessing it, safety culture is more or less implicitly defined as “what is measured by my survey”.
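To make the point concrete, the sketch below shows how such a survey typically turns Likert answers into a “safety culture score”: items are grouped into dimensions chosen by the questionnaire designers, negatively worded items are reversed, and answers are averaged. The dimensions, items and scoring rule here are purely hypothetical, not taken from any real questionnaire.

```python
from statistics import mean

# Hypothetical questionnaire structure: dimension -> list of (item_id, reverse_scored).
# Both the dimensions and the items are illustrative, not from any actual survey.
DIMENSIONS = {
    "management_commitment": [("Q1", False), ("Q7", True)],
    "reporting_climate": [("Q3", False), ("Q9", False)],
}

def dimension_scores(responses, scale_max=5):
    """Average Likert answers (1..scale_max) per dimension, reversing negatively worded items."""
    scores = {}
    for dim, items in DIMENSIONS.items():
        values = []
        for item_id, reverse in items:
            answer = responses[item_id]
            values.append(scale_max + 1 - answer if reverse else answer)
        scores[dim] = mean(values)
    return scores

# One (hypothetical) respondent
print(dimension_scores({"Q1": 4, "Q7": 2, "Q3": 5, "Q9": 3}))
# -> {'management_commitment': 4.0, 'reporting_climate': 4.0}
```

Changing the grouping or the reversal flags changes the resulting “culture score” without any change in the underlying answers, which is precisely the sense in which the measure mirrors its designers’ assumptions.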

3 Safety Culture and Safety Paradigms

It does not follow that ‘safety culture’ is an irrelevant or unworkable concept for safety management. Safety culture assessment surveys provide an interpretation of behavior-related safety issues (Cooper, 2000). Taken with due precaution considering their conceptual ambiguity, these “quantified” pictures can be an effective starting point for discussing behavioral dimensions of safety management within an organization. Indeed, the assessment process can actually start with the efforts to interpret the outcomes of the survey. The apparent “objectivity” of the survey results, discussed during interviews and focus groups, helps the organization’s members to step back and look at themselves as if in a mirror. Even if the mirror is highly distorted, it triggers the perception of an image—or a caricature—of the organization. The collective sense-making process about this image can bring about the questioning needed and the potential triggers for a change.

In my experience, a key issue is that these pictures do not offer much reliable and objective meaning per se: the answers to many typical survey questions can lead to several different, plausible and often contradictory, interpretations. For example, the following assertions are extracted from a Eurocontrol safety culture questionnaire: “Sometimes you have to bend the rules to cope with the traffic”; “Balancing safety against other requirements is a challenge”; “I am pulled between safety and providing a good service”. Would disagreement mean adherence to rules and giving a high priority to safety, hence a “good” safety culture, or would it mainly represent a high degree of jargon and an unrealistic perception of actual safety challenges? The arbitration between these alternative interpretations requires additional data about real work, behaviors and infrastructures, at the scale of a work group. It also requires an interpretation grid—a gauge—to make sense of the answers for the different trades. The variability across trades within the same organization may well be much higher than the variability across organizations within the same trade. Hence I am skeptical about the meaning of cultural benchmarks based on the same questionnaire across different organizations.

Safety culture questionnaires necessarily convey underlying and implicit assumptions about what enables an organization to stay in control of its safety risks. Usual assumptions include the commitment of managers and staff to safety, clear and strictly obeyed rules and procedures, open and participatory leadership, good synergy within teams, open communication between colleagues and across hierarchical layers, and transparency about failures and incidents. All these assumptions reflect the opinions of safety experts about which attitudes and behaviors lead to desirable safety outcomes. They appear rational and consistent with common sense. They appeal to managers because they are manageable, and in line with the established order: management is responsible for designing and defining the “right” behaviors, leading and “walking the talk”; front line operators are responsible for complying with the prescriptions and reporting difficulties and failures. They refer to safety indicators based on measurable and controllable event frequencies. They have made the fortune of DuPont and a few others.

However, they lack an explicit reference to a clear safety paradigm. They describe organizational (managerial, cultural) features that are expected to foster safety, but they do not explicitly state the underlying beliefs about what makes a system safe. They address the syntax of safety management rather than its semantics. As a consequence, these assumptions are difficult to “falsify” by factual evidence and, like the paradigms described by Kuhn (1996), tend to become unquestionable dogmas. The famous assertion of a constant ratio between unsafe behaviors, minor injuries, and fatal accidents (Heinrich, 1931; Bird & Germain, 1985) is a first example. This belief has been used worldwide throughout industry for decades to try to prevent severe accidents by chasing daily noncompliance and minor incidents. Yet it has been refuted by many researchers (Hopkins, 1994, 2005; Hovden, Albrechtsen, & Herrera, 2010) and characterized as an “urban myth” by Hale (2000). A recent study conducted by BST & Mercer (Krause, 2012; Martin, 2013) on occupational accidents in seven global companies (ExxonMobil, Potash Corp, Shell, BHP Billiton, Cargill, Archer Daniels Midland Company and Maersk) clearly shows that fatal and non-fatal accident rates evolved independently over the period studied. Barnett and Wang (1998) reached similar conclusions about the link between airline incident rates and the mortality risk of passenger air travel over a decade (1987–1996) of US flight operations. In plain language, this means that strategies for preventing severe accidents based on the Bird pyramid are at least partially flawed and ineffective, whatever their intuitive attractiveness and commercial success.
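As a purely illustrative sketch (with invented numbers, not data from the studies cited above), the snippet below shows the kind of check these studies rely on: if a fixed pyramid ratio held, yearly minor-incident rates and fatal accident rates should be strongly positively correlated.

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical yearly rates (per million exposure hours) for one company:
# minor incidents steadily driven down, fatalities essentially flat and noisy.
minor_incidents = [410, 395, 360, 330, 300, 280, 260, 240]
fatal_accidents = [0.8, 0.6, 0.9, 0.7, 1.0, 0.6, 0.9, 0.8]

r = correlation(minor_incidents, fatal_accidents)
print(f"Pearson r = {r:.2f}")
# A weak or near-zero r (about -0.2 with these invented numbers) is what the cited
# studies report in real data, contradicting a fixed-ratio pyramid.
```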

A second example is the moral posture embedded in most safety culture assessments concerning errors and violations. The acceptance that “errors are inevitable” is seen as a positive safety culture trait, while tolerance of intentional deviations is seen as very negative. However, the respective contributions of errors and violations to the safety risk, when quantified, did not necessarily support this judgement. Within the Line Operations Safety Audit (LOSA) program in aviation (Helmreich, Klinect, & Wilhelm, 2000), specifically trained senior pilots observe, from the jump seat, anonymous crews managing safety risks during real flights, and assess the risk generated by external and internal threats, actions, and inactions in the various situations faced. Not surprisingly, deviations could be observed on 68% of flights, and the most frequent were violations. More interesting is the assessment of the associated risk: only 2% of the violations were classified as consequential, in contrast with 69% of proficiency-related errors. As this was not in line with the dominant beliefs in aviation—violations must matter—further analyses were conducted to demonstrate that

those who violate place a flight at greater risk. {…}. We found that crews with a violation are almost twice as likely to commit one of the other four types of error and that the other errors are nearly twice as likely to be consequential.

Interestingly, the reverse hypothesis that violations could be a consequence of errors (e.g. attempts to mitigate errors) has not been envisaged….
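For readers unfamiliar with this style of argument, the sketch below reproduces the form of the “twice as likely” comparison with entirely hypothetical counts: it contrasts the proportion of consequential outcomes between flights with and without an observed violation. As noted above, an association of this kind says nothing about the direction of causality.

```python
def relative_risk(events_a, total_a, events_b, total_b):
    """Risk ratio of group A versus group B."""
    return (events_a / total_a) / (events_b / total_b)

# Hypothetical observation counts (not LOSA data)
flights_with_violation, consequential_with = 300, 30
flights_without_violation, consequential_without = 200, 10

rr = relative_risk(consequential_with, flights_with_violation,
                   consequential_without, flights_without_violation)
print(f"Relative risk = {rr:.1f}")  # 2.0: "twice as likely" in this invented sample
# Note: such an association alone cannot tell whether violations lead to errors
# or whether violations are attempts to recover from prior errors.
```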

4 Safety Management Modes

There is no single strategy for making a system safe that would work regardless of the system, its design, its business model and its environment (Amalberti & Vincent, 2014; Grote, 2012). The Bird triangle may work in some contexts and not in others. Total compliance with procedures may be an absolute safety condition in some contexts, and a threat in others. So there is a need for a generic grid of safety management strategies, which would allow for, and make sense of, different weights of the syntactic dimensions of safety culture (compliance, transparency, autonomy, accountability…). Safety management is inseparable from uncertainty management (Wildavsky, 1988; Westrum, 2006; Grote, 2007). It is also totally dependent on the way an organization generates, through its different layers and through its design, the behaviors that maintain the system in a safe state. The observed diversity of safety management strategies can hence be seen as the result of a combination of two key features: (i) the nature and level of predetermination in the management of uncertainty, and (ii) the nature and level of centralized control over front line operators.

These two features are usually considered interdependent—hence they are merged—in safety management theories. Anticipation and predetermination are considered to imply a centralized and hierarchical bureaucracy with a high level of control over operators. Conversely, resilience and responsiveness would imply a flexible, self-organizing organization. Amalberti (2001, 2013) describes a linear continuum of safety management modes ranging from “resilient” systems to highly normalized “ultra safe systems”. Journé (2001) suggests that the articulation of uncertainty management and organizational features leads to the definition of two “safety management systems”: a mechanistic model, based on rational anticipation and bureaucratic organizational control, and an organic model, based on resilience, a decentralized organization, and the self-organizing capacities of autonomous teams. Grote (2014) proposes a more sophisticated correspondence grid between uncertainty management strategies (reducing/absorbing/creating uncertainty) and organizational control modes over operators. However, her approach still seems to be based on interdependent associations between uncertainty management modes and organizational features. My contention is that these two features are much less interdependent than usually assumed. Instead, they define two independent dimensions, hence a two-dimensional space that can be summarized with four main combinations, defining the four basic safety management modes illustrated by Fig. 1.

Fig. 1 Basic safety management modes

In quadrant 1, a combination of high predetermination and strong organizational control enables centralized risk management. The system is designed to be safe, and the strategy is to stay within its designed-to-be-safe envelope, which is continuously refined and expanded through in-service experience feedback and quality improvement loops. Predetermination of responses, planning, compliance with norms and standards, as well as hierarchical control, reduce many of the dimensions of variability. Front line operators are highly standardized through selection and training, and are interchangeable. The power and responsibility for safety belong to the central organization.

In quadrant 2, in contrast, a combination of low predetermination and low organizational control leaves each frontline operator or team with the responsibility for managing the trade-offs between safety and performance. These are generally open systems, operating in an environment characterized by a high level of unpredictability. Their responses cannot be easily predetermined or standardized. Norms and regulations are only partially effective for safety. They must be complemented by strong adaptation expertise. Safety mainly emerges from adaptive processes and self-organization. The power and responsibility for safety belong to front line managers and operators.

In quadrant 3, strong hierarchical organizational control is exerted over front line operators (by means of authority, intensive training, strict compliance with rules and procedures…). But operations need to be highly adaptive, because the possibilities for anticipation are limited, due to a high degree of uncertainty in the situations faced. In these systems, a highly effective, “maestro”-type operational hierarchy evaluates situations, makes decisions and adapts the responses, commanding highly trained and disciplined front line actors who act in a tightly coordinated and standardized way. The power and responsibility for safety mainly belong to the operational commanders.

Finally, in quadrant 4, a combination of high predetermination and low organizational control allows the system to operate in an environment with strong constraints of operational conformity, while handling high variability in the details of operational situations. The decision power is delegated to local structures directly coupled to real time activity. But their overall behavior is to a large extent predetermined. These are highly cooperative systems, in which global behavior emerges from networking the activity of multiple autonomous cells. Front line players have similar skills, they follow rules and procedures, but their real time behavior is controlled by a strong team culture. The power and responsibility for safety mainly belong to operational teams.
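As a rough sketch of this two-dimensional space (the numeric scales, threshold and quadrant labels below are mine, purely for illustration, and not the author’s terminology), the two dimensions can be treated as independent scores and combined into one of the four modes:

```python
from enum import Enum

class SafetyManagementMode(Enum):
    QUADRANT_1 = "high predetermination, strong central control"
    QUADRANT_2 = "low predetermination, low central control"
    QUADRANT_3 = "low predetermination, strong central control"
    QUADRANT_4 = "high predetermination, low central control"

def classify(predetermination, central_control, threshold=0.5):
    """Map two independent dimensions (each scored 0..1) onto one of the four quadrants."""
    high_pre = predetermination >= threshold
    high_ctrl = central_control >= threshold
    if high_pre and high_ctrl:
        return SafetyManagementMode.QUADRANT_1
    if not high_pre and not high_ctrl:
        return SafetyManagementMode.QUADRANT_2
    if not high_pre and high_ctrl:
        return SafetyManagementMode.QUADRANT_3
    return SafetyManagementMode.QUADRANT_4

# e.g. a business unit assessed as highly proceduralized but loosely controlled centrally
print(classify(predetermination=0.8, central_control=0.3))
# -> SafetyManagementMode.QUADRANT_4
```

The point of keeping the two scores separate is exactly the contention made above: a unit can sit high on one dimension and low on the other, which a single linear continuum cannot express.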

Figure 1 also gives a few examples of potential representative domains of activity for each of these safety management modes. However, it is important to note that these examples must be taken as a caricature of a much more complex reality. They only refer to the dominant safety philosophies in each area. In fact, the different components or business units of a large organization would spread across several modes as illustrated by Fig. 2.

Fig. 2 Illustrative representation of a given organization

5 Safety Culture and Safety Management Modes

A variety of safety management modes should be regarded as both normal and desirable within a large organization. Indeed, the balance between predetermination and adaptation should be coherent with the actual level of endogenous and exogenous uncertainty. And the organizational control over individual safety behaviors must be coherent with the social realities of the organization: the management of safety cannot be based on a type of social relationship significantly diverging from the overall management style. But the levels of uncertainty may be very different from one activity to another, even for apparently similar activities. For example, in Air Traffic Control Services, aerodrome control must handle much more uncertainty than en route control, because it needs to accommodate general aviation and private pilots. Similarly, the power distribution between trades or across hierarchical levels, usually resulting from a long confrontational history, may be very different within the various components of an organization. Hence senior management should recognize the need for the corresponding diversity, and explicitly foster it rather than try to reduce it.

But then, should there be a corresponding diversity within the organization’s safety culture? What is the relationship between safety culture and safety management modes? It is not a one-to-one relationship. In the long term, each safety management mode tends to generate its own sub-culture. However, conversely, organizational cultures tend to persist for a long time, and may prevent the development of safety culture traits consistent with an emerging safety management mode. As can be readily observed during mergers, cultural misalignments can persist for years within the resulting company. Similar cultural misalignments can be encountered within a given company, between central and regional structures, corporate level and business units, trade or front line practices and managerial expectations. They manifest themselves through latent conflicts, of which this is a typical example: managers invoke safety to try to reinforce their authority over their staff; symmetrically, staff resist procedures and unions defend indefensible unsafe behaviors as a weapon of resistance against authority.

Beyond this, a significant part of the underlying safety management values and rationales is imported from national beliefs and demands, as well as from international standards. The currently dominant vision of the “ideal” safety management mode in these standards is a soft (non-punitive and very Anglo-Saxon) version of the normative-hierarchical mode. However, evolving toward this mode does not necessarily mean a safer system or a march toward higher safety culture maturity. Despite its undeniable contribution to historical safety progress, the “total predetermination” strategy has also shown its limitations. Recent catastrophic accidents (AF447, Fukushima, Deepwater Horizon) have illustrated the increasing vulnerability of large sociotechnical systems to the unexpected and the need for a refined safety paradigm. However, two powerful socio-cultural mechanisms continue to feed the trend towards more norms and compliance. The first is the dominant “positivist” culture of designers and managers, who perceive safety as the result of a deterministic, top-down, command-and-control process. The second is the increasing pressure of legal liability on the different players, including policy makers, requiring everyone to demonstrate that “risks are under total control”. So people may develop a vision of safety based on nit-picking compliance, not so much because they rely on objective safety performance outcomes in their activity domain, but rather because they seek to minimize their liability.

In brief, safety culture inevitably and inextricably incorporates dimensions of organizational and national cultures that do not directly emerge from the reality and rationality of safety management modes, and can even be in conflict with them. Coherence is desirable, but conflict is not necessarily a bad thing. As discussed earlier, the culture-behavior-performance relationship is not a linear one. As with the bow, the strings and the violin, it is rather a resonance, whose equilibrium point cannot be foreseen. Tension and friction are needed to play music. And even a dose of bluff: one must sometimes ‘preach the false to learn the truth’, demand total obedience to get intelligent compliance, value errors to build confidence. Hence a safety policy should not be based only on beliefs, but also on facts. In the semantics of safety cultures, evidence-based safety management should frame, if not replace, assumptions and dogmas. This in turn implies that, in each activity domain, ‘work as really done’ is properly assessed and relevant metrics are developed—and implemented—to measure things such as the level of uncertainty, the contribution of non-compliance to safety risk, or the statistical correlation between the frequency of small deviations and the likelihood of disaster.

What is at stake behind the notion of uncertainty is not only its extent, but also its very nature. In simple systems the impact of events is well known, so decisions only depend on the probability of occurrence, which is usually well described by Gaussian distributions. In complex systems, the probabilistic structure of randomness may be unknown or misjudged (the distribution tail may be much thicker than expected), and there is an additional layer of uncertainty concerning the magnitude of the events. In this case the risk associated with the unexpected may be far greater than the known risk. Focusing on anticipation and failing to “manage the unexpected” thus echoes the story of the drunk looking for his lost keys under the lamppost “because the light is much better here”. Ironically, safety management—which is about managing uncertainty—may eventually be impaired by the illusory byproduct of its success: a rising culture of certainty.
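A small numerical sketch (with arbitrary, purely illustrative parameters) shows why a misjudged tail matters: an event six “standard units” away is practically impossible under a Gaussian model, but far from negligible under a heavy-tailed model such as a Pareto distribution.

```python
import math

def gaussian_tail(k):
    """P(X > k) for a standard normal variable."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def pareto_tail(k, alpha=2.0, x_min=1.0):
    """P(X > k) for a Pareto(alpha) variable with scale x_min (a heavy-tailed model)."""
    return (x_min / k) ** alpha if k >= x_min else 1.0

k = 6.0
print(f"Gaussian P(X > {k}) = {gaussian_tail(k):.1e}")  # about 1e-9: 'practically impossible'
print(f"Pareto   P(X > {k}) = {pareto_tail(k):.1e}")    # about 3e-2: clearly not negligible
```

Under the heavy-tailed model the same “extreme” event is several million times more likely, which is the gap between the known risk and the risk of the unexpected.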

6 Conclusion

The concept of ‘safety culture’ may not be a scientific one, but this does not preclude it from being quite useful for the management of safety. Safety culture assessment surveys provide an interpretation of behavior-related safety issues. Taken with due precaution, this can be an effective starting point for discussing the behavioral dimensions of safety management within an organization, in order to initiate a change. However, even from this pragmatic perspective, the usual understanding of the concept describes organizational features expected to foster safety, but does not explicitly state the underlying beliefs about the strategy that keeps the system safe. Yet, there is no single strategy to make a system safe. Depending on endogenous and exogenous uncertainty, there must be a variety of strategies providing different trade-offs between predetermination and adaptation, as well as different ways of exerting control over the behavior of front line operators. Coherent combinations define safety management modes.

But there is no one-to-one matching between safety management modes and safety cultures. A safety culture inevitably incorporates ‘local’ as well as organizational and national dimensions that do not directly emerge from the rationality of safety management modes, and can even be in conflict with them. This is not necessarily a problem, but an arbiter is needed, and factual evidence is the best one. In the semantics of safety cultures, evidence-based safety management should take priority over assumptions. This in turn implies that, in each area of activity, ‘work as done’ is properly understood and relevant metrics are developed and implemented in order to measure the level and nature of uncertainty, as well as the correlation between the frequency of small deviations and the likelihood of disaster.