Abstract
Organisational culture is assumed to be a key factor in large-scale and avoidable institutional failures (e.g. accidents, corruption). Whilst models such as “ethical culture” and “safety culture” have been used to explain such failures, minimal research has investigated their ability to do so, and a single and unified model of the role of culture in institutional failures is lacking. To address this, we systematically identified case study articles investigating the relationship between culture and institutional failures relating to ethics and risk management (n = 74). A content analysis of the cultural factors leading to failures found 23 common factors and a common sequential pattern. First, culture is described as causing practices that develop into institutional failure (e.g. poor prioritisation, ineffective management, inadequate training). Second, and usually sequentially related to causal culture, culture is also used to describe the problems of correction: how people, in most cases, had the opportunity to correct a problem and avert failure, but did not take appropriate action (e.g. listening and responding to employee concerns). It was established that most of the cultural factors identified in the case studies were consistent with survey-based models of safety culture and ethical culture. Failures of safety and ethics also largely involve the same causal and corrective factors of culture, although some aspects of culture more frequently precede certain outcome types (e.g. management not listening to warnings more commonly precedes a loss of human life). We propose that the distinction between causal and corrective culture can form the basis of a unified (combining both ethical and safety culture literatures) and generalisable model of organisational failure.
Introduction
Scholars have long been interested in the role of culture as a causal factor in institutional failures, defined as a significant physical, financial, or social loss (Perrow 1999; Rasmussen 1997; Reason 1990; Turner 1978; Vaughan 1999). Institutional failures can be diverse in nature (e.g. accidents, scandals, bankruptcies), and culture is used to explain the shared values, beliefs, and assumptions which guide behaviour within an organisation and lead to poor outcomes (Schein 1984; Schneider et al. 2013; Ouchi and Wilkins 1985). Research on the cultural factors that lead to organisational failure has, largely, coalesced into two distinct paradigms: safety culture and ethical culture. These, respectively, examine how the management of risk and ethics within an organisation shape attitudes (e.g. of employees towards incident reporting or whistleblowing) and practices (e.g. risk-taking, unethical conduct) that contribute to large-scale failures (e.g. accidents, corruption) (e.g. Cooper 2000; Guldenmund 2000; Kaptein 2008). However, research on the extent to which theories of safety culture and ethical culture explain why organisational failures occur, and on whether they have identified the key psychological dimensions that account for problematic behaviour, remains nascent. This is because studies of safety culture and ethical culture have tended to be prospective, for example using cross-sectional surveys to examine the relationship between employee beliefs (e.g. on norms for safe and ethical conduct) and behaviours (e.g. safety compliance, reporting ethical breaches). The role of safety culture and ethical culture in causing organisational failures is less well-established, and we investigate this in the current article through undertaking a systematic review of case study analyses using culture to understand institutional failures. We examine the utilisation and similarity of concepts from the safety and ethical culture literature to explain these failures, and propose a broader and more generalisable model on the role of causal and corrective organisational culture in institutional failures.
Organisational Culture and Its Relationship with Institutional Failure
There are pockets of consensus regarding how to define organisational culture. It is generally accepted that culture constitutes a relatively stable and shared system of values, beliefs, and assumptions which provides approved modes of thought and behaviour, is resistant to change, and is maintained through social interaction (Schall 1983; Schein 1984; Schneider et al. 2013). Schein (1984) further suggests culture is stratified by different levels of meaning, where the deepest level comprises the underlying and pervasive assumptions which organisational members tacitly accept, the intermediary level comprises what they espouse to believe, and the highest level consists of visible or audible patterns of behaviour and artefacts which are a manifestation of the other levels. Studies of culture divide according to whether they orient ethnographically to organisations “as cultures,” or measure culture through surveys and questionnaires as a variable or “something an organisation has” (Smircich 1983, p. 347, original emphasis).
Various attributes and types of culture have been associated with financial performance (e.g. Denison 1984; O’Reilly et al. 2014). Barney (1986) suggests culture can afford a sustained competitive advantage if characterised by uncommon qualities which cannot be imitated by other organisations. Similarity in survey responses, as an indicator of cultural strength, has also been linked to performance (e.g. Denison 1990; Gordon and DiTomaso 1992). However, as Reason (1998) highlights, the very same processes of internal integration and external adaptation which maintain group cohesion and thus comprise the core function of a culture (Schein 2010) can threaten an organisation’s survival when applied to goals which undermine good practice. Namely, through normalising maladaptive behaviour, “cultures create problems as well as solving them” (Kroeber and Kluckhohn 1952, p. 57). This duality corresponds to the sub-field of sociology which draws on Merton (1936, 1940, 1968) and Durkheim (1895/1966) to investigate how the same processes which produce positive organisational outcomes are also responsible for the ‘dark side’ that generates mistakes, misconduct, and disaster (Vaughan 1999).
Institutional failure is “a physical, cultural, and emotional event incurring social loss, often possessing a dramatic quality that damages the fabric of social life” (Vaughan 1999, p. 292). Concretely, it is typically used to refer to large-scale avoidable failures, for example accidents (e.g. Chernobyl, Deepwater Horizon) or scandals (e.g. Enron, Barings Bank), that have consequences for those within an organisation (e.g. employees), stakeholders (e.g. passengers, investors, the public), the environment (e.g. pollution), and integrity of an institution itself (e.g. collapse, or huge reputational damage). Within the diverse conceptual models that are used to explain failure, for example by Turner (1978; Turner and Pidgeon 1997), Reason (1990, 2016), Perrow (1984, 1999), and Rasmussen (1997), organisational culture is often a key element.
Turner (1978) was the first to describe failure as a socio-technical phenomenon, rather than an event which is divine, coincidental, or purely technical (Turner and Pidgeon 1997). He conducted a systematic qualitative analysis of 84 British accident and disaster reports published between 1965 and 1975, developing a six-stage developmental sequence model of failure (‘man-made disaster’) in which failure is preceded by several preconditions that develop during a ‘disaster incubation period.’ The incubation period involves the slow ‘accumulation’ of events which deviate from the culture’s beliefs and norms regarding hazards. This accumulation can continue for many years and is enabled by people’s incorrect assumptions about hazards, problems in information-handling, rigidities of perception, and inappropriate or outdated formal procedures. The incubation period ends when a ‘precipitating incident’, such as an explosion, fire, or plunge in share prices, exposes the actual state of affairs. Culture is at the centre of Turner’s model, which equates failure, sociologically, to a cultural collapse (Pidgeon and O’Leary 2000). Yet, culture operates at a meta-level, in the divergence between what people believe to be an accurate perception of affairs and what is actually true. This highlights how disaster occurs despite people believing they are taking the necessary precautions against failure, but it does not identify common ways in which disaster is precipitated by specific cultural problems.
Reason’s (1990, 2016) model of accident causation also considers the role of organisational culture. Reason posits an organisation’s layers of defence are somewhat akin to layers of Swiss cheese: each has gaps representing weaknesses, through which an accident ‘trajectory’ can pass if gaps momentarily align. Defence weaknesses are constantly moving, making their alignment—and thus failure—a rare occurrence (Reason 1998). Reason makes the useful distinction between the errors or violations at the ‘sharp-end’ of operations which ‘trigger’ failure (‘active failures’)—the final slice in the Swiss cheese model—and the latent error- and violation-producing conditions. Latent conditions include for example an emphasis on cost-cutting (e.g. understaffing), aspects of organisational structure and how business is conducted, inadequate hardware in terms of tools and equipment, poor system design, and procedures which are unclear or not applicable. They persist undetected and are a product of culture, as well as management decisions, and organisational processes. However, only culture is ubiquitous enough to influence all aspects of defence. Culture “can not only open gaps and weaknesses but also—and most importantly—it can allow them to remain uncorrected” (Reason 1998, p. 297).
Culture is less focal in Perrow’s (1984, 1999) theory of normal accidents which suggests failure is a normal and unpreventable part of organisational systems working with high-risk technology (e.g. nuclear power plants, air traffic control) because they are characterised by ‘tight coupling’ (e.g. little scope for slack, delays, and alternative procedures) and high ‘interactive complexity’ of system components. Perrow gives external forces of capitalism a larger role in failure than culture. He notes,
If culture plays a role, as many argue it does (…), it is not the most important one, and while efforts to change the culture to one that favors high reliability operations are certainly of high priority, restricting the catastrophic potential of our enterprises is of higher priority. (…) Rather than look to national cultures, or even to cultures of companies and the workplace, we might look at plain old free-market capitalism (Perrow 1999, p. 416, original emphasis).
Perrow defines culture in positive terms as a source of reliability, and thus does not associate culture with the negative potential of adverse outcomes. Instead, this potential is attributed to external production pressures of the outside economic system. However, the ways in which an organisation manages the two opposing goals of efficiency and safety can shed light on what is valued and tolerated within an organisation (i.e. culture), and would explain why not all those operating in the same economic conditions experience failure.
Rasmussen (1997) includes culture as a possible preventative mechanism. According to Rasmussen (1997), organisations are constantly under pressure to maintain an acceptable workload, perform safely, and avoid economic failure. These pressures are depicted as boundaries, with the organisation at the centre, managing the tensions between them. The organisation moves toward the ‘functionally acceptable’ (i.e. safe) performance boundary when it reduces employees’ workload or increases productivity. Errors and accidents occur if the organisation moves so far that it crosses the boundary of functionally acceptable performance. Rasmussen proposes that informational campaigns for safety culture can counter the pressures of efficiency and thus improve control of performance. Safety culture is defined in terms of people’s knowledge of the boundary of functionally acceptable performance. Like Perrow’s theory, this highlights the positive role of culture in fostering an awareness of risk and danger, but does not highlight the negative contributing aspects of culture.
In conclusion, seminal models of organisational failure view failure as something which emerges gradually and sequentially through several contributing factors (Reason 1990; Turner 1978). Organisational culture permeates these models by providing an explanatory framework for understanding the drivers of behaviour within an organisation (e.g. the values that underlie, and are expressed through, cost-cutting, incentivisation, system design, and procedures), and by explaining how norms and values towards risk (e.g. normalisation and tolerance) determine how managers and employees identify and respond to hazards. Subsequent research on the role of organisational culture in institutional failures has tended to focus on two domains: safety culture and ethical culture.
Safety Culture and Ethical Culture
Safety culture and ethical culture are the main cultural dimensions applied to investigate the relationship between culture and institutional failures (Cooper 2000; Guldenmund 2000; Kaptein 2008; Treviño and Weaver 2003). Although they focus on different sets of values (e.g. the importance of safety, adhering to ethical standards), behaviours (e.g. risk-taking, dishonesty), and outcomes (e.g. accidents, scandals), safety culture and ethical culture have many parallels in how they are used to explain organisational failures. For instance, both stress the importance of senior leadership in setting standards, of supervisors in guiding behaviour, of organisational and group norms in determining what practices are acceptable, of giving employees the knowledge and skills to behave effectively, and of ensuring employees can speak up (and are listened to) when they raise concerns (Ardichvili and Jondle 2009; Guldenmund 2000; Kaptein 2011; Neal and Griffin 2002; Reader and O’Connor 2014; Zohar 2010). However, both models diverge in their origins, and in the variables they use to explain organisational failures.
Interest in safety culture came from a shift in focus from models of causation to how crisis and risk management might be improved to provide institutional resilience (Pidgeon and O’Leary 2000). Safety culture relates to the norms and practices surrounding health and safety within an organisation (Cooper 2000; Guldenmund 2000), and is highly related to safety climate (perceptions of the priority of safety) (Zohar 2010). Pidgeon and O’Leary (2017) suggest a ‘good’ safety culture is characterised by senior management’s commitment to safety, a shared concern for hazards and how they impact people, realistic norms and procedures for managing risk, and continual processes of reflection and organisational learning. Safety culture gained traction in the 1980s to account for large-scale failures such as the Chernobyl nuclear disaster (Pidgeon 1998) and the Piper Alpha oil rig explosion (Paté-Cornell 1993), where shared patterns of belief and behaviour were found to have played a significant role in the disasters. In both cases, a prioritisation of other concerns (e.g. productivity) by senior managers led to operational decisions that weakened safety (e.g. on the use of resources, reduced safety inspections, pushing safety capabilities), the normalisation of unsafe practices (e.g. unsupervised staff undertaking maintenance routines), and a lack of preparedness for managing safety emergencies. By focussing accident investigations on the system and cultural context of organisations, safety culture theory departed from earlier research which had attributed accidents to fallible mental processes (e.g. forgetfulness, negligence) in individuals working directly with the system (Reason 2000). Rather, safety is understood as part of an organisation’s value-system, with the consideration of safety in everyday practices being a product of, and revealing, these values (Guldenmund 2000).
Interest in the domain of ethical culture has come about due to organisational failures that are considered a consequence of unethical conduct (e.g. the Enron, LIBOR, and Odebrecht scandals). Similar to safety culture, the concept of ethical culture emerges from the rationale that unethical acts within an organisation are likely to reflect organisational values for ethical conduct, rather than individual failings. This diverges from the perspective that unethical behaviour is determined by individual factors such as a person’s sensitivity to moral issues (Rest 1979), level of moral judgement (Kohlberg 1969), and guilt proneness (e.g. Cohen et al. 2012). Within the ethical culture framework, ethical behaviour is conceptualised as determined by immediate job pressures, institutional values and norms on the importance of ethics (e.g. for indicating the appropriateness of behaviours), and the embedding of these values into formal systems (e.g. rules and policies) (Treviño 1986; Treviño et al. 2014). As with safety culture, ethical culture is conceptualised as a subset of organisational culture, with specific domains of activity—for instance on transparency or the sanctioning of unethical behaviour—constituting an ethical culture (Kaptein 2008; Kish-Gephart et al. 2010). Furthermore, ethical culture is influenced by the values and behaviours of leaders (see Ardichvili and Jondle 2009), and is associated through various dimensions with reported (un)ethical behaviour and intentions (e.g. Kaptein 2011; Sweeney et al. 2010; Zaal et al. 2019). Several contributing factors to ethical culture have been identified, including an organisational commitment to employees (Fernández and Camacho 2016), the presence of ethics programmes such as a dedicated ethics department or committee (Martineau et al. 2017), the role-modelling of ethical practices by management (Kaptein 2011), and reward systems that reinforce ethical behaviour (Treviño et al. 1998). Ethical climate, understood as the perceptions of organisational values for ethics (Victor and Cullen 1988), has been distinguished from ethical culture, which relates more to the systems of control that shape ethical behaviour (Kaptein 2011; Treviño et al. 1998). As the dimensions of ethical culture and ethical climate are highly related (Treviño et al. 1998), we, like others (e.g. Ardichvili and Jondle 2009), regard ethical climate as a sub-category of ethical culture.
The fields of both safety culture and ethical culture have arisen to explain organisational failures, and to investigate this, researchers in both fields have relied on psychometrically validated surveys (Guldenmund 2007; Kaptein 2008). These are used to measure employee perceptions of the culture (e.g. management commitment to safety, reporting safety incidents, clarity of expected conduct, reporting ethical concerns), with responses being associated at an individual, unit, or organisational level with adverse outcomes. For instance, research shows safety culture to be associated with safety behaviours, reporting, and lost-time injuries (Beus et al. 2016; Christian et al. 2009; Petitta et al. 2017), and ethical culture to be associated with ethical choices and reports of unethical behaviours (e.g. intentions to report misconduct) (Schaubroeck et al. 2012; Kish-Gephart et al. 2010; Kaptein 2011). Associations between safety culture and ethical culture and larger-scale organisational failures (e.g. corruption, process safety failures) are absent due to their rarity (e.g. in comparison to individual reports on behaviour), unpredictability (e.g. in identifying and accessing a failing organisation), and the challenges of expecting employees to recognise and report on sensitive topics like safety and ethics (e.g. Antonsen 2009a; Arnold and Feldman 1981; Fischer and Fick 1993). Furthermore, whilst survey methods have provided valuable insight on ‘what’ cultural dimensions are associated with adverse outcomes, they have not necessarily shown ‘how’ various aspects of culture interact to create the conditions for failure. This is important because failure is defined by the nature of its sequential development through several events over time (Reason 1990; Turner 1978).
In summary, researchers have provided and validated conceptual models for measuring safety culture and ethical culture, and these models are the most widely used to understand the cultural conditions under which institutional failures occur. However, for reasons of methodology and data availability, both concepts are limited in the extent to which their underlying components are demonstrated and understood (e.g. in terms of sequence within an event) to have a role in explaining large-scale and avoidable institutional failures (e.g. accidents, scandals). Indeed, it is not clear that safety culture and ethical culture are entirely distinct in explaining failure: for example, investigations of hospital failures in the UK (e.g. Mid Staffordshire hospital, Shrewsbury and Telford hospital) have revealed a combination of poor safety culture (e.g. staff not reporting on sub-standard care) and poor ethical culture (e.g. dismissing patient concerns, concealing poor care) to have led to unnecessary patient deaths (Francis 2013; Lintern 2019). There may be aspects of organisational culture important for explaining institutional failures that are generalisable and not unique to either safety culture or ethical culture (e.g. reporting on concerns), or factors that are not considered within either model. To better explain the role of organisational culture in institutional failures, and specifically the contribution of safety culture and ethical culture, we undertake a systematic review of articles investigating the relationship between culture and institutional failures.
Current Study
In this study, we undertake an inductive content analysis of case studies investigating the role of organisational culture in institutional failures. Case study analyses draw on a multitude of secondary sources to understand causes of an institutional failure, including investigative reports, organisational documents, as well as first-person accounts, which only become available after a failure because of a need to establish what occurred (Cox and Flin 1998; Feagin et al. 1991). This mitigates the previously mentioned limitations with survey-based methods, and supports an approach which can consider the perspectives of all people and sub-cultures involved, enabling analysis of ‘how’ culture led to failure (Glendon and Stanton 2000; Tellis 1997). Given these methodological affordances, case studies may provide insight on the most commonly identified aspects of safety culture and ethical culture that underlie organisational failures, alongside identifying cultural factors not established within survey-based models of safety culture and ethical culture. Moreover, their inductive and retrospective nature offers the opportunity to determine whether failures of safety and ethics differ by the cultural factors they involve, or if the conceptual boundaries of safety culture and ethical culture become less distinct when considering failure. In this study, we systematically identify and analyse case studies of institutional failure in order to address four research questions. We define institutional failure as an event with multiple causes which had developed over time (Turner 1978; Turner and Pidgeon 1997).
First, we identify and extract the aspects of organisational culture reported as contributing to institutional failures. Our aim is to determine whether a common set of cultural factors can be established—from across the multiple and independent case studies—as leading to institutional failure. We think that establishing what cultural factors have been used to explain failure provides a logical starting point for our analysis because these cultural factors have not been systematically catalogued before and it will enable comparisons with existing models of safety culture, ethical culture, and failure.
Second, we examine how different cultural factors are used to explain institutional failure. We explore whether, as specified in various models of institutional failure (e.g. Reason 1990; Turner 1978), culture is used to both account for the organisational conditions that underlie failure (e.g. norms, values), and problems in responding to threats that endanger an organisation (e.g. a developing accident). This exploration of how culture is used to explain failure will shed light on the mechanisms by which culture contributes to failure. These mechanisms are beyond the scope of survey-based methods which measure cultural elements to the degree that they are present or absent (see Reason 2000).
Third, we consider the cultural factors identified in case studies of institutional failure in relation to models of safety culture and ethical culture. We investigate the extent to which the cultural factors identified by analyses of institutional failure map onto existing models of safety culture and ethical culture. We identify cultural factors not typically included within models of safety culture and ethical culture, and consider whether they indicate other aspects of organisational culture which may be important for explaining institutional failure. As a third step, this establishes the extent to which retrospective studies diverge from survey-based studies of safety culture and ethical culture.
Fourth, we examine whether the cultural factors identified as contributing to institutional failures vary according to failure-type (safety or ethics) and failure-outcome (e.g. loss of life, environmental damage). Here, we are interested in the extent to which models of safety culture and ethical culture are exclusive and explanatory of institutional failures, or whether a more generalisable model of institutional failure might be derived.
Method
This is the first systematic review of case studies which use culture to account for the causation of institutional failure. Accordingly, no protocol was available, and the development of search terms was challenging given that the literature on culture and failure is ill-defined and prone to differences in terminology. For stage 1, search terms were designed to ensure the primacy of organisational culture as a theoretical framework (see Fig. 1). Using Scopus and Web of Science, studies were identified if ‘organisational culture’ or an equivalent term (e.g. ‘organisational climate’) based on Schneider et al. (2013) featured in the title, and ‘failure’ or an equivalent term (e.g. ‘disaster’) featured in the title, abstract, or keywords. No date parameters were applied. Assessing the output revealed some issues with this first search. First, although restricting ‘culture’ to the title surfaced some studies not found otherwise, it excluded publications which mention culture only in the abstract or keywords. Second, studies which refer only to ‘culture’ or ‘climate’ were not identified because the search terms required more specificity (i.e. ‘organisational culture’). Third, it was evident that several studies name a high-profile failure rather than use a generic term like ‘failure.’
For stage 2, the search for ‘organisational culture’ and ‘failure’ was expanded to keywords in Scopus, and to title, abstract, and keywords in Web of Science. Studies were also identified if they specified failures (e.g. ‘Challenger’) in keywords (Scopus) or in the title, abstract, and keywords (Web of Science), or if they used specific text strings such as ‘culture at’ and ‘learn* from’ in the title or abstract (Scopus) and in the title, abstract, and keywords (Web of Science). Books and book chapters were initially included in this second Scopus search given knowledge of some seminal publications in the field (e.g. Vaughan 1996), but were ultimately screened out to keep the extraction process manageable. Differences in search fields across databases reflect the differing search capabilities of Scopus and Web of Science. After removal of 771 duplicates, the combined search output consisted of 3,491 publications. The same search terms without the inclusion of ‘culture’ or ‘climate’ yielded 18,419 articles in Web of Science, and thus we estimate that about 23% of case study research on organisational failure utilises culture concepts. Through screening of titles and abstracts, studies were included if they contained a case study of one or more institutional failures and employed organisational culture as a main theoretical framework. A hand search was conducted to ensure no relevant case studies had been omitted, and this led to the addition of articles by Bennett (2020), Merenda and Irwin (2018), Jung and Park (2017), and Reason (1998). The final corpus consisted of 58 articles.
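To make the staged search strategy easier to follow, the sketch below shows, in Python, how the stage 1 Boolean queries might be composed for each database. The term lists are illustrative abbreviations rather than the full sets reported in Fig. 1, and the field codes (Scopus TITLE and TITLE-ABS-KEY; Web of Science TI and TS) stand in for the exact fields used.

```python
# Illustrative composition of the stage 1 queries: a culture term in the title,
# and a failure term in the title, abstract, or keywords. Term lists are partial.
CULTURE_TERMS = ['"organisational culture"', '"organizational culture"',
                 '"organisational climate"', '"organizational climate"']
FAILURE_TERMS = ['"failure"', '"disaster"']

def scopus_stage1_query(culture_terms, failure_terms):
    """Compose a Scopus advanced-search string for stage 1."""
    culture = " OR ".join(culture_terms)
    failure = " OR ".join(failure_terms)
    return f"TITLE({culture}) AND TITLE-ABS-KEY({failure})"

def wos_stage1_query(culture_terms, failure_terms):
    """Compose a Web of Science search string for stage 1 (TI = title, TS = topic)."""
    culture = " OR ".join(culture_terms)
    failure = " OR ".join(failure_terms)
    return f"TI=({culture}) AND TS=({failure})"

if __name__ == "__main__":
    print(scopus_stage1_query(CULTURE_TERMS, FAILURE_TERMS))
    print(wos_stage1_query(CULTURE_TERMS, FAILURE_TERMS))
```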
Data Extraction and Analysis
The full text of included articles was retrieved and extraction was carried out according to the four research questions. To establish what cultural factors are cited as contributing to institutional failure, a content analysis of included articles was carried out at the sentence level. “Content analysis is a research technique for making replicable and valid inferences from texts (or other meaningful matter) to the contexts of their use” (Krippendorff 2013, p. 24). This content analysis was inductive because no pre-existing model of culture or climate was used to identify the cultural factors cited by case studies. Avoiding pre-conceived cultural factors was considered important for methodological integrity given the unconventional nature of this review. Over three rounds of consolidation, cultural factors were collapsed into 23 categories based on similarity.
To establish how case studies use cultural factors to explain failure, each factor was additionally coded for the presence of preceding cultural causes and subsequent cultural outcomes (i.e. those which it elicited or supported). Cultural factors were not double-coded unless they indexed distinct values or behaviours, or where it was necessary to capture the unconnected causes or outcomes of a single cultural factor.
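As a minimal illustration of this extraction scheme (not the coding instrument actually used), the sketch below shows how each coded sentence might be recorded with its consolidated cultural factor and any preceding causes or subsequent outcomes; the factor names and the example chain are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CodedFactor:
    """One cultural factor coded at the sentence level within a case study."""
    case_study: str
    factor: str                                           # consolidated category (one of the 23)
    preceding: List[str] = field(default_factory=list)    # cultural causes cited for this factor
    subsequent: List[str] = field(default_factory=list)   # cultural outcomes it elicited or supported

# Hypothetical example: a priorities problem leading to inadequate training,
# which in turn leaves employees feeling unqualified to speak up.
records = [
    CodedFactor("Case A", "priorities", subsequent=["training and policy"]),
    CodedFactor("Case A", "training and policy",
                preceding=["priorities"], subsequent=["speaking-up"]),
    CodedFactor("Case A", "speaking-up", preceding=["training and policy"]),
]

# A coded factor is part of a sequence if it has any preceding or subsequent link.
sequential = [r for r in records if r.preceding or r.subsequent]
print(f"{len(sequential)} of {len(records)} coded factors are sequentially linked")
```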
To establish whether the cultural factors identified in case studies correspond to the items and dimensions of models of safety culture and ethical culture, we examined two reviews of safety culture and climate scales (Flin et al. 2000; Guldenmund 2000), two widely-cited models of ethical culture (Kaptein 2008; Treviño et al. 1998), and conducted additional literature searches where a cultural factor was not captured by these resources (e.g. Hessels and Wurmser 2020; Singhapakdi et al. 1996).
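A simplified sketch of this mapping step is shown below; the assignments are illustrative only (the full correspondence is given in Table 2), with the three factors later reported as novel left unmapped.

```python
# Illustrative mapping of consolidated factors to the survey-based models
# against which they were checked (example assignments, not the full Table 2).
FACTOR_TO_MODEL = {
    "priorities": {"safety culture", "ethical culture"},
    "management": {"safety culture", "ethical culture"},
    "training and policy": {"safety culture", "ethical culture"},
    "speaking-up": {"safety culture", "ethical culture"},
    "listening": set(),      # not captured by the reviewed survey models
    "bullying": set(),       # not captured by the reviewed survey models
    "homogeneity": set(),    # not captured by the reviewed survey models
}

unmapped = sorted(f for f, models in FACTOR_TO_MODEL.items() if not models)
print("Factors requiring additional literature searches:", unmapped)
```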
Finally, each failure analysed was coded according to type (i.e. safety or ethics) and outcome (e.g. environmental damage). The cultural factors cited in relation to each failure type and outcome were examined for patterns to see if any cultural factors are exclusively involved in safety failures or ethical failures and whether some aspects of culture more frequently precede specific failure outcomes.
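The scaling used in the subsequent analyses can be illustrated with a short sketch, assuming hypothetical factor citations and using the case study counts per failure type reported in the Results; tallies are divided by the number of case studies of each type so that safety and ethical failures can be compared despite their different frequencies.

```python
from collections import Counter

# Hypothetical coded data: one (failure_type, cultural_factor) pair per citation.
citations = [
    ("safety", "listening"), ("safety", "priorities"), ("safety", "listening"),
    ("ethics", "bullying"), ("ethics", "priorities"),
]
case_studies_per_type = {"safety": 41, "ethics": 31}   # counts reported in the Results

tallies = Counter(citations)

# Scale each tally by the number of case studies of that failure type.
scaled = {
    (ftype, factor): count / case_studies_per_type[ftype]
    for (ftype, factor), count in tallies.items()
}
for key, value in sorted(scaled.items()):
    print(key, round(value, 3))
```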
Results
Fifty-eight articles containing 74 case studies of 57 unique institutional failures were identified. The most common journal of publication was Safety Science (12.07%, n = 7; see “Appendix”). The most frequently applied culture models were safety culture (36.21%, n = 21), organisational culture (17.24%, n = 10), and corporate culture (13.79%, n = 8). Overall, the failures investigated were diverse, including: nuclear disasters, oil rig explosions, doping in professional sport, financial fraud, failures to adapt, poor planning, accidents in public transport, the hiring of incompetent staff, institutional abuse, espionage, hazardous spills, and fires. Case studies analysed failures of safety (55.41%, n = 41), ethics (41.89%, n = 31), and a third category of strategy (2.7%, n = 2).
Case studies predominantly investigated fraud (27.03%, n = 20), oil and gas spills (12.16%, n = 9), space shuttle disasters (12.16%, n = 9), nuclear disasters (8.11%, n = 6), and rail accidents (6.76%, n = 5). Recurrent failures were the accounting fraud at Enron (8.11%, n = 6), the disintegration of the Space Shuttle Columbia as it re-entered the atmosphere (6.76%, n = 5), the disintegration of the Space Shuttle Challenger 73 seconds after lift-off (5.41%, n = 4), and the Fukushima nuclear disaster (5.41%, n = 4). Overall, case studies spanned 15 sectors, ten of which were encompassed by the Global Industry Classification Standard (MSCI and S&P Global Market Intelligence 2018) (see Fig. 2).
Establishing Whether a Common Set of Cultural Factors Contribute to Failure
To establish whether case studies identify a common set of cultural factors, we systematically coded and collapsed all cultural factors invoked by case studies. Twenty-three cultural factors were identified as contributing to failure, indicating that failure arises from a generally common set of values and practices (see Table 1).
Priorities
The most common cultural factor was a problem in how priorities were ranked within the organisation. This predominantly involved safety or ethics being given a lower priority than profitability or productivity. For example, a focus on production at a meat supplier motivated poor hygiene practices and the repackaging of expired meat products, eventually causing a tragic E. coli O157 outbreak in South Wales (Griffith 2010). The focus on profit could also manifest as aggressiveness and an orientation towards competitiveness. At Enron, the lowest-performing employees were let go each year under a ‘rank and yank’ system, which created a ‘cut-throat’ culture of competition between employees that normalised accounting fraud (Cuong 2011; Froud et al. 2004). In all cases, prioritising productivity had short-term benefits with unforeseen long-term outcomes. This is illustrated by Reason (1998), who describes how competition between British warships in nineteenth-century peacetime led to polishing practices which removed the watertight quality of doors, contributing to naval disasters such as the collision between HMS Camperdown and HMS Victoria. As Reason (1998) notes, “peacetime ‘display culture’ not only undermined the Royal Navy’s fighting ability, it also created gleaming death traps” (p. 298). Given that poor prioritisation is the most prevalent cultural factor in failure, it would be useful to know its preconditions. However, in the few cases where an underlying cause was given, the failure to prioritise safety and ethics was the outcome of forces outside an organisation’s control, including societal or national norms (e.g. neoliberalism; Behling et al. 2019; Kee et al. 2017), privatisation (Dien et al. 2004), and legislation (Behling et al. 2019).
Management
Inadequate management was also a common cultural factor. Occasionally this involved aspects of leadership style and personality, such as the CEO being distant and unavailable to employees (Johnson 2008), exhibiting excessive confidence (Amernic and Craig 2013; Cervellati et al. 2013), or being divisive or dogmatic (Fallon and Cooper 2015). It also involved leaders pursuing inappropriate business strategies, such as undercutting competitors, resulting in an artificial market monopoly (Goh et al. 2010). However, inadequate management more frequently involved managers and supervisors not verifying that procedures and jobsites (i.e. operating standards) were maintained by employees. It also manifested in overly generous reward structures which encouraged unethical behaviour (e.g. Froud et al. 2004; Molina 2018). The large role attributed to management must be considered in relation to inadequate board supervision, which was less frequently cited. It may be that board effectiveness and composition are less consequential than adequate executive management for the prevention of adverse events (cf. Baysinger and Butler 1985; Uzun et al. 2004).
Training and Policy
The many problems related to training and policy reflect Reason’s (1990) idea that failure is triggered by the errors or violations of individuals dealing immediately with the system (Reason 1990; see also Robson et al. 2012; Schulte et al. 2003). Indeed, inadequate training of employees resulted in a workforce which lacked the ability to properly carry out tasks, causing behaviour that precipitated or facilitated failure. Unsurprisingly, inadequate training was often the outcome of cultural factors which reduced its apparent necessity, including cost-cutting and a focus on profit (e.g. Froud et al. 2004; MacLean et al. 2004), insufficient regulation (e.g. Crofts 2017; Kim et al. 2018), and a shared belief that failure was not possible anyway (e.g. Cervellati et al. 2013, 1993; Reason 1998). For example, operators at Chernobyl had not been effectively trained in the dangers of nuclear power, and thus did not exercise appropriate caution (Reason 1998).
Problems of policy related to the content or availability of information about organisational procedures (e.g. Rafeld et al. 2019; Reader and O’Connor 2014). On the one hand, overly prescriptive policy could deter employees from violating procedure even when a situation demanded it (e.g. Broadribb 2015). For example, a stringent culture of procedural compliance deterred the pilots of Swissair flight 111 from landing the aircraft as quickly as possible when inflight smoke was detected (McCall and Pruchnicki 2017). On the other hand, policy could give licence to unethical practice through vagueness. As Sims and Brinkmann (2002) note, “[w]hen people are not sure what to do, unethical behaviour may flourish as aggressive individuals pursue what they believe to be acceptable” (p. 334).
Change
Institutional change was the least-cited contributing factor to failure, indicating that failure may be more commonly the outcome of enduring beliefs and behaviour. Change contributed to failure where it clashed with or weakened an existing culture, for example where a new CEO was a poor cultural fit (e.g. Johnson 2008) or where administrative changes disrupted employees’ accustomed occupational roles (Lederman et al. 2015).
Speaking-Up System and Bullying
The availability of a reporting medium was not the most important factor in whether employees spoke up about problems to management. Case studies more frequently attributed employee silence to a fear of victimisation through bullying (e.g. Crofts 2017; Johnson 2008; Froud et al. 2004) and material retaliation such as being fired (e.g. Beamish 2000; Guthrie and Shayo 2005). This is consistent with the health problems associated with being bullied at work (e.g. Nielsen and Einarsen 2012; Einarsen and Skogstad 1996) and suggests that organisational responses to whistle-blowers are an important determinant of speaking-up behaviour.
Establishing whether Culture is Used in the Dynamic Sense of Failure Models
To establish whether usage of the concept of culture was congruent with the dynamic failure models of Reason (1998) and Turner (1978), we counted the number of factors cited and coded for possible sequential relationships between them. With a mean of 7.28 cultural factors cited per case study, usage of culture was in line with these failure models. Cultural factors were also frequently linked sequentially (87.84%, n = 65), and case studies typically cited more than one sequence of cultural factors (59.46%, n = 44). This reflects established models which emphasise the complex and dynamic nature of failure development (Reason 1990; Turner 1978), and diverges from survey-based studies of safety culture and ethical culture, which commonly measure the degree to which cultural elements are present or absent (see Reason 2000).
Sequences of cultural factors tended to consist of two cultural factors (77.03%, n = 57). Fewer case studies cited sequences of three (22.97%, n = 17), four (2.7%, n = 2), five (1.35%, n = 1), and six cultural factors (1.35%, n = 1). Importantly, sequentially related cultural factors were not necessarily consecutive. For example, in a case study of the Piper Alpha oil rig explosion, it was the combination of (a) inadequate regulation, (b) the primacy of production, and (c) disbelief that failure was possible, which together led to (d) design decisions that created (e) an unsafe physical work environment (Paté-Cornell 1993).
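As the Piper Alpha example illustrates, a ‘sequence’ need not be a simple chain. The sketch below, which paraphrases the factors named above, represents it as a small directed graph in which several factors jointly precede a later one; the longest chain through the graph involves three of the five factors.

```python
# Each key precedes the factors listed in its value; several factors can
# jointly precede the same downstream factor (here, design decisions).
piper_alpha = {
    "inadequate regulation": ["design decisions"],
    "primacy of production": ["design decisions"],
    "disbelief that failure was possible": ["design decisions"],
    "design decisions": ["unsafe physical work environment"],
    "unsafe physical work environment": [],
}

def longest_path(graph, node, seen=frozenset()):
    """Length (in factors) of the longest sequence starting at `node`."""
    if node in seen:
        return 0  # guard against cycles
    nexts = graph.get(node, [])
    return 1 + max((longest_path(graph, n, seen | {node}) for n in nexts), default=0)

print(max(longest_path(piper_alpha, n) for n in piper_alpha))  # -> 3
```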
Case studies used culture to explain failure in two ways. Most often culture described problematic values and practices that were endogenous to the culture and directly contributed to failure (93.24% of case studies, n = 69). This encompassed 15 of the 23 cultural factors, namely: change, disbelief, employee satisfaction, the external environment, homogeneity, planning, priorities, procedure, management, regulation, resources, role-modelling, supervision, teamwork, and training and policy. These cultural factors can create the preconditions for failure to occur. Although we are not suggesting a new concept, they can be grouped into the category of ‘causal culture’ for ease of interpretation.
Culture was also used to explain why an organisation failed to correct a problem before it was able to develop into failure (72.97% of case studies, n = 54). Issues of corrective culture manifested as ‘problems dealing with problems.’ This encompassed eight of the 23 cultural factors, namely: bullying, listening, learning, problem acceptance, problem response, rhetoric, speaking-up, and speaking-up system. These may be organised into the two categories of voicing and hearing. Voicing factors refer to the failure of employees to voice concerns about institutional problems to people in authority. Hearing factors refer to the failure by management to act on information received about institutional problems. This distinction maps onto the two critical phases of an organisation’s ‘adaptive capacity’: first, disruptive events need to be identified and relayed appropriately, and second, information must be heeded with the proper deployment of resources (Burnard and Bhamra 2011). Problems of corrective culture are distinct from other cultural factors because they do not advance the development of a failure. Instead, they represent missed opportunities to address a problem and potentially avert failure.
Case studies cited an average of 4.99 causal cultural factors and 2.3 corrective cultural factors. As depicted in Fig. 3, the maximum number of causal and corrective factors cited by any case study was 17 and nine, respectively. Twenty case studies (27.03%) cited only causal factors, whilst five case studies (6.76%) described failure arising from corrective factors alone. The remaining 66.22% of case studies (n = 49) described a combination of causal and corrective cultural factors. This indicates that failure commonly involves more aspects of causal than corrective culture, but that failure is typically also preceded by at least one missed opportunity to avert it.
In summary, where studies of safety culture and ethical culture have prospectively measured culture through the perceptions of organisational members, case studies have retrospectively analysed how cultural factors interact in a sequential way to produce failure. Case studies thus reveal that, rather than exerting a top-down influence on behaviour, cultural factors divide according to whether they represent causes of failure or missed opportunities to avert failure. Indeed, a majority of case studies highlight that failures could have been prevented.
Establishing How Retrospective Case Studies Diverge from Survey-based Models of Safety Culture and Ethical Culture
Twenty of the 23 cultural factors which contribute to failure are typically part of models of safety culture (Flin et al. 2000; Guldenmund 2000; Hessels and Wurmser 2020) or ethical culture (Kaptein 2008; Singhapakdi et al. 1996; Treviño et al. 1998) (see Table 2). This indicates that case study analyses and survey-based studies largely overlap in the cultural factors they identify. However, three cultural factors—listening, bullying, and homogeneity—were novel, and are important to identify.
Listening is not typically measured by survey-based studies of safety culture and ethical culture. This is noteworthy given that a failure to listen to signs or information about problems is mentioned by a third of case studies. A diverse literature outside of culture recognises the importance of listening in organisations (e.g. Gillespie and Reader 2016). Some draw on the Foucauldian concept of ‘parrhesia’ (true speech) to analyse whistleblowing and listening (e.g. Catlaw et al. 2014; Vandekerckhove and Langenberg 2012; Weiskopf and Tobias-Miersch 2016). For instance, Vandekerckhove and Langenberg (2012) suggest that parrhesia in the workplace may not be heard because information typically travels up a hierarchy, requiring that a series of people have the courage to speak up to the next person above them. Others use the term ‘deaf effect’ to understand when organisations persist with projects despite reports of trouble (Cuellar et al. 2006; Keil and Robey 1999). Jones and Kelly (2014) suggest that when whistle-blowers are listened to, it can create a mutually-reinforcing cycle that benefits organisational learning, whilst ‘organisational disregard’ can create a norm of silence, preventing people from coming forward about other issues (see also Mannion and Davies 2018). Burris (2012) focusses on the information communicated and finds that managers are more likely to agree with information that supports rather than challenges the organisation’s goals, and that this relationship is mediated by the perceived loyalty of voicing employees and the threat they are seen to pose. Perceptions of management listening have also been linked to psychological safety and creativity (Castro et al. 2018).
The case studies indicate that not listening (i.e. not heeding information) was the outcome of culture in two ways (see Table 3 for examples). First, not listening occurred because the information received (i.e. X is unsafe or unethical) conflicted with the receiver’s taken-for-granted assumptions about the world as a cultural member (i.e. X is safe, X is ethical, or failure cannot happen), resulting in a breakdown of intersubjectivity (e.g. Grice 1975). Choo (2008) refers to this misalignment between information and available cognitive frame as an ‘epistemic blindspot.’ In terms of excluding employees from decision-making, this misalignment could cause employee involvement to be regarded as extraneous. Second, not listening occurred because the information, whilst decoded ‘correctly’ in terms of intent and content, conflicted with competing cultural values and demands. These include a pressure to maintain performance levels (e.g. Broadribb 2015; Mason 2004) and having to manage with limited resources (e.g. MacLean et al. 2004). Competing values and demands may also account for those cases where managers excluded employees from decision-making. Involving employees may conflict with organisational goals if employees communicate inconvenient information or deplete valued time.
Bullying is the second cultural factor not typically a part of models of safety culture and ethical culture. Bullying deterred employees from speaking up about organisational problems (e.g. Dimeo 2014; Patrick et al. 2018). Bullying had pragmatic qualities somewhat similar to gossip: it was a form of social control directed at someone perceived to have transgressed the rules, which seemed to reinforce the group’s normative boundaries of right and wrong (Gluckman 1963; see also Waddington 2016). Bullying is naturally more hostile than gossip, and was invariably dysfunctional because it enabled unsafe or unethical practices to persist and remain acceptable to (most) group members. As such, bullying can have adverse outcomes for organisations, as well as for individuals (e.g. self-esteem, Randle 2003; job satisfaction, Quine 1999).
The final cultural factor not typically a part of models of safety culture and ethical culture is workforce homogeneity (i.e. the absence of diversity). In the case studies, homogeneity was predominantly fostered through recruitment and promotion practices which rewarded unethical behaviour with a sense of belonging (Crawford et al. 2017) or (continued) employment (Fallon and Cooper 2015; Sims and Brinkmann 2002, 2003). Homogeneity has been studied as a source of cultural strength, associated with more reliable performance in stable environments (Sørensen 2002), job satisfaction, organisational commitment, and less role stress (Barnes et al. 2006). Conversely, diversity research shows that demographic diversity amongst members of the board and management has a positive effect on financial performance (e.g. Erhardt et al. 2003), promotes cognitive diversity (see Horwitz 2005), and mitigates groupthink (Janis 1972). Indeed, strongly shared values and norms can be maladaptive (Syed 2019), but there is a need to further conceptualise homogeneity as a cultural phenomenon that is conducive to failure.
In summary, whilst the cultural factors identified by case studies largely map onto models of safety culture and ethical culture, listening, bullying, and homogeneity need further research as cultural phenomena relating to adverse organisational events. However, the large overlap suggests that large-scale failures and the less severe adverse events measured in survey research arise from a common set of cultural factors.
Establishing whether Cultural Factors vary by Failure and Outcome Type
Cultural Factors by Failure-Type
To establish whether the 23 cultural factors occur across safety and ethical failures, we examined the distribution of cultural factors by failure type. All causal and corrective factors occurred at least once in both failure types, with the exception of inadequate board supervision which was exclusive to ethical failures. A few more discrepancies emerge when frequencies are scaled against the different proportions of safety failures and ethical failures (see Figs. 4 and 5). Safety failures more often involved issues related to the external environment, learning, listening, procedure, resources, and teamwork. Ethical failures more often involved issues related to bullying, employee satisfaction (e.g. extremes of morale and trust), homogeneity, management, problem acceptance, problem response, regulation, rhetoric, role-modelling, speaking-up, speaking-up system, and training and policy.
Cultural Factors by Outcome Type
When the tallies of cultural factors are scaled against the outcomes they precipitated (i.e. tallied by outcome type and divided by the total number of case studies investigating that outcome type), some tentative patterns can be discerned (see Fig. 6). First, the cultural factor of listening most often occurred in failures which resulted in a loss of human life (e.g. public transport accidents, the NASA space shuttle disasters). Second, deficiencies in regulation, and in training and policy, most often preceded failures with potential long-term health consequences for workers or the public (e.g. Chernobyl, Three Mile Island) or which caused environmental damage (e.g. the oil spill from Deepwater Horizon). Third, inadequate management, supervision, homogeneity, and role-modelling most frequently preceded administrative outcomes which caused financial loss and/or damage to an organisation’s brand through scandal (e.g. doping in sports).
Discussion
Implications of this Review for Models of Failure
This review analysed 58 publications which use case study methods to investigate how organisational culture contributes to institutional failures. The findings advance existing models of failure in three ways.

First, we found that case studies identify 23 contributing factors of culture. Several of these cultural factors have been established by the failure literature. For example, the most common cultural factor involved organisations prioritising productivity over ethics and safety. Such goal conflict is central to the theories of Perrow (1999), Reason (1990), and Rasmussen (1997). Where an underlying cause was provided, this goal conflict was owing, as Perrow (1999) suggests, to factors of capitalism, including privatisation (Dien et al. 2004) and permissive legislation (e.g. Behling et al. 2019). Yet, in a majority of cases it was not apparent why this goal conflict emerged. Perhaps, as Rasmussen (1997) suggests, organisations systematically move towards greater efficiency.

Second, failure was preventable in a majority of case studies, but problems of corrective culture prevented the timely and effective resolution of issues. The distinction identified here, between those aspects of culture which cause failure and those which impede its aversion, adds to Rasmussen’s (1997) theory that organisations continually manage the boundaries of safety, economic failure, and workload. Causal culture pushes an organisation toward failure, creating disequilibrium; corrective culture pulls an organisation back from failure, maintaining equilibrium.

Third, most case studies described failure as arising from a sequential set of factors, as described by the failure models of Reason (1990) and Turner (1978). Namely, cultural factors were not normally treated as being present or absent to varying degrees (Reason 2000), but as interacting with other cultural factors in dynamic ways. For example, inadequate regulation could lead to inadequate training, and inadequate training could lead to employees who feel unqualified to speak up about problems and so remain silent. Dimensions of culture do not exist in isolation from one another: problems in one dimension can produce problems in other dimensions. Where Reason (1990) suggests ‘safety culture’ both opens and closes institutional weaknesses, we suggest the ‘opening’ of weaknesses is achieved by causal factors of culture, whilst the ‘closing’ of weaknesses is achieved by corrective factors of culture. Only a minority of case studies did not report both causal and corrective factors of culture, and thus we suggest that failure is characteristically preceded by both types of cultural factor. Namely, failure must involve at least one causal factor of culture which enables a problem to arise, and usually at least one corrective factor of culture (i.e. a problem of correction) which enables the problem to persist and develop into failure. This model needs to be substantiated with further research which explicitly identifies the causal and corrective cultural factors involved in failures.
Implications of this Review for the Constructs of Safety Culture and Ethical Culture
Large-scale failures are difficult to track prospectively using traditional methods. As such, we sought to explore the proposed role of safety culture and ethical culture in large-scale failures through a review of case study analyses. We found that the cultural factors identified in survey-based studies of safety culture and ethical culture overlap with 20 of the 23 cultural factors implicated in failures. This supports the role of safety culture and ethical culture in causing failure, which has not previously been substantially established. Yet, it is important to recognise that safety culture and ethical culture differ in terms of their motivations and consequences. Safety culture generally relates to the management of physical risk towards employees (e.g. occupational injuries) and organisational processes (e.g. an oil rig). A combination of regulatory factors, and the insight that safety failures are always damaging to an organisation (e.g. in aviation), underlies the drive to have a culture that prioritises safety. Ethical culture is arguably more complex, because it relates to the upholding of both legal codes and what is regarded as morally acceptable within a society. Ethical failure is defined by subjective and shifting beliefs about right and wrong (both within and outside an organisation), and can be less apparent. From this perspective, the outcomes of safety culture are more stable and objective than those of ethical culture, although both concepts attempt to capture the conflict that can arise in organisations through having goals that simultaneously emphasise performance (e.g. productivity, profit) and avoiding risk-taking.
The three cultural factors involved in failure which are not normally included in existing models of safety culture and ethical culture were listening, homogeneity, and bullying. Of particular importance is listening, which was disproportionately linked to a loss of human life. Problems of listening occurred in more than a third of case studies and were nearly as prevalent as well-established preconditions to failure (e.g. procedural deviations, inadequate training). The problems of listening are disconcerting, particularly considering that speaking up about problems often comes from a passion for the organisation (Kenny et al. 2020). Whilst Turner (1978; Turner and Pidgeon 1997) describes how communication problems could prevent individuals from seeing an imminent failure, these relate to variables of the message (e.g. clarity) and individual cognition. Problems of listening are different because the message is received, often repeatedly, but not acted upon because of a culturally-prescribed worldview or competing cultural factors. Further research into problems of listening is needed to identify predictive variables and possible measures to counteract them. For instance, drawing on Foucault, Catlaw et al. (2014) suggest the ability to listen and enact change in response to ‘parrhesia’ (i.e. information about problems) requires “a certain kind of relationship with ourselves that is grounded in an attentive examination of how we live our lives” (p. 199; original emphasis). In contrast to listening and bullying, homogeneity as a factor in failure has been highlighted before (Syed 2019), but further research is needed to understand homogeneity as a cultural phenomenon. In the case studies analysed here, bullying maintained unsafe or unethical norms by deterring employees from speaking up about problems, and punishing those who did. As such, bullying is particularly detrimental to corrective culture because it fosters not only employee silence, but also fear. More research is needed to understand how bullying facilitates unsafe and unethical behaviour.
Differences in Failure and Outcome Types
All but one of the cultural factors occurred at least once in both a safety failure and an ethical failure. This indicates that failures, irrespective of type and outcome, largely stem from a common set of values and practices. There were a few discrepancies. For example, ethical failures involved more issues of corrective culture, perhaps reflecting that they entail greater obfuscation than safety failures, although safety failures involved more problems of listening and learning. Nonetheless, the overall overlap points to the need for an integrative model which incorporates elements of both safety culture and ethical culture.
We also found that some cultural factors may be more conducive to specific outcomes. The largest discrepancies arose with seven cultural factors: homogeneity, listening, management, role-modelling, regulation, supervision, and training and policy. For example, loss of human life was more often preceded by listening problems, such as management not listening to employees or wanting to hear their input. These findings indicate that specific outcomes may tend to involve different combinations of cultural factors. They also signal the importance of these seven cultural factors as areas for organisations to address.
Contextualising the Concept of Corrective Culture within the Wider Literature
Problems of corrective culture can be conceived in terms of what Argyris (1976) called ‘single-loop learning’, in contrast to the ‘double-loop learning’ that enables large-scale corrections. Organisations which engage in single-loop learning favour control as their preferred ‘behavioural strategy’: they are characteristically defensive, and inhibit free choice and information. Problems which conflict with the organisation’s accepted view of itself are typically not acted upon, irrespective of whether they are detected. Every factor of corrective culture that we identified evinces this behavioural strategy in response to problems.
Corrective culture is also complementary to Westrum’s (2004, 2014) notion of information flow. Westrum (2004) suggests that different culture types exhibit different patterns of response to warning signals, ranging from the suppression and isolation of the message-sender (pathological culture) to the launch of a public inquiry to identify the root causes of a problem (generative culture). This review adds to Westrum’s (2004, 2014) model by cataloguing the values, beliefs, and practices which stifle information flow and obstruct measures to self-correct and avert failure.
Practical Implications of this Review
The findings of this review indicate that interventions to prevent failure, and evaluations of an organisation’s ability to prevent failure, should focus on corrective culture. An organisation will inevitably encounter problems of causal culture from time to time, but these will not develop into failure so long as the organisation is able to identify them swiftly and deal with them appropriately.
Organisations can improve corrective culture through simulations, management behaviour, and relevant cultural artefacts. Role-play simulations of bullying (e.g. amongst nurses, Ulrich et al. 2017), whistleblowing, and listening could be used to develop employees’ understanding of these corrective aspects of culture and their readiness to deal with them. Managers could reinforce the value of corrective action by displaying an active interest in corrective culture, encouraging corrections, and rewarding employees who detect and raise problems (e.g. with social recognition). Corrective culture could also be improved by addressing relevant cultural artefacts: for example, an organisation could evaluate the effectiveness of its whistleblowing hotline.
Organisations might also assess their corrective culture, both by measuring it and by testing whether it is working. Corrective culture could be measured by survey, with questions assessing employees’ confidence in raising issues, whether employees are listened to when they do raise concerns, and the presence of recurring problems in the organisation. Unobtrusive indicators of culture (‘UICs’, Reader et al. 2020) could be used to assess the extent to which corrective culture manifests in artefacts and behaviours. For example, one could assess whether incident reports are recorded in the meeting minutes of the executive team or board, and whether the organisation has a telephone number for customer service. Online customer reviews could be analysed for content related to factors of corrective culture (e.g. listening). Because responding only to selected people is not a good indication of corrective culture, the handling of complaints from outsiders (e.g. customers), employees, and management could be compared to detect selective listening. The volume of complaints received by an organisation could also be assessed: as shown in the incident reporting literature, a lack of complaints may reflect a poor corrective culture rather than a lack of problems.
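To give a concrete sense of how such an analysis of complaint or review text might be operationalised, the minimal sketch below screens free text for keywords indicative of corrective-culture factors and compares counts across complaint sources. It is a hypothetical illustration only: the indicator terms, example texts, and function names are our own assumptions and do not form part of the review’s method or of the UIC approach described by Reader et al. (2020).

```python
# Illustrative sketch only: a keyword-based screen of complaint or review text
# for language related to corrective-culture factors. The indicator terms and
# example texts are hypothetical, not a validated coding scheme.
from collections import Counter

# Hypothetical indicator terms for two corrective-culture factors
INDICATORS = {
    "listening": ["ignored", "not listened", "dismissed", "no response"],
    "speaking_up": ["afraid to report", "raised concerns", "whistleblow"],
}

def screen_texts(texts):
    """Count, per factor, how many texts contain at least one indicator term."""
    counts = Counter()
    for text in texts:
        lowered = text.lower()
        for factor, terms in INDICATORS.items():
            if any(term in lowered for term in terms):
                counts[factor] += 1
    return counts

if __name__ == "__main__":
    # Hypothetical complaints grouped by source, to compare handling across groups
    complaints = {
        "customers": [
            "I raised this three times and was ignored every time.",
            "Great service, no complaints.",
        ],
        "employees": [
            "Staff are afraid to report near-misses to supervisors.",
        ],
    }
    for source, texts in complaints.items():
        print(source, dict(screen_texts(texts)))
```

In practice, such a screen would only be a starting point: flagged texts would need qualitative review, and comparing the organisation’s responses (rather than the complaints alone) across sources would be required to detect selective listening.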
Limitations
This review has some limitations which need to be noted. First, case study research on culture and failure does not represent a coherent literature. This posed challenges when developing search terms to identify qualifying publications. Given limited time and resources, the search terms could not be as inclusive as originally planned, because this yielded an unmanageable number of publications to screen. Second, case studies vary in quality. For example, one third of the case studies (n = 23) lack a methods section and thus either do not identify their data sources at all or name them only through in-text citation. As such, more inclusive search terms and stricter exclusion criteria could improve this review. Third, although 64% of the articles reviewed were published after 2010, the results of this review could be affected by its relatively high proportion of earlier articles. We find that, compared to earlier articles, those published after 2010 identify more problems of management, regulation, and training and policy. As this difference is not accounted for by a substantial difference in the proportions of safety failures (51.35%) and ethical failures (45.95%) analysed by more recent articles, it may reflect changing use of safety culture and ethical culture concepts to analyse failure. Fourth, the results of this review could be affected by recurring cases (e.g. the space shuttle disasters repeatedly involved listening problems).
Conclusion
There are several models of institutional failure, and report-based research on the aspects of safety culture and ethical culture associated with injuries, accidents, and wrongdoing. We synthesised, for the first time, case studies which retrospectively investigate the relationship between culture and failure. We found that case studies combine models of failure and culture to explain these large-scale events. Because case studies have the affordance of retrospect to identify ‘how’ failures unfold, we were able to see that culture manifests both in the underlying causes of failure and in organisations’ difficulties in dealing with emerging problems (corrective culture). We also found substantial overlap in the components of safety culture and ethical culture which case studies identified, adding weight to the application of these theoretical constructs to understanding failure. Finally, this review demonstrated that ethical failures such as accounting fraud and safety failures such as nuclear disasters are, for the most part, not distinguishable by the values and practices which preceded them. Indeed, failures often involve problems that cross the boundaries of both safety culture and ethical culture (e.g. speaking up), indicating the need for a more integrative approach to studying adverse events. We propose that distinguishing causal and corrective culture, and conceptualising their sequential relations, can form the basis for such an integrative approach.
References
Amernic, J., & Craig, R. (2013). Leadership discourse, culture, and corporate ethics: CEO-speak at News Corporation. Journal of Business Ethics, 118, 379–394. https://doi.org/10.1007/s10551-012-1506-0.
Antonsen, S. (2009a). Safety culture assessment: A mission impossible? Journal of Contingencies and Crisis Management, 17(4), 242–254. https://doi.org/10.1111/j.1468-5973.2009.00585.x.
Antonsen, S. (2009b). Safety culture and the issue of power. Safety Science, 47(2), 183–191. https://doi.org/10.1016/j.ssci.2008.02.004.
Ardichvili, A., & Jondle, D. (2009). Integrative literature review: Ethical business cultures: A literature review and implications for HRD. Human Resource Development Review, 8(2), 223–244. https://doi.org/10.1177/1534484309334098.
Argyris, C. (1976). Single-loop and double-loop models in research on decision making. Administrative Science Quarterly, 21(3), 363–375. https://doi.org/10.2307/2391848.
Arnold, H. J., & Feldman, D. C. (1981). Social desirability response bias in self-report choice situations. Academy of Management Journal, 24(2), 377–385. https://doi.org/10.5465/255848.
Barnes, J. W., Jackson, D. W., Jr., Hutt, M. D., & Kumar, A. (2006). The role of culture strength in shaping sales force outcomes. Journal of Personal Selling & Sales Management, 26(3), 255–270. https://doi.org/10.2753/PSS0885-3134260301.
Barney, J. B. (1986). Organizational culture: Can it be a source of sustained competitive advantage? The Academy of Management Review, 11(3), 656–665. https://doi.org/10.2307/258317.
Baysinger, B. D., & Butler, H. N. (1985). Corporate governance and the board of directors: Performance effects of changes in board composition. Journal of Law, Economics, & Organization, 1(1), 101–124.
Beamish, T. D. (2000). Accumulating trouble: Complex organization, a culture of silence, and a secret spill. Social Problems, 47(4), 473–498. https://doi.org/10.2307/3097131.
Behling, N., Williams, M. C., Behling, T. G., & Managi, S. (2019). Aftermath of Fukushima: Avoiding another major nuclear disaster. Energy Policy, 126, 411–420. https://doi.org/10.1016/j.enpol.2018.11.038.
Bennett, S. (2020). The 2018 Gosport Independent Panel report into deaths at the National Health Service’s Gosport War Memorial Hospital. Does the culture of the medical profession influence health outcomes? Journal of Risk Research, 23(6), 827–831. https://doi.org/10.1080/13669877.2019.1591488.
Beus, J. M., McCord, M. A., & Zohar, D. (2016). Workplace safety: A review and research synthesis. Organizational Psychology Review, 6(4), 352–381. https://doi.org/10.1177/2041386615626243.
Broadribb, M. P. (2015). What have we really learned? Twenty five years after Piper Alpha. Process Safety Progress, 34(1), 16–23. https://doi.org/10.1002/prs.11691.
Burnard, K., & Bhamra, R. (2011). Organisational resilience: Development of a conceptual framework for organisational responses. International Journal of Production Research, 49(18), 5581–5599. https://doi.org/10.1080/00207543.2011.563827.
Burris, E. R. (2012). The risks and rewards of speaking up: Managerial responses to employee voice. Academy of Management Journal, 55(4), 851–875.
Castro, D. R., Anseel, F., Kluger, A. N., Lloyd, K. J., & Turjeman-Levi, Y. (2018). Mere listening effect on creativity and the mediating role of psychological safety. Psychology of Aesthetics, Creativity, and the Arts, 12(4), 489–502. https://doi.org/10.1037/aca0000177.
Catlaw, T. J., Rawlings, K. C., & Callen, J. C. (2014). The courage to listen. Administrative Theory & Praxis, 36(2), 197–218. https://doi.org/10.2753/ATP1084-1806360203.
Cervellati, E. M., Piras, L., & Scialanga, M. (2013). Corporate culture and frauds: A behavioral finance analysis of the Barclays-LIBOR case. Corporate Ownership & Control, 11(1–5), 447–463. https://doi.org/10.22495/cocv11i1c5art1.
Choo, C. W. (2008). Organizational disasters: Why they happen and how they may be prevented. Management Decision, 46(1), 32–45. https://doi.org/10.1108/00251740810846725.
Christian, M. S., Bradley, J. C., Wallace, J. C., & Burke, M. J. (2009). Workplace safety: A meta-analysis of the roles of person and situation factors. Journal of Applied Psychology, 94(5), 1103–1127. https://doi.org/10.1037/a0016172.
Cohen, T. R., Panter, A. T., & Turan, N. (2012). Guilt proneness and moral character. Current Directions in Psychological Science, 21(5), 355–359. https://doi.org/10.1177/0963721412454874.
Cooper, M. D. (2000). Towards a model of safety culture. Safety Science, 36(2), 111–136. https://doi.org/10.1016/S0925-7535(00)00035-7.
Cox, S., & Flin, R. (1998). Safety culture: Philosopher’s stone or man of straw? Work & Stress, 12(3), 189–201. https://doi.org/10.1080/02678379808256861.
Crawford, J., Dawkins, S., Martin, A., & Lewis, G. (2017). Understanding the organizational climate of unethical leadership in the Australian Football League. Journal of Leadership Studies, 11(2), 52–54. https://doi.org/10.1002/jls.21525.
Crofts, P. (2017). Criminalising institutional failures to prevent, identify or react to child sexual abuse. International Journal for Crime, Justice and Social Democracy, 6(3), 104–122. https://doi.org/10.5204/ijcjsd.v6i3.421.
Cueller, M. J., Keil, M., & Johnson, R. D. (2006). The deaf effect response to bad news reporting in information systems projects. e-Service Journal, 5(1), 75–97. https://doi.org/10.2979/esj.2006.5.1.75.
Cuong, N. H. (2011). Factors causing Enron’s collapse: An investigation into corporate governance and company culture. Corporate Ownership & Control, 8(3–6), 585–593. https://doi.org/10.22495/cocv8i3c6p2.
Denison, D. R. (1984). Bringing corporate culture to the bottom line. Organizational Dynamics, 13(2), 5–22. https://doi.org/10.1016/0090-2616(84)90015-9.
Denison, D. R. (1990). Corporate culture and organizational effectiveness. New York: Wiley.
Dien, Y., Llory, M., & Montmayeul, R. (2004). Organisational accidents investigation methodology and lessons learned. Journal of Hazardous Materials, 111(1–3), 147–153. https://doi.org/10.1016/j.jhazmat.2004.02.021.
Dimeo, P. (2014). Why Lance Armstrong? Historical context and key turning points in the ‘cleaning up’ of professional cycling. The International Journal of the History of Sport, 31(8), 951–968. https://doi.org/10.1080/09523367.2013.879858.
Durkheim, E. (1966). The rules of sociological method. New York: Free Press. (Original work published 1895.)
Einarsen, S., & Skogstad, A. (1996). Bullying at work: Epidemiological findings in public and private organizations. European Journal of Work and Organizational Psychology, 5(2), 185–201. https://doi.org/10.1080/13594329608414854.
Erhardt, N. L., Werbel, J. D., & Shrader, C. B. (2003). Board of director diversity and firm financial performance. Corporate Governance: An International Review, 11(2), 102–111. https://doi.org/10.1111/1467-8683.00011.
Fallon, F., & Cooper, B. J. (2015). Corporate culture and greed—The case of the Australian Wheat Board. Australian Accounting Review, 25(1), 71–83. https://doi.org/10.1111/auar.12031.
Feagin, J. R., Orum, A. M., & Sjoberg, G. (1991). A case for the case study. Chapel Hill: The University of North Carolina Press.
Fernández, J. L., & Camacho, J. (2016). Effective elements to establish an ethical infrastructure: An exploratory study of SMEs in the Madrid region. Journal of Business Ethics, 138, 113–131. https://doi.org/10.1007/s10551-015-2607-3.
Fischer, D. G., & Fick, C. (1993). Measuring social desirability: Short forms of the Marlowe-Crowne social desirability scale. Educational and Psychological Measurement, 53(2), 417–424. https://doi.org/10.1177/0013164493053002011.
Flin, R., Mearns, K., O’Connor, P., & Bryden, R. (2000). Measuring safety climate: Identifying the common features. Safety Science, 34(1–3), 177–192. https://doi.org/10.1016/S0925-7535(00)00012-6.
Francis, R. (2013). Report of the Mid Staffordshire NHS Foundation Trust public inquiry: Executive summary (Vol. 947). London: The Stationery Office.
Froud, J., Johal, S., Papazian, V., & Williams, K. (2004). The temptation of Houston: A case study of financialisation. Critical Perspectives on Accounting, 15(6–7), 885–909. https://doi.org/10.1016/j.cpa.2003.05.002.
Gillespie, A., & Reader, T. W. (2016). The Healthcare Complaints Analysis Tool: Development and reliability testing of a method for service monitoring and organisational learning. BMJ Quality & Safety, 25(12), 937–946. https://doi.org/10.1136/bmjqs-2015-004596.
Glendon, A. I., & Stanton, N. A. (2000). Perspectives on safety culture. Safety Science, 34(1–3), 193–214. https://doi.org/10.1016/S0925-7535(00)00013-8.
Gluckman, M. (1963). Papers in honor of Melville J. Herskovits: Gossip and scandal. Current Anthropology, 4(3), 307–316.
Goh, Y. M., Brown, H., & Spickett, J. (2010). Applying systems thinking concepts in the analysis of major incidents and safety culture. Safety Science, 48(3), 302–309. https://doi.org/10.1016/j.ssci.2009.11.006.
Gordon, G. G., & DiTomaso, N. (1992). Predicting corporate performance from organizational culture. Journal of Management Studies, 29(6), 783–798. https://doi.org/10.1111/j.1467-6486.1992.tb00689.x.
Grice, H. P. (1975). Logic and conversation. In P. Cole & J. L. Morgan (Eds.), Syntax and semantics (Vol. 3: Speech acts, pp. 41–58). New York: Academic Press.
Griffith, C. J. (2010). Do businesses get the food poisoning they deserve? The importance of food safety culture. British Food Journal, 112(4), 416–425. https://doi.org/10.1108/00070701011034420.
Guldenmund, F. W. (2000). The nature of safety culture: A review of theory and research. Safety Science, 34(1–3), 215–257. https://doi.org/10.1016/S0925-7535(00)00014-X.
Guldenmund, F. W. (2007). The use of questionnaires in safety culture research—An evaluation. Safety Science, 45(6), 723–743. https://doi.org/10.1016/j.ssci.2007.04.006.
Guthrie, R., & Shayo, C. (2005). The Columbia disaster: Culture, communication & change. Journal of Cases on Information Technology, 7(3), 57–76. https://doi.org/10.4018/jcit.2005070104.
Hessels, A. J., & Wurmser, T. (2020). Relationship among safety culture, nursing care, and standard precautions adherence. American Journal of Infection Control, 48, 340–341. https://doi.org/10.1016/j.ajic.2019.11.008.
Horwitz, S. K. (2005). The compositional impact of team diversity on performance: Theoretical considerations. Human Resource Development Review, 4(2), 219–245. https://doi.org/10.1177/1534484305275847.
Janis, I. L. (1972). Victims of groupthink: A psychological study of foreign-policy decisions and fiascos. Boston: Houghton Mifflin.
Johnson, C. (2008). The rise and fall of Carly Fiorina: An ethical case study. Journal of Leadership & Organizational Studies, 15(2), 188–196. https://doi.org/10.1177/1548051808320983.
Jones, A., & Kelly, D. (2014). Deafening silence? Time to reconsider whether organisations are silent or deaf when things go wrong. BMJ Quality & Safety, 23(9), 709–713. https://doi.org/10.1136/bmjqs-2013-002718.
Jung, J. C., & Park, S. B. (2017). Case study: Volkswagen’s diesel emissions scandal. Thunderbird International Business Review, 59(1), 127–137. https://doi.org/10.1002/tie.21876.
Kaptein, M. (2008). Developing and testing a measure for the ethical culture of organizations: The corporate ethical virtues model. Journal of Organizational Behavior, 29(7), 923–947. https://doi.org/10.1002/job.520.
Kaptein, M. (2011). From inaction to external whistleblowing: The influence of the ethical culture of organizations on employee responses to observed wrongdoing. Journal of Business Ethics, 98, 513–530. https://doi.org/10.1007/s10551-010-0591-1.
Kee, D., Jun, G. T., Waterson, P., & Haslam, R. (2017). A systemic analysis of South Korea Sewol ferry accident—Striking a balance between learning and accountability. Applied Ergonomics, 59(Part B), 504–516. https://doi.org/10.1016/j.apergo.2016.07.014.
Keil, M., & Robey, D. (1999). Turning around troubled software projects: An exploratory study of the deescalation of commitment to failing courses of action. Journal of Management Information Systems, 15(4), 63–87. https://doi.org/10.1080/07421222.1999.11518222.
Kenny, K., Fotaki, M., & Vandekerckhove, W. (2020). Whistleblower subjectivities: Organization and passionate attachment. Organization Studies, 41(3), 323–343. https://doi.org/10.1177/0170840618814558.
Kim, Y. G., Kim, A. R., Kim, J. H., & Seong, P. H. (2018). Approach for safety culture evaluation under accident situation at NPPs; an exploratory study using case studies. Annals of Nuclear Energy, 121, 305–315. https://doi.org/10.1016/j.anucene.2018.07.028.
Kish-Gephart, J. J., Harrison, D. A., & Treviño, L. K. (2010). Bad apples, bad cases, and bad barrels: Meta-analytic evidence about sources of unethical decisions at work. Journal of Applied Psychology, 95(1), 1–31. https://doi.org/10.1037/a0017103.
Kohlberg, L. (1969). Stage and sequence: The cognitive-developmental approach to socialization. In D. A. Goslin (Ed.), Handbook of socialization theory and research (pp. 347–480). Chicago: Rand McNally.
Krippendorff, K. (2013). Content analysis: An introduction to its methodology (3rd ed.). Thousand Oaks: Sage Publications.
Kroeber, A. L., & Kluckhohn, C. (1952). Culture: A critical review of concepts and definitions. Cambridge: The Peabody Museum of American Archaeology and Ethnology.
Lederman, R., Kurnia, S., Peng, F., & Dreyfus, S. (2015). Tick a box, any box: A case study on the unintended consequences of system misuse in a hospital emergency department. Journal of Information Technology Teaching Cases, 5(2), 74–83. https://doi.org/10.1057/jittc.2015.13.
Lintern, S. (2019, November 21). Shrewsbury maternity scandal: Hundreds of families whose babies died or have been left with brain damage in hospital to be contacted by trust. The Independent. Retrieved from https://www.independent.co.uk/news/health/shrewsbury-maternity-scandal/shrewsbury-maternity-scandal-nhs-baby-deaths-disabled-families-contacted-a9212116.html.
MacLean, J. P., Campbell, G., & Seals, S. (2004). Analysis of the Columbia Shuttle disaster—Anatomy of a flawed investigation in a pathological organization. Journal of Scientific Exploration, 18(2), 187–215.
Mannion, R., & Davies, H. (2018). Understanding organisational culture for healthcare quality improvement. BMJ. https://doi.org/10.1136/bmj.k4907.
Martineau, J. T., Johnson, K. J., & Pauchant, T. C. (2017). The pluralist theory of ethics programs orientations and ideologies: An empirical study anchored in requisite variety. Journal of Business Ethics, 142, 791–815. https://doi.org/10.1007/s10551-016-3183-x.
Mason, R. O. (2004). Lessons in organizational ethics from the Columbia disaster: Can a culture be lethal? Organizational Dynamics, 33(2), 128–142. https://doi.org/10.1016/j.orgdyn.2004.01.002.
McCall, J. R., & Pruchnicki, S. (2017). Just culture: A case study of accountability relationship boundaries influence on safety in HIGH-consequence industries. Safety Science, 94, 143–151. https://doi.org/10.1016/j.ssci.2017.01.008.
Merenda, M. J., & Irwin, M. (2018). Case study: Volkswagen’s diesel emissions control scandal. Journal of Strategic Innovation and Sustainability, 13(1), 53–62.
Merton, R. K. (1936). The unanticipated consequences of purposive social action. American Sociological Review, 1(6), 894–904. https://doi.org/10.2307/2084615.
Merton, R. K. (1940). Bureaucratic structure and personality. Social Forces, 18(4), 560–568. https://doi.org/10.2307/2570634.
Merton, R. K. (1968). Social theory and social structure. New York: The Free Press.
Molina, A. D. (2018). A systems approach to managing organizational integrity risks: Lessons from the 2014 Veterans Affairs waitlist scandal. The American Review of Public Administration, 48(8), 872–885. https://doi.org/10.1177/0275074018755006.
MSCI, & S&P Global Market Intelligence. (2018, September 28). Global Industry Classification Standard (GICS): Definitions of GICS sectors effective close of September 28, 2018. Retrieved from https://www.msci.com/gics.
Neal, A., & Griffin, M. A. (2002). Safety climate and safety behaviour. Australian Journal of Management, 27(1_suppl), 67–75. https://doi.org/10.1177/031289620202701S08.
Nielsen, M. B., & Einarsen, S. (2012). Outcomes of exposure to workplace bullying: A meta-analytic review. Work & Stress, 26(4), 309–332. https://doi.org/10.1080/02678373.2012.734709.
O’Reilly, C. A., Caldwell, D. F., Chatman, J. A., & Doerr, B. (2014). The promise and problems of organizational culture: CEO personality, culture, and firm performance. Group & Organization Management, 39(6), 595–625. https://doi.org/10.1177/1059601114550713.
Ouchi, W. G., & Wilkins, A. L. (1985). Organizational culture. Annual Review of Sociology, 11, 457–483.
Paté-Cornell, M. E. (1993). Learning from the Piper Alpha accident: A postmortem analysis of technical and organizational factors. Risk Analysis, 13(2), 215–232. https://doi.org/10.1111/j.1539-6924.1993.tb01071.x.
Patrick, B., Plagens, G. K., Rollins, A., & Evans, E. (2018). The ethical implications of altering public sector accountability models: The case of the Atlanta cheating scandal. Public Performance & Management Review, 41(3), 544–571. https://doi.org/10.1080/15309576.2018.1438295.
Perrow, C. (1984). Normal accidents: Living with high-risk technologies. New York: Basic.
Perrow, C. (1999). Normal accidents: Living with high-risk technologies (2nd ed.). Princeton: Princeton University Press.
Petitta, L., Probst, T. M., & Barbaranelli, C. (2017). Safety culture, moral disengagement, and accident underreporting. Journal of Business Ethics, 141, 489–504. https://doi.org/10.1007/s10551-015-2694-1.
Pidgeon, N. (1998). Safety culture: Key theoretical issues. Work & Stress, 12(3), 202–216. https://doi.org/10.1080/02678379808256862.
Pidgeon, N., & O'Leary, M. (2000). Man-made disasters: Why technology and organizations (sometimes) fail. Safety Science, 34(1–3), 15–30. https://doi.org/10.1016/S0925-7535(00)00004-7.
Pidgeon, N., & O’Leary, M. (2017). Organizational safety culture: Implications for aviation practice. In N. Johnston, N. McDonald, & R. Fuller (Eds.), Aviation psychology in practice (2nd ed., pp. 21–43). Oxon: Routledge.
Quine, L. (1999). Workplace bullying in NHS community trust: Staff questionnaire survey. BMJ, 318, 228–232. https://doi.org/10.1136/bmj.318.7178.228.
Rafeld, H., Fritz-Morgenthal, S. G., & Posch, P. N. (2019). Whale watching on the trading floor: Unravelling collusive rogue trading in banks. Journal of Business Ethics. https://doi.org/10.1007/s10551-018-4096-7.
Randle, J. (2003). Bullying in the nursing profession. Journal of Advanced Nursing, 43(4), 395–401. https://doi.org/10.1046/j.1365-2648.2003.02728.x.
Rasmussen, J. (1997). Risk management in a dynamic society: A modelling problem. Safety Science, 27(2–3), 183–213. https://doi.org/10.1016/S0925-7535(97)00052-0.
Reader, T. W., & O’Connor, P. (2014). The Deepwater Horizon explosion: Non-technical skills, safety culture, and system complexity. Journal of Risk Research, 17(3), 405–424. https://doi.org/10.1080/13669877.2013.815652.
Reader, T. W., Gillespie, A., Hald, J., & Patterson, M. (2020). Unobtrusive indicators of culture for organizations: A systematic review. European Journal of Work and Organizational Psychology. https://doi.org/10.1080/1359432X.2020.1764536.
Reason, J. (1990). Human error. Cambridge: Cambridge University Press.
Reason, J. (1998). Achieving a safe culture: Theory and practice. Work & Stress, 12(3), 293–306. https://doi.org/10.1080/02678379808256868.
Reason, J. (2000). Safety paradoxes and safety culture. Injury Control and Safety Promotion, 7(1), 3–14. https://doi.org/10.1076/1566-0974(200003)7:1;1-V;FT003.
Reason, J. (2016). Organizational accidents revisited. Farnham: Ashgate Publishing Limited.
Rest, J. R. (1979). Development in judging moral issues. Minneapolis: University of Minnesota Press.
Robson, L. S., Stephenson, C. M., Schulte, P. A., Amick III, B. C., Irvin, E. L., Eggerth, D. E., … Grubb, P. L. (2012). A systematic review of the effectiveness of occupational health and safety training. Scandinavian Journal of Work, Environment & Health, 38(3), 193–208. https://doi.org/10.5271/sjweh.3259.
Schall, M. S. (1983). A communication-rules approach to organizational culture. Administrative Science Quarterly, 28(4), 557–581. https://doi.org/10.2307/2393009.
Schaubroeck, J. M., Hannah, S. T., Avolio, B. J., Kozlowski, S. W. J., Lord, R. G., Treviño, L. K., … Peng, A. C. (2012). Embedding ethical leadership within and across organization levels. Academy of Management Journal, 55(5), 1053–1078. https://doi.org/10.5465/amj.2011.0064.
Schein, E. H. (1984). Coming to a new awareness of organizational culture. Sloan Management Review, 25(2), 3–16.
Schein, E. H. (2010). Organizational culture and leadership (4th ed.). San Francisco: Jossey-Bass.
Schneider, B., Ehrhart, M. G., & Macey, W. H. (2013). Organizational climate and culture. Annual Review of Psychology, 64, 361–388. https://doi.org/10.1146/annurev-psych-113011-143809.
Schulte, P. A., Okun, A., Stephenson, C. M., Colligan, M., Ahlers, H., Gjessing, C., … Sweeney, M. H. (2003). Information dissemination and use: Critical components in occupational safety and health. American Journal of Industrial Medicine, 44(5), 515–531. https://doi.org/10.1002/ajim.10295.
Sims, R. R., & Brinkmann, J. (2002). Leaders as moral role models: The case of John Gutfreund at Salomon Brothers. Journal of Business Ethics, 35, 327–339. https://doi.org/10.1023/A:1013826126058.
Sims, R. R., & Brinkmann, J. (2003). Enron ethics (or: Culture matters more than codes). Journal of Business Ethics, 45, 243–256. https://doi.org/10.1023/A:1024194519384.
Singhapakdi, A., Vitell, S. J., Rallapalli, K. C., & Kraft, K. L. (1996). The perceived role of ethics and social responsibility: A scale development. Journal of Business Ethics, 15, 1131–1140. https://doi.org/10.1007/BF00412812.
Smircich, L. (1983). Concepts of culture and organizational analysis. Administrative Science Quarterly, 28(3), 339–358. https://doi.org/10.2307/2392246.
Strauch, B. (2015). Can we examine safety culture in accident investigations, or should we? Safety Science, 77, 102–111. https://doi.org/10.1016/j.ssci.2015.03.020.
Sweeney, B., Arnold, D., & Pierce, B. (2010). The impact of perceived ethical culture of the firm and demographic variables on auditors’ ethical evaluation and intention to act decisions. Journal of Business Ethics, 93, 531–551. https://doi.org/10.1007/s10551-009-0237-3.
Syed, M. (2019). Rebel ideas: The power of diverse thinking. London: John Murray.
Sørensen, J. B. (2002). The strength of corporate culture and the reliability of firm performance. Administrative Science Quarterly, 47(1), 70–91. https://doi.org/10.2307/3094891.
Tellis, W. M. (1997). Application of a case study methodology. The Qualitative Report, 3(3), 1–19.
Treviño, L. K. (1986). Ethical decision making in organizations: A person-situation interactionist model. The Academy of Management Review, 11(3), 601–617. https://doi.org/10.2307/258313.
Treviño, L. K., Butterfield, K. D., & McCabe, D. L. (1998). The ethical context in organizations: Influences on employee attitudes and behaviors. Business Ethics Quarterly, 8(3), 447–476. https://doi.org/10.2307/3857431.
Treviño, L. K., den Nieuwenboer, N. A., & Kish-Gephart, J. J. (2014). (Un)Ethical behavior in organizations. Annual Review of Psychology, 65, 635–660. https://doi.org/10.1146/annurev-psych-113011-143745.
Treviño, L. K., & Weaver, G. R. (2003). Managing ethics in business organizations: Social scientific perspectives. Stanford: Stanford University Press.
Turner, B. (1978). Man-made disasters. London: Wykeham Publications.
Turner, B., & Pidgeon, N. (1997). Man-made disasters (2nd ed.). London: Butterworth Heinemann.
Ulrich, D. L., Gillespie, G. L., Boesch, M. C., Bateman, K. M., & Grubb, P. L. (2017). Reflective responses following a role play simulation of nurse bullying. Nursing Education Perspectives, 38(4), 203–205. https://doi.org/10.1097/01.NEP.0000000000000144.
Uzun, H., Szewczyk, S. H., & Varma, R. (2004). Board composition and corporate fraud. Financial Analysts Journal, 60(3), 33–43.
Vandekerckhove, W., & Langenberg, S. (2012). Can we organize courage? Implications of Foucault’s parrhesia. Electronic Journal of Business Ethics and Organization Studies, 17(2), 35–44.
Vaughan, D. (1996). The Challenger launch decision: Risky technology, culture, and deviance at NASA. Chicago: The University of Chicago Press.
Vaughan, D. (1999). The dark side of organizations: Mistake, misconduct, and disaster. Annual Review of Sociology, 25, 271–305.
Victor, B., & Cullen, J. B. (1988). The organizational bases of ethical work climates. Administrative Science Quarterly, 33(1), 101–125. https://doi.org/10.2307/2392857.
Waddington, K. (2016). Rethinking gossip and scandal in healthcare organizations. Journal of Health Organization and Management, 30(6), 810–817. https://doi.org/10.1108/JHOM-03-2016-0053.
Weiskopf, R., & Tobias-Miersch, Y. (2016). Whistleblowing, parrhesia and the contestation of truth in the workplace. Organization Studies, 37(11), 1621–1640. https://doi.org/10.1177/0170840616655497.
Westrum, R. (2004). A typology of organisational cultures. BMJ Quality & Safety, 13(Suppl II), ii22–ii27. https://doi.org/10.1136/qshc.2003.009522.
Westrum, R. (2014). The study of information flow: A personal journey. Safety Science, 67, 58–63. https://doi.org/10.1016/j.ssci.2014.01.009.
Zaal, R. O. S., Jeurissen, R. J. M., & Groenland, E. A. G. (2019). Organizational architecture, ethical culture, and perceived unethical behavior towards customers: Evidence from wholesale banking. Journal of Business Ethics, 158, 825–848. https://doi.org/10.1007/s10551-017-3752-7.
Zohar, D. (2010). Thirty years of safety climate research: Reflections and future directions. Accident Analysis & Prevention, 42(5), 1517–1522. https://doi.org/10.1016/j.aap.2009.12.019.
Funding
This work was supported by the AKO Corporate Culture PhD Scholarship for EJH from the AKO Foundation, a charitable trust.
Ethics declarations
Conflict of interest
The authors declare that they have no conflicts of interest.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Hald, E.J., Gillespie, A. & Reader, T.W. Causal and Corrective Organisational Culture: A Systematic Review of Case Studies of Institutional Failure. J Bus Ethics 174, 457–483 (2021). https://doi.org/10.1007/s10551-020-04620-3
Keywords
- Institutional failure
- Organisational disaster
- Organisational culture
- Safety culture
- Ethical culture
- Case study research
- Listening