Design for the Value of Safety

  • Neelke Doorn
  • Sven Ove Hansson
Living reference work entry


Two major methods for achieving safety in engineering design are compared: safety engineering and probabilistic risk analysis. Safety engineering employs simple design principles or rules of thumb such as inherent safety, multiple barriers, and numerical safety margins to reduce the risk of accidents. Probabilistic risk analysis combines the probabilities of individual events in event chains leading to accidents in order to identify design elements in need of improvement and often also to optimize the use of resources. It is proposed that the two methodologies should be seen as complementary rather than as competitors. Probabilistic risk analysis is at an advantage when meaningful probability estimates are available for most of the major events that may contribute to an accident. Safety engineering principles are more suitable for dealing with uncertainties that defy quantification. In many design tasks, the combined use of both methodologies is preferable.


Keywords: Design · Risk · Probabilistic risk analysis · Safety factor · Uncertainty · Safety engineering


Enhancing safety and avoiding or mitigating risks have been a central concern of engineering for as long as there have been engineers. Already in the earliest engineering codes, it was established that engineers should hold paramount the safety of the general public (Davis 2001). Following the definition of design as an “activity in which certain functions are translated into a blueprint for an artifact, system, or service that can fulfill these functions” (Van de Poel and Royakkers 2011, p. 166), a distinction is usually made between functional and nonfunctional requirements, the latter referring to requirements that have to be met but that are not necessary for the artifact, system, or service to fulfill its intended function. Contrary to most of the other values discussed in this volume, the value of safety is almost always conceived of as a ubiquitous though often implicit nonfunctional requirement. Even if it is not stated explicitly in the design requirements, the need to make the design “safe” is almost always presupposed. The importance assigned to safety will differ, though, and there are different ways to take safety into account during design. In this chapter, we will discuss two main approaches to designing for the value of safety: safety engineering and probabilistic risk analysis. We first define the key terms, also in relation to the different engineering domains (section “Definitions”). After that, we present the two main approaches (section “Current Approaches”), followed by a discussion of the pros and cons of both approaches (section “Discussion of the Two Approaches”). The approaches are illustrated with two examples from civil engineering (section “Experiences and Examples”), followed by a critical evaluation, including some open issues (section “Critical Evaluation”). In the concluding section “Conclusions,” we summarize the findings.


Definitions

Technological risk and safety is an area in which the terminology is far from well established. The definition of key terms not only differs between disciplines and contexts (such as engineering, the natural sciences, the social sciences, and public discussion); it often differs between different branches and traditions of engineering as well. These differences stem largely from a lack of communication between different expert communities, but there is also a normative or ideological element in the terminological confusion. Different uses of “risk” and “safety” tend to correlate with different views on how society should cope with technological risk.


Risk

To start with the notion of risk, it is important to distinguish between risk and uncertainty. This distinction dates back to work in the early twentieth century by the economists J. M. Keynes and F. H. Knight (Keynes 1921; Knight 1935 [1921]). Knight pointed out that “[t]he term ‘risk’, as loosely used in everyday speech and in economic discussion, really covers two things which, functionally at least, in their causal relations to the phenomena of economic organization, are categorically different.” In some cases, “risk” means “a quantity susceptible of measurement,” while in other cases it means “something distinctly not of this character.” He proposed to reserve the term “uncertainty” for cases of the non-quantifiable type and the term “risk” for the quantifiable cases (Knight 1935 [1921], pp. 19–20).

This terminological reform has spread to other disciplines, including engineering, and it is now commonly assumed in most scientific and engineering contexts that “risk” refers to something that can be assigned a probability, whereas “uncertainty” may be difficult or impossible to quantify.

In engineering, risk is quantified in at least two different ways. The first refers to the probability of an unwanted event which may or may not occur (cf. the quote, “the risk of a melt-down during this reactor’s life-time is less than one in 10,000”). The second conception of risk refers to the statistical expectation value of unwanted events which may or may not occur. Expectation value means probability-weighted value. Hence, if for the construction of some large infrastructural project the probability of death is 0.005 % for each year worked by an individual and the total construction work requires 200 person-years of work, then the expected number of fatalities from this operation is 200 × 0.00005 = 0.01. The risk of fatalities in this operation can then be said to be 0.01 deaths. Expectation values have the important property of being additive. Suppose that a certain operation is associated with a 1 % probability of an accident that will kill five persons and also with a 2 % probability of another type of accident that will kill one person. Then the total expectation value is 0.01 × 5 + 0.02 × 1 = 0.07 deaths. In similar fashion, the expected number of deaths from a hydraulic dam is equal to the sum of the expectation values for each of the various types of accidents that can occur in or at the dam.
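
The expectation-value arithmetic above can be reproduced in a few lines of code (the figures are those used in the text):

```python
# Expected fatalities from the infrastructure project described above.
p_death_per_person_year = 0.00005    # 0.005 % annual probability of death
person_years = 200                   # total construction work required
expected_fatalities = person_years * p_death_per_person_year   # 0.01 deaths

# Additivity of expectation values: two independent accident types,
# a 1 % chance of 5 deaths plus a 2 % chance of 1 death.
total = 0.01 * 5 + 0.02 * 1          # 0.07 deaths
```

Because expectation values are additive, the overall figure for a complex system such as a hydraulic dam is simply the sum over its possible accident types.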

It should be noted, however, that in everyday language, “risk” is often used without reference to probability. Furthermore, although uncertainty and risk are commonly defined as two mutually exclusive concepts, it is common practice to use “uncertainty” in lieu of “risk or uncertainty.” Then “uncertainty” is used as a general term for lack of knowledge (whether probabilistic or not), and risk is a special form of uncertainty, characterized by the availability of a meaningful probability estimate. In what follows, we will adhere to this practice and use “uncertainty” in the broad sense that covers (probabilizable) risk.

Even in cases when the plausibility of a danger can be meaningfully summarized in a probability estimate, there may yet remain significant uncertainties about the accuracy of that estimate. In fact, only very rarely are probabilities known with certainty. Even if we have extensive knowledge of the design of a nuclear power plant, for example, we do not know the exact probability of failure of the plant. The probability that a tsunami would cause a meltdown of the Fukushima nuclear reactors in Japan, as it did in 2011, could not have been predicted with any accuracy beforehand. Therefore, even if a decision problem is treated as a decision “under risk,” this does not mean that the decision in question is made under conditions of completely known probabilities. Rather, it means that a choice has been made to simplify the description of this decision problem by treating it as a case of known probabilities. This is practically important in engineering design. Some of the probability estimates used in risk calculations are quite uncertain. Such uncertainty about probabilities should be taken into account when probabilistic analyses are used for decision-guiding purposes.


Safety

The concept of safety is sometimes used in an absolute, sometimes in a relative, sense. In order to illustrate the meaning of absolute safety, suppose that you buy a jacket that is promised to be of fireproof fabric. Later, it actually catches fire. Then you might argue, if you apply the absolute notion of safety, that you were not safe from fire in the first place. If the producer of the jacket tries to argue that you were in fact safe since the fabric was highly unlikely to catch fire, you would probably say that he was simply wrong. In some contexts, therefore, “I am safe against the unwanted event X” is taken to mean that there is no risk at all that X will happen.

Technical safety has often been defined as absolute safety. For example, in research on aviation safety, it has been claimed that “Safety is by definition the absence of accidents” (Tench 1985). However, in practice absolute safety is seldom achievable. For most purposes, it is therefore not a very useful concept. Indeed, the US Supreme Court has supported a non-absolute interpretation, stating that “safe is not the equivalent of ‘risk free’” (Miller 1988, p. 54). With this interpretation, a statement such as “this building is fire-safe” can be read as a short form of the more precise statement: “The safety of this building with regard to fire is as high as can be expected in terms of reasonable costs of preventive actions.” In this vein, the US Department of Defense has stated that safety is “the conservation of human life and its effectiveness, and the prevention of damage to items, consistent with mission requirements” (Miller 1988, p. 54).

Usage of the term “safe” (and derivatives such as “safety”) in technical applications, e.g., in aviation safety, highway safety, etc., vacillates between the absolute concept (“safety means no harm”) and a relative concept that only requires the risk reduction that is considered feasible and reasonable. It is not possible to eliminate either of these usages, but it is possible to keep track of them and avoid confusing them with each other.

Safety is usually taken to be the inverse of risk: when the risk is high, safety is low, and conversely. This may seem self-evident, but the relationship between the two concepts is complicated by the fact that, as we saw in the subsection “Risk,” the concept of risk is in itself far from clear. It has been argued that if risk is taken in the technical sense as a statistical expectation value (expected harm), then safety cannot be the antonym of risk, since other factors such as uncertainty have to be taken into account when assessing safety (Möller et al. 2006). With a broader definition of risk, an antonymic relationship between the two concepts may be more plausible.

Terminological Differences Between Engineering Domains

Safety engineering has “separate origins” in many different engineering disciplines. Due in part to lack of communication between these disciplines, in part to differences in their technical tasks and social conditions, these disciplines have developed different approaches to safety. This is also reflected in their terminological usages. To illustrate these differences, let us compare three widely different engineering disciplines: nuclear engineering, civil engineering, and software engineering.

Beginning with the concept of risk, nuclear engineering represents an extreme case among the engineering disciplines. Nuclear engineers have pioneered the use of probabilistic analysis in risk management. In their industry, a “risk-based approach” means the use of probabilities to characterize hazards and prioritize their abatement. However, non-probabilistic thinking also has a place in the nuclear industry. In so-called “deterministic” analysis of nuclear risks, the focus is on what can possibly happen under unfavorable circumstances, regardless of how probable it is.

Software engineering represents the other extreme. Software developers spend much of their work time trying to avoid various undesirable events such as errors due to unusual or unforeseen inputs, intrusion by hackers, and operator mistakes caused by confusing human-machine interfaces. However, only seldom do they calculate or estimate the probabilities of these possible errors. In the vast majority of cases, they treat risks in a way that nuclear engineers would call “deterministic.” Thus, when talking about a “risk,” they refer to an undesired event or event chain rather than a probability or expectation value.

In civil engineering, the standard approach is non-probabilistic. The tolerance of structural failures, such as a collapsing house, bridge, or dam, is very low. Design and construction work takes place under the assumption that such events should be avoided at almost any price. Traditionally, numerical probabilities of such failures have not been calculated as part of routine construction work. Instead, less complicated rules of thumb, including numerical safety factors, have been used to obtain the desired low probabilities. However, recently probabilistic analysis has increasingly been applied, particularly in large constructions.

The concept of safety also has different connotations in the three areas. In the nuclear industry, “safety” usually refers to the avoidance of large accidents that would pose a risk to both workers and the public. In building and construction, “safety” usually refers to worker safety. This difference is quite appropriate; large accidents that put the public at risk are a much more serious problem in nuclear engineering than in most civil engineering projects, whereas the building industry in most countries has a much worse record of workplace accidents than the nuclear industry. In software engineering, safety is less often referred to; instead “security” is the common antonym of “risk” in this area.

Current Approaches

In this section we discuss two main approaches to design for safety, viz., safety engineering and probabilistic risk analysis. Safety engineering is the older of the two, possibly going as far back as the earliest use of technological artifacts. The second approach, probabilistic risk analysis, is of more recent date and has been developed from the late 1960s onwards.

Safety Engineering

With the development of technological science, safety engineering has gained recognition as an academic discipline, and various attempts have been made to systematize its practices. Since the discussion of safety engineering is fragmented over different disciplines, there is no unified way to do this. However, the following three principles of safety engineering summarize much of its fundamental ideas:
  1. Inherently safe design. A recommended first step in safety engineering is to minimize the inherent dangers in the process as far as possible. This means that potential hazards are excluded rather than just enclosed or otherwise coped with. Hence, dangerous substances or reactions are replaced by less dangerous ones, and this is preferred to using the dangerous substances in an encapsulated process. Fireproof materials are used instead of flammable ones, and this is considered superior to using flammable materials but keeping temperatures low. For similar reasons, performing a reaction at low temperature and pressure is considered superior to performing it at high temperature and pressure in a vessel constructed for these conditions (Hansson 2010).

  2. Safety factors. Constructions should be strong enough to resist loads and disturbances exceeding those that are intended. A common way to obtain such safety reserves is to employ explicitly chosen numerical safety factors. Hence, if a safety factor of 2 is employed when building a bridge, then the bridge is calculated to resist twice the maximal load to which it will in practice be exposed (Clausen et al. 2006).

  3. Multiple independent safety barriers. Safety barriers are arranged in chains. The aim is to make each barrier independent of its predecessors, so that if the first fails, then the second is still intact, etc. Typically the first barriers are measures to prevent an accident, after which follow barriers that limit its consequences, and finally rescue services as the last resort.

In the remainder of this subsection, we will focus on safety factors, one of the most widely applied principles of safety engineering. It is generally agreed in the literature on civil engineering that safety factors are intended to compensate for five major sources of failure:
  (1) Higher loads than those foreseen

  (2) Worse properties of the material than foreseen

  (3) Imperfect theory of the failure mechanism in question

  (4) Possibly unknown failure mechanisms

  (5) Human error (e.g., in design) (Knoll 1976; Moses 1997)


The first two of these can in general be classified as variabilities, that is, they refer to the variability of empirical indicators of the propensity for failure. They are therefore accessible to probabilistic assessment (although these assessments may be more or less uncertain). In the technical terminology that distinguishes between risk and uncertainty, they can be subsumed under the category of risk. The last three failure types refer to eventualities that are difficult or impossible to represent in probabilistic terms and therefore belong to the category of (non-probabilizable) uncertainty.

In order to provide adequate protection, a system of safety factors will have to consider all the integrity-threatening mechanisms that can occur. For instance, one safety factor may be required for resistance to plastic deformation and another one for fatigue resistance. Different loading situations may also have to be taken into account, such as permanent load (“dead load,” i.e., the weight of the building) and variable load (“live load,” i.e., the loads produced by the use and occupancy of the building), the safety factor of the latter being higher because of its higher variability. Similarly, components with widely varying material properties (e.g., brittle materials such as glass) are subject to higher safety factors than components of less variable materials (e.g., steel and other metallic materials). Geographic conditions may be taken into account by applying additional wind and earthquake factors. Design criteria employing safety factors can be found in numerous building codes and other engineering standards.
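
As a minimal sketch of how such partial safety factors enter a design check, consider the following; the factor values 1.2 and 1.6 are illustrative only and are not taken from any particular building code:

```python
# Illustrative design check: factored resistance must exceed factored demand.
# The higher factor on live load reflects its higher variability.
def design_check(resistance, dead_load, live_load,
                 gamma_dead=1.2, gamma_live=1.6):
    """Return True if the member's resistance exceeds the factored load demand."""
    demand = gamma_dead * dead_load + gamma_live * live_load
    return resistance >= demand

# A hypothetical beam: 500 kN capacity, 200 kN dead load, 150 kN live load.
ok = design_check(500.0, 200.0, 150.0)   # demand = 1.2*200 + 1.6*150 = 480 kN
print(ok)  # True
```

The point of the structure is that each source of variability gets its own factor, so a more variable load or material simply receives a larger multiplier.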

Probabilistic Risk Analysis

In the late 1960s, rapidly growing public opposition to new technologies gave rise to a new market for applied science: a market for experts on risks and on the public’s attitudes to risks. The demand came mostly from companies and institutions associated with the technologies that had been subject to public opposition. It was met by professionals and academics with training in the natural, behavioral, and social sciences. Most of their undertakings focused on chemicals and on nuclear technology, the same sources of risk that public opposition had targeted. The new field was institutionalized as the discipline of probabilistic risk analysis, with professional societies, research institutes, and journals of its own. From the beginning, calculations of probabilities had a central role in the new discipline. In engineering, the terms probabilistic risk analysis and probabilistic risk assessment (often abbreviated to PRA) are mostly used interchangeably. We will use the term probabilistic risk analysis to refer to the approach of using probabilistic estimates and PRA to refer to the probabilistic evaluation of a particular design or artifact. The term probabilistic design will be used to refer to design methods that are based on probabilistic risk analysis.

Probabilistic risk analysis has largely been developed in the nuclear industry. Although the engineers designing nuclear reactors in the 1950s and 1960s aimed at keeping the probability of accidents very low, they lacked the means to estimate these probabilities. In the late 1960s and early 1970s, a methodology was developed to make such estimates. The first comprehensive PRA of a nuclear reactor was the Rasmussen report (WASH-1400), published in 1975 (Rasmussen 1975; Michal 2000). Its basic methodology is still used, with various improvements, both in the nuclear industry and in an increasing number of other industries, as a means to calculate and efficiently reduce the probability of accidents.

Key concepts in probabilistic risk analysis are failure mode and effect analysis (FMEA) and fault trees. FMEA is a systematic approach for identifying potential failure modes within a system. The potential modes of failure of each component of the system are investigated, and so are the ways in which these failures can propagate through the system (Dhillon 1997). A failure mode can be any error or defect in the design, use, or maintenance of a component or process in the system. In effect analysis the consequences of those failures are investigated. The next step is to identify for each of these failure modes the accident sequences that may lead to its occurrence. Typically, several such sequences will be identified for each event. Each sequence is a chain containing events such as mechanical equipment failure, software failure, lacking or faulty maintenance, mistakes in the control room, etc. Next, the probability of each of these accident sequences is calculated, based on the probability of each event in the sequence. Some of these probabilities can be based on empirical evidence, but others have to be based on expert estimates. The final step in a probabilistic risk analysis consists in combining all this information into an overall assessment. It is common to combine the different failure modes into a so-called fault tree. Fault trees depict the logical relations between the events leading to the ultimate failure. Usually, these relations are limited to AND-gates and OR-gates, indicating whether two events are both necessary for failure (AND-gate) or each of them separately leads to failure (OR-gate) (Ale 2009).

Figure 1 shows a (simplified) fault tree for a train accident. A train accident occurs if any of the following three events occurs: fire, collision, or derailment (OR-gate). A collision with a car occurs if both the signals and the braking fail (AND-gate). Braking can fail either due to technical failure or due to human error (OR-gate).
Fig. 1 Fault tree of a train accident
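
Assuming, purely for illustration, that all basic events are independent and have the made-up probabilities below, the gate logic of Fig. 1 can be evaluated as follows:

```python
# Sketch of the Fig. 1 fault tree; event probabilities are invented.
# Assumes all basic events are statistically independent.

def p_or(*ps):
    """Probability that at least one of several independent events occurs."""
    q = 1.0
    for p in ps:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(*ps):
    """Probability that all of several independent events occur."""
    prod = 1.0
    for p in ps:
        prod *= p
    return prod

p_fire = 1e-4
p_derailment = 2e-4
p_signal_failure = 1e-3
p_brake_technical = 5e-4
p_brake_human = 2e-3

p_braking_fails = p_or(p_brake_technical, p_brake_human)   # OR-gate
p_collision = p_and(p_signal_failure, p_braking_fails)     # AND-gate
p_accident = p_or(p_fire, p_collision, p_derailment)       # top OR-gate
print(f"P(train accident) = {p_accident:.6g}")
```

Note how the AND-gate multiplies probabilities while the OR-gate combines them; in a real PRA, the basic-event probabilities would come from empirical failure data or expert estimates, and dependencies between events would have to be modeled explicitly.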

In the early days of probabilistic risk analysis, the overall assessment often included a total probability of a major accident and/or a statistical expectation value for the number of deaths per year resulting from accidents in the plant. Today, most PRA specialists in the nuclear industry consider such overall calculations to be too uncertain. Instead, their focus is on using analyses of accident sequences to identify weaknesses in the safety system. According to one leading expert, the final step in a PRA

… is to rank the accident sequences according to their probability of occurrence. This is done because risk must be managed; knowing the major contributors to each undesirable event that was defined in the first step is a major element of risk management. Also ranked are the SSCs – systems, structures, and components – according to their contribution to the undesirable event. (Michal 2000, pp. 27–28)

The same basic methodology can be used in civil engineering. In the early 2000s, the Joint Committee on Structural Safety (JCSS) developed a Probabilistic Model Code for full probabilistic design. The code was intended as the operational part of national and transnational building codes that allow for probabilistic design but do not give any detailed guidance (Vrouwenvelder 2002). Contrary to nuclear engineering, civil engineering uses probabilistic risk analysis more to dimension individual components than to identify and analyze full accident sequences (JCSS 2001; Melchers 2002). This difference depends in part on the complicated redistribution of the load effects after each component failure, which makes it difficult to predict the behavior of the system as a whole (Ditlevsen and Madsen 2007 [1996]). However, attempts are made to broaden the scope of probabilistic risk analysis to infrastructure systems as a whole rather than single construction elements in such systems (Blockley and Godfrey 2000; Melchers 2007).

Discussion of the Two Approaches

In the literature, several arguments have been given for and against the replacement of traditional safety engineering by probabilistic risk analysis.

Arguments for Using Probabilistic Risk Analysis in Design

Two major arguments have been proposed in support of design methods based on probabilistic risk analysis, viz., economic optimization and fitness for policy making.

Economic Optimization

The first, and probably most important, argument in favor of probabilistic methods is that their output can be used as an input into economic optimization. Some argue that economic optimization of risk management measures is in fact the main objective of probabilistic risk analysis (Guikema and Paté-Cornell 2002). Traditional approaches in safety engineering, such as safety factors, provide regulatory bounds that may sometimes be overly conservative (Chapman et al. 1998). There is, for instance, no way to translate the difference between using the safety factor 2.0 and the safety factor 3.0 in the design of a bridge into a quantifiable effect on safety. Without a quantifiable effect (such as reduction in the expected number of fatalities), it is impossible to calculate the marginal cost of risk reduction, and therefore economic optimization of design is not possible. In contrast, a PRA that provides accident probabilities as outcomes makes it possible to calculate the expected gains from a safer design. This is what is needed for an optimization of the trade-off between risks and benefits (Paté-Cornell 1996; Moses 1997).
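
The kind of optimization the argument envisages can be sketched with purely hypothetical numbers: two candidate designs are compared by expected total cost, assuming the PRA failure probabilities are well calibrated and that expected fatalities are monetized (the value used below is invented for illustration):

```python
# Purely hypothetical figures: choosing between designs by expected total cost.
VALUE_PER_FATALITY = 10e6   # monetized value of a statistical life (illustrative)

designs = {
    "baseline":   {"cost": 8e6,  "p_failure": 1e-4, "fatalities": 20},
    "reinforced": {"cost": 11e6, "p_failure": 2e-5, "fatalities": 20},
}

def expected_total_cost(d):
    """Construction cost plus probability-weighted (expected) harm."""
    expected_harm = d["p_failure"] * d["fatalities"] * VALUE_PER_FATALITY
    return d["cost"] + expected_harm

best = min(designs, key=lambda name: expected_total_cost(designs[name]))
print(best)  # baseline
```

The point is only that, once failure probabilities are trusted, the marginal value of extra safety becomes calculable and comparable with its cost; a bare safety factor of 2.0 versus 3.0 permits no such comparison.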

Such optimization may involve trade-offs against other factors than money. A risk can, for instance, be weighed against other risks that countermeasures against the first risk bring about (Graham and Wiener 1995). It is also common for overdesign to have a price in terms of excess usage of energy and other natural resources. Accident probabilities obtained in a PRA can be used as inputs into a risk-benefit analysis (RBA) or cost-benefit analysis (CBA) in which different types of advantages and disadvantages are taken into account (Rackwitz 2004).

The major problem with this argument for probabilistic risk analysis is that the outputs of PRAs are not always accurate enough to be used as inputs into economic analysis. Some relatively small and standardized infrastructure projects have effects that can be described fairly accurately in probabilistic terms. This applies, for instance, to some safety measures in road traffic such as central barriers on highways (Mak et al. 1998) or pedestrian crosswalks at intersections (Zegeer et al. 2006), for which the expected number of saved lives can be estimated with reasonable accuracy and weighed against the economic costs. In larger and more complex projects, the probabilistic quantifications of the effects of safety measures are generally not considered accurate enough to be used as direct inputs into economic analysis. For example, the safety of a gravity dam, a hydraulic structure that is supposed to be stable by its own weight, is largely dependent on seismic activity and on how the structure responds to it. Both can at most be quantified roughly, making it difficult to provide accurate accident probabilities (Abbas and Manohar 2002). In cases like this, it is therefore recommended to develop a robust structural design rather than an economically optimized one (Takewaki 2005). Similar problems are faced in the design of other large infrastructure projects, such as flood defense structures and offshore facilities. In summary, the argument that probabilistic risk analysis provides means for economic optimization is not valid for probabilistic risk analysis in general but only for those probabilistic risk analyses that provide probability estimates that are well calibrated with actual frequencies.

Fitness for Policy Making

A second advantage of probabilistic approaches concerns the organizational separation between risk assessment and risk management. In the 1970s the unclear role of scientists taking part in risk policy decisions led to increasing awareness of the distinction between scientific assessments and policy decisions based on these assessments. This resulted in what is now the standard view on the risk decision process, according to which its scientific and policy-making parts should be strictly distinguished and separated. This view was expressed in a 1983 report by the US National Academy of Sciences (National Research Council 1983). The decision procedure is divided into two distinct parts to be performed consecutively. The first of these, commonly called risk assessment, is a scientific undertaking. It consists of collecting and assessing the relevant information and using it to characterize the nature and magnitude of the risk. The second procedure is called risk management. Contrary to risk assessment, it is not a scientific undertaking. Its starting point is the outcome of risk assessment, which it combines with economic and technological information pertaining to various ways of reducing or eliminating the risk and also with political and social information. Its outcome is a decision on what measures – if any – should be taken to reduce the risk. In order to protect risk assessments from being manipulated to meet predetermined policy objectives, it was proposed to separate risk assessment organizationally from risk management. Compared to the safety engineering approach, probabilistic risk analysis seems more compatible with this organizational division between risk assessment and risk management. The selection of safety margins and other engineering measures to enhance safety is a value-dependent exercise, but it tends to be difficult to separate from scientific and technological considerations. 
In contrast, a PRA can be performed on the basis of scientific information alone. It is then up to the decision makers to set the acceptable probability of failure.

However, in most fields of engineering, there is in practice no separation between risk assessment and risk management. Technical standards, exposure limits, etc. are typically set by groups of experts who are entrusted both with assessing the scientific data and with proposing regulation (Hansson 1998). In structural engineering, for example, the establishment of the European construction standards (the Eurocodes) was characterized by organizational integration of risk assessment and risk management (Clausen and Hansson 2007). Similarly, in hydraulic engineering, Vrijling et al. (1998) developed a unified framework for assessing safety in terms of acceptable individual and societal risk levels, which they derived from accident statistics and a postulated value of human life. Although the authors admit that the final judgment is political, the proposed approach merges risk assessment and management into one decision procedure.

These examples illustrate how the notions of probability and probabilistic design enter the domain of risk management where decisions on the acceptance of risks are made. Although probabilistic risk analysis in principle facilitates a clear distinction between risk assessment and risk management, the acceptable risk levels in a PRA are often decided in the community of safety experts who make the assessment as well. Hence, the actual organizational structure does not support or encourage a separation between risk assessment and risk management. This is a severe limitation on the practical applicability of the proclaimed advantage of probabilistic risk analysis that it is well suited for making this separation.

Arguments for the Safety Engineering Approach

In this section, we discuss the four arguments that we have found in the literature in favor of safety engineering approaches such as safety factors rather than probabilistic risk assessment. These arguments refer to computational costs, simplicity, residual uncertainties, and security.

Computational Costs

Probabilistic models promise to provide accurate estimates of failure probabilities that depend on many different input variables. However, the costs of data acquisition and computation tend to increase rapidly with the number of input variables. In practice, this leads either to an unworkably long analysis or to simplifications of the model that unavoidably reduce its accuracy. Especially when the additional time also delays the design and engineering process itself, the simplicity of the safety factor approach may be an advantage, also from a cost-benefit point of view. In the building industry, the efficiency of the building process is often more important for cost-efficiency than the amount of material used. Hence, reducing the construction time may be economically preferable to saving construction material.
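The cost escalation described above can be made concrete with a small sketch (our illustration, not from the chapter): an exhaustive probabilistic analysis must consider the joint states of its input variables, and the number of such states grows exponentially with the number of variables.

```python
# Illustrative sketch: if each input variable is discretized into a few
# levels, the number of joint states an exhaustive analysis must evaluate
# grows exponentially with the number of variables. The choice of three
# levels per variable is an arbitrary assumption for illustration.
def joint_states(n_variables, levels_per_variable=3):
    """Number of joint states in an exhaustive enumeration."""
    return levels_per_variable ** n_variables

for n in (5, 10, 20):
    print(n, joint_states(n))
# With 20 variables there are already about 3.5 billion joint states,
# which is why analysts either accept long computation times or simplify
# the model at the cost of accuracy.
```

In practice, reliability methods avoid full enumeration (e.g., by sampling or approximation), but that is precisely the simplification-versus-accuracy trade-off the paragraph above refers to.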

Simplicity
The simplicity of the safety engineering approach can make mistakes less likely. The importance of simplicity in safety work is known from chemical plant design. Plants with inherently safer technologies tend to be simpler in design, easier to operate, and more error tolerant (Overton and King 2006). Similarly, simpler calculation or design methods may be preferable to complex ones since they reduce the likelihood of mistakes in the calculations and, hence, the likelihood of mistakes in the construction itself.

Residual Uncertainties

One of the disadvantages of probabilistic design methods is that they can take potential adverse effects into account only to the extent that their probabilities can be quantified (Knoll 1976; Clausen et al. 2006; Hansson 2009a). Although attempts are made to quantify as many elements as possible, including human errors, this can at most be done approximately. In practice, these difficulties may lead to a one-sided focus on those dangers that can be assigned meaningful probability estimates. Probabilistic approaches tend to neglect potential events for which probabilities cannot be obtained (Knoll 1976; Hansson 1989). Safety factors, by contrast, are intended to compensate also for uncertainties that cannot in practice be quantified, such as the possibility that there may be unknown failure mechanisms or errors in one’s own calculations. It is a rational and not uncommon practice to set a higher safety factor to compensate for uncertainty. This is done routinely in toxicology (Santillo et al. 1998; Fairbrother 2002), and it seems sensible to do so in other fields as well.
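The toxicological practice mentioned above can be sketched as follows. In that field, an overall safety (uncertainty) factor is commonly built as a product of sub-factors, each compensating for one source of uncertainty; the factor values of 10 and the NOAEL below are conventional or hypothetical illustrations, not values from this chapter.

```python
# Hedged sketch of compounding uncertainty factors: each sub-factor
# compensates for one source of uncertainty, and the overall factor is
# their product. All numerical values are illustrative defaults.
def overall_factor(sub_factors):
    product = 1.0
    for f in sub_factors:
        product *= f
    return product

noael = 50.0                           # hypothetical NOAEL, mg/kg/day
factor = overall_factor([10.0, 10.0])  # e.g., interspecies x intraspecies
acceptable_dose = noael / factor
print(factor, acceptable_dose)         # 100.0 0.5
```

The larger the residual uncertainty, the more sub-factors (or larger sub-factors) are applied, which is exactly the rationale for raising a safety factor under uncertainty.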

The use of safety factors is not the only method in safety engineering that takes uncertainties into account. The same applies to the other safety principles mentioned in section “Safety Engineering,” namely, inherent safety and multiple safety barriers. These and other safety engineering principles introduce some degree of redundancy in the system, which is often an efficient way to protect also against dangers for which meaningful probability estimates are unavailable. Such “extra” safety may not be defensible from a cost-benefit perspective, but it may nevertheless be justified from the perspective of protection against uncertainties (e.g., uncertainties about the probabilities of known risks and about unknown failure modes). As an example, suppose that a shipbuilder comes up with a convincing plan for an unsinkable boat. A PRA shows that the probability of the ship sinking is incredibly low and that the expected cost per life saved by lifeboats would be exceptionally high. There are nevertheless several reasons why the ship should still have lifeboats: the calculations may possibly be wrong, some failure mechanism may have been missed, or the ship may be exposed to some unknown danger. Although the PRA indicates that such measures are inefficient, we cannot trust the PRA to be certain enough to justify a decision to exclude lifeboats from the design. Similar arguments can be used, for instance, for introducing an extra safety barrier in a nuclear reactor, although a PRA indicates that it is not necessary. This is, of course, not an argument against performing PRAs but an argument against treating their outcomes as the last word on what safety requires.

Security and Vulnerability

A fourth argument in favor of the safety factor approach is related to security threats. So far, we have focused on safety, that is, protection against unintended harm. However, the attacks on the New York Twin Towers on September 11, 2001, showed that not only “acts of nature” threaten the integrity of engineering structures. We also need protection against another type of threat, namely, intended harm. This distinction is often expressed with the terms safety (against unintended harm) and security (against intended harm). Golany et al. (2009) refer to the former as probabilistic risk and the latter as strategic risk (where “strategic” refers to environments in which intentional actions are taken; it should be noted that Golany et al. do not discuss the epistemic uncertainties that may also be present in strategic situations). An important difference is that in the latter case, there is an adversary who is capable of intelligent behavior and of adapting his strategy to achieve his objectives. This has several implications.

First, it is in practice seldom meaningful to try to capture the likelihood of intended harms in probabilistic terms. Instead of assigning probabilities to various acts by a terrorist, it is better to try to figure out what actions would best achieve the terrorist’s objectives. In such an analysis, the terrorist’s responses to one’s own preparative defensive actions will have to be taken into account (Parnell et al. 2008). Game theory (that operates without probabilities) is better suited than traditional probability-based analyses to guide prevention aimed at reducing vulnerability to terrorist attacks and most other intentional threats (Hansson 2010).
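The game-theoretic reasoning just described can be illustrated with a minimal minimax sketch (our illustration; all posture names and loss values are hypothetical): instead of weighting attacks by assumed probabilities, the defender evaluates each defensive posture by the loss an optimally responding attacker could inflict, and picks the posture with the smallest worst case.

```python
# Minimax sketch for defense against an intelligent adversary: the
# attacker is assumed to respond optimally to the chosen defense, so no
# attack probabilities are needed. Postures and losses are hypothetical.
def worst_case_loss(residual_losses):
    """Loss when the attacker hits the most damaging remaining target."""
    return max(residual_losses)

def best_defense(postures):
    """Posture whose attacker-optimal (worst-case) loss is smallest."""
    return min(postures, key=lambda name: worst_case_loss(postures[name]))

# Hypothetical residual losses per target under two defensive postures:
postures = {
    "harden_bridge": [40, 90, 30],  # attacker would pick target 2: loss 90
    "harden_plant":  [70, 50, 60],  # attacker would pick target 1: loss 70
}
print(best_defense(postures))       # harden_plant
```

The point of the sketch is structural: the comparison runs entirely on losses under optimal adversary responses, with probabilities playing no role.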

Secondly, as noted by Golany et al. (2009), whereas the criterion of effectiveness is adequate in safety work, in security work it should be replaced by the criterion of vulnerability. Vulnerability can be understood as the potential for loss arising from a weakness that an adversary can exploit. The adversary’s aim is related to this loss and can in many cases be described as maximizing it (e.g., by targeting critical infrastructure). The optimal protection against terrorist attacks thus involves strategies to reduce the potential for loss. Probabilities do not have a central role in deliberations on how best to achieve such a reduction.

Sarewitz et al. (2003) add force to this line of argument by pointing out that vulnerability reduction can be considered a human rights issue, which may in some situations give it priority over economic optimization. Since modern society has an obligation to ensure that all citizens are provided a basic level of protection and that their fundamental rights are respected, economic arguments should not always be decisive in resource allocation. The authors give the example of the Americans with Disabilities Act (ADA), which requires that all public buses be provided with wheelchair access devices. This requirement was first opposed on economic grounds. Cost-benefit analyses showed that providing the buses with wheelchair access devices would be more expensive than providing, at public expense, taxi services for people with disabilities. The measure was nevertheless introduced, in order to realize the right of people with disabilities to be fully integrated into society. The right to protection against violence can be seen as a similar fundamental right, to be enjoyed by all persons. Such a right can justify protection even when a PRA or a CBA indicates that the resources would be “better” used elsewhere.

Experiences and Examples

One discipline in which both approaches to design for safety are used is civil engineering, in particular hydraulic engineering. Civil engineering has a long history of applying safety engineering principles, in particular safety factors, which have much of their origin in this domain of engineering (Randall 1976). Probabilistic risk analysis has a small but increasing role in civil engineering, most notably in the form of design criteria based on probabilistic information.

An example of how the safety factor approach is being used in hydraulic engineering is the geotechnical design of river dykes. One of the potential failure mechanisms of a slope is the occurrence of a slip circle, i.e., a rotational slide along a generally curved surface (Fig. 2).
Fig. 2

Slope instability of the inner slope (Source: TAW 2001; Kanning and Van Gelder 2008)

The classic approach to determining the stability of a slope against sliding is to calculate, for all possible sliding circles, the moment caused by the driving or destabilizing forces (i.e., the moment caused by the vertical arrows in Fig. 3) and the moment caused by the resisting or stabilizing forces (i.e., the moment caused by the leftward-directed arrows in Fig. 3). A slope is considered stable (or safe) if the ratio of the resisting moment to the driving moment is larger than a predefined safety factor. This safety factor is soil dependent. If the ratio is lower than the required safety factor, a flatter slope should be chosen and the calculation repeated. All engineers working with geotechnical materials are familiar with this iterative process of determining the maximum stable slope (Terzaghi et al. 1996).
Fig. 3

Potential sliding circle for geotechnical structure
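The iterative safety-factor check described above can be sketched as follows. This is a schematic illustration with hypothetical moment values and an assumed required safety factor; in a real analysis, the moments are computed from soil properties and geometry for every candidate slip circle.

```python
# Schematic sketch of the iterative slope-stability check: a slope is
# accepted when the ratio of resisting to driving moment meets the
# required (soil-dependent) safety factor; otherwise a flatter slope is
# tried. All numbers below are hypothetical.
def is_stable(resisting_moment, driving_moment, required_sf):
    return resisting_moment / driving_moment >= required_sf

def flatten_until_stable(slopes, moments, required_sf=1.3):
    """Try successively flatter slopes (steepest first) until stable."""
    for slope in slopes:
        m_resist, m_drive = moments[slope]
        if is_stable(m_resist, m_drive, required_sf):
            return slope
    return None  # no candidate slope satisfies the safety factor

# Hypothetical governing (resisting, driving) moments in kNm per slope
# angle in degrees, for the critical sliding circle of each slope:
moments = {30: (900.0, 800.0), 25: (950.0, 750.0), 20: (1000.0, 700.0)}
print(flatten_until_stable([30, 25, 20], moments))  # 20
```

Here the 30° and 25° slopes fail the check (ratios 1.13 and 1.27 against a required 1.3), so the design iterates down to the 20° slope.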

The landmark example of probabilistic design in hydraulic engineering is the design of the Dutch Eastern Scheldt storm surge barrier in the 1970s and 1980s. This was the last part of the Dutch Delta Works, which were built in response to the severe North Sea flood of 1953. The original plan was to close off the Eastern Scheldt, but by the late 1960s, both environmentalists and fishermen opposed its full closure. As an alternative, a storm surge barrier was designed that would normally be open and allow water to pass through, but that would close when the water level at the seaside exceeded a certain level.

The Eastern Scheldt storm surge barrier was the first hydraulic structure to which the probabilistic design approach was applied. Contrary to the design tradition of that time, the design process started with the construction of a fault tree covering as many failure mechanisms as possible, including failure of the system due to operation errors (Calle et al. 1985).

According to Dutch water law, the Eastern Scheldt storm surge barrier had to be designed for 1/4,000-year conditions. This criterion specifies that the barrier has to be designed for a surge level and wave conditions that are expected to occur once every 4,000 years. Initially, this criterion was interpreted in terms of a 1/4,000-year design high water level at the seaside of the barrier, i.e., a water level that is expected to be exceeded only once every 4,000 years, together with 1/4,000-year wave conditions. It was assumed that this design high water level at the seaside, in combination with a low water level at the landside of the barrier and extreme wave conditions, would determine the design load on the barrier. The result was a very unlikely combination of water level and wave conditions. It was therefore decided to use instead the combined 1/4,000-year hydraulic conditions, that is, the combination of high water level and wave conditions that is expected to occur once every 4,000 years. This led to a reduction of 40 % in the hydraulic load, as a result of which the distance between the pillars, an important design parameter, could be enlarged from 40 to 45 m. Similarly, some redundant elements in the design were removed because they did not significantly add to the overall safety (e.g., a backup valve for every opening in the barrier). However, the probabilistic approach did not only lead to the removal of redundant components of the barrier. On the basis of the fault tree, the weakest parts of the barrier were identified, and some elements were made stronger because that would significantly improve the overall safety (Vrijling 1990).
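The logic behind the revised load combination can be illustrated with a Monte Carlo sketch (our illustration, with made-up correlated load variables, not the actual Eastern Scheldt data): when water level and wave load are driven by a common storm but are not perfectly correlated, the event that both simultaneously reach their marginal 1/4,000-year values is much rarer than 1/4,000 per year, so requiring that combination overestimates the design load.

```python
import random

# Monte Carlo sketch with hypothetical correlated loads: a common "storm
# severity" drives both water level and wave load, plus independent noise.
random.seed(42)
n = 200_000
samples = []
for _ in range(n):
    storm = random.gauss(0.0, 1.0)                # common storm driver
    level = storm + 0.5 * random.gauss(0.0, 1.0)  # water level proxy
    wave = storm + 0.5 * random.gauss(0.0, 1.0)   # wave load proxy
    samples.append((level, wave))

def upper_quantile(values, exceedance):
    """Value exceeded with (approximately) the given probability."""
    s = sorted(values)
    return s[int((1.0 - exceedance) * len(s))]

p = 1.0 / 4000.0
level_q = upper_quantile([l for l, w in samples], p)
wave_q = upper_quantile([w for l, w in samples], p)
joint = sum(1 for l, w in samples if l >= level_q and w >= wave_q) / n
print(joint < p)  # the joint extreme is rarer than either marginal extreme
```

Under the sketch's assumptions, the fraction of samples in which both loads exceed their marginal 1/4,000 quantiles is well below 1/4,000, which mirrors why the joint-conditions criterion reduced the computed hydraulic load.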

Despite recurrent pleas to switch from a “deterministic” to a probabilistic approach to design in hydraulic engineering, the prevalent design methodology is still based on the safety factor approach (Doorn and Hansson 2011). However, these safety factors are increasingly based on probabilistic calculations (Tsimopoulou et al. 2011). As such, they can be considered hybrid or mixed approaches. Probabilistic risk analysis can also play an important role after the design phase. Since it allows for comparison of the strengths of several elements within a system, it can indicate which element to improve. Probabilistic approaches are therefore well suited for identifying critical elements and setting up maintenance schemes (Vesely et al. 1994; Wang et al. 1996; Kong and Frangopol 2005). For the safety assessment of hydraulic structures after construction, probabilistic approaches increasingly replace the safety factor approach (Jongejan and Maaskant 2013; Schweckendiek et al. 2013).

Critical Evaluation

In Subsections “Arguments for Using Probabilistic Risk Analysis in Design” and “Arguments for the Safety Engineering Approach,” we discussed the arguments in defense of design approaches using probabilistic risk analysis and design approaches that use principles from safety engineering, respectively. The strongest arguments in favor of design methods based on probabilistic risk analysis are the possibility of economic optimization and fitness for policy making (risk management). The strongest arguments for traditional safety engineering approaches refer to computational costs, simplicity, residual uncertainties, and security. Which approach is preferable when we want to design for safety? There is no general answer to that question; both approaches are of value, and it does not seem constructive to see them as competitors. In practice, neither of them can tell the full truth about risk and safety (Hansson 2009b). In order to see how we can combine the insights from both approaches, let us reconsider the objectives of the two approaches as explained in section “Current Approaches.”

There are two different interpretations of the failure probabilities calculated in a PRA. One of these treats the calculated probabilities as relative indices of probabilities of failure that can be compared against a target value or against corresponding values for alternative designs. This interpretation seems unproblematic. It should be realized, however, that it refers to a relative safety level: since not all elements are included, the calculated values do not correspond to frequencies of failure in the real world (Aven 2009). Instead, this interpretation provides “a language in which we express our state of knowledge or state of certainty” (Kaplan 1993). It can be used to compare alternative components within a system, to set priorities, or to evaluate the effects of safety measures. It is in such contexts of local optimization that probabilistic analysis has its greatest value (Lee et al. 1985).

The other interpretation treats the outcomes of PRA as objective values of the probability of failure. According to this view, these probabilities are more than relative indicators; they are (good estimates of) objective frequencies. In a world with no uncertainties but only knowable, quantifiable risks, this could indeed be a valid assumption. However, we do not live in such a world. In practice, failure probabilities of technological systems usually include experts’ estimates that are unavoidably subjective (Caruso et al. 1999). Often some phenomena are excluded from the analysis. Such uncertainties make comparisons between different systems unreliable and sometimes severely misleading. To compare the safety of a nuclear power plant with the safety of a flood defense system on the basis of PRAs of the two systems is an uncertain and arguably not even meaningful exercise since the uncertainties in these two technologies are different and difficult or perhaps even impossible to compare.

Let us return to the safety factor approach in safety engineering, which was said to be intended to compensate for five major categories of sources of failure (section “Safety Engineering”). Two of these, namely, higher loads and worse material properties than those foreseen, are targeted both by safety factors and by probabilistic risk analysis. Due to the higher precision of probabilistic approaches, quantitative analysis of these two sources of failure should, at least in many cases, preferably be based on probabilistic information.
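The probabilistic treatment of these two quantifiable sources of failure can be sketched with a standard reliability calculation (our illustration, with hypothetical values): load and resistance are modeled as independent normal random variables, so the safety margin R − L is also normal, and the failure probability follows from the reliability index β.

```python
import math

# Reliability-index sketch: with independent normal load L and resistance
# R, the margin R - L is normal, beta = (mu_R - mu_L) / sqrt(sd_R^2 +
# sd_L^2), and P(failure) = Phi(-beta). All numbers are hypothetical.
def failure_probability(mu_r, sd_r, mu_l, sd_l):
    beta = (mu_r - mu_l) / math.hypot(sd_r, sd_l)  # reliability index
    return 0.5 * math.erfc(beta / math.sqrt(2.0))  # standard normal Phi(-beta)

# Hypothetical member: resistance 500 kN (sd 50), load 300 kN (sd 40)
pf = failure_probability(mu_r=500.0, sd_r=50.0, mu_l=300.0, sd_l=40.0)
print(round(pf, 6))  # beta is about 3.12, so pf is roughly 9e-4
```

The same two uncertainties that a safety factor covers with a single deterministic margin are here represented explicitly by the spreads of the load and resistance distributions.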

The main advantage of the safety factor approach over probabilistic risk analysis concerns the other three sources of failure: imperfect theory of the failure mechanisms, possibly unknown failure mechanisms, and human error (e.g., in design). Probabilistic risk analysis is not capable of capturing these uncertainties. This is a major reason why probabilistic risk analysis should be seen as one of several tools for risk assessment and not as a sure source of final answers on risk assessment.

The more ignorant designers are of the uncertainties involved, the more they should rely on the traditional forms of safety engineering. Conversely, when uncertainty is reduced, the usefulness and reliability of probabilistic design methods is increased. There are currently no empirical standards regarding the appropriate design approach for different situations. It is desirable to carry out some action-guiding experiments to systematically evaluate the effect of the different approaches on the safety of a particular design.

Conclusions

Probabilistic risk analysis is sometimes seen as a competitor of traditional forms of safety engineering. This is too narrow a view of the matter. Neither of these methods can in practice tell the full truth about risk and safety. It is more constructive to see them as complementary. Probabilistic risk analysis is often an indispensable tool for priority setting and for evaluating the effects of safety measures. On the other hand, some of the uncertainties that safety engineering deals with successfully tend to be neglected in probabilistic calculations. Methodological pluralism, rather than a monopoly for one single methodology, is to be recommended. Currently there is a trend in several fields of engineering towards increased use of probabilistic risk analysis. This trend will strengthen safety engineering, provided that it leads to a broadening of the knowledge base and not to the exclusion of the wide range of dangers – from one’s own miscalculations to terrorist attacks – for which no meaningful probability estimates can be obtained.



Notes

  1. In this simplified example, it is assumed that in the case of properly functioning signals, the driver will also stop at the halt line. Hence, for a collision to occur, it is necessary both that the signals fail and that the driver is not able to brake in time.
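The AND-gate logic of this simplified example can be sketched as follows (the probability values are hypothetical, not from the chapter): with independent events, the probability of a collision is the product of the probability that the signals fail and the probability that the driver cannot brake in time.

```python
# Fault-tree AND-gate sketch: for independent events, the probability
# that all of them occur is the product of their probabilities. The
# numerical values below are hypothetical.
def and_gate(*probabilities):
    product = 1.0
    for p in probabilities:
        product *= p
    return product

p_signals_fail = 1e-4          # hypothetical signal failure probability
p_driver_cannot_brake = 0.5    # hypothetical
p_collision = and_gate(p_signals_fail, p_driver_cannot_brake)
print(p_collision)             # 5e-05
```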


References

  1. Abbas AM, Manohar CS (2002) Investigations into critical earthquake load models within deterministic and probabilistic frameworks. Earthquake Eng Struct Dyn 31(4):813–832
  2. Ale B (2009) Risk: an introduction. Routledge, London
  3. Aven T (2009) Perspectives on risk in a decision-making context – review and discussion. Saf Sci 47(6):798–806
  4. Blockley DI, Godfrey PS (2000) Doing it differently. Thomas Telford, London
  5. Calle EOF, Dillingh D, Meermans M, Vrouwenvelder AWCM, Vrijling JK, De Quelerij L, Wubs AJ (1985) Interim rapport TAW 10: Probabilistisch Ontwerpen van Waterkeringen. Technische Adviescommissie voor de Waterkeringen (TAW), Delft
  6. Caruso MA, Cheok MC, Cunningham MA, Holahan GM, King TL, Parry GW, Ramey-Smith AM, Rubin MP, Thadani AC (1999) An approach for using risk assessment in risk-informed decisions on plant-specific changes to the licensing basis. Reliab Eng Syst Saf 63(3):231–242
  7. Chapman PM, Fairbrother A, Brown D (1998) A critical evaluation of safety (uncertainty) factors for ecological risk assessment. Environ Toxicol Chem 17(1):99–108
  8. Clausen J, Hansson SO (2007) Eurocodes and REACH: differences and similarities. Risk Manage 9(1):19–35
  9. Clausen J, Hansson SO, Nilsson F (2006) Generalizing the safety factor approach. Reliab Eng Syst Saf 91(8):964–973
  10. National Research Council (1983) Risk assessment in the federal government: managing the process. National Academy Press, Washington, DC
  11. Davis M (2001) Three myths about codes of engineering ethics. IEEE Technol Soc 20(Fall):8–14
  12. Dhillon BS (1997) Failure mode and effects analysis: bibliography. Microelectr Reliab 32(5):719–731
  13. Ditlevsen O, Madsen HO (2007[1996]) Structural reliability methods (internet edition 2.3.7). Wiley, Chichester
  14. Doorn N, Hansson SO (2011) Should probabilistic design replace safety factors? Philos Technol 24(2):151–168
  15. Fairbrother A (2002) Risk assessment: lessons learned. Environ Toxicol Chem 21(11):2261–2263
  16. Golany B, Kaplan EH, Marmur A, Rothblum UG (2009) Nature plays with dice – terrorists do not: allocating resources to counter strategic versus probabilistic risks. Eur J Oper Res 192(1):198–208
  17. Graham J, Wiener J (1995) Risk versus risk. Harvard University Press, Cambridge, MA
  18. Guikema SD, Paté-Cornell ME (2002) Component choice for managing risk in engineered systems with generalized risk/cost functions. Reliab Eng Syst Saf 78(3):227–238
  19. Hansson SO (1989) Dimensions of risk. Risk Anal 9(1):107–112
  20. Hansson SO (1998) Setting the limit: occupational health standards and the limits of science. Oxford University Press, New York
  21. Hansson SO (2009a) From the casino to the jungle. Synthese 168(3):423–432
  22. Hansson SO (2009b) Risk and safety in technology. In: Meijers AWM (ed) Handbook of the philosophy of science. Philosophy of technology and engineering sciences, vol 9. Elsevier/North-Holland, Amsterdam, pp 1069–1102
  23. Hansson SO (2010) Promoting inherent safety. Process Saf Environ Prot 88(3):168–172
  24. JCSS (2001) Probabilistic model code. Part 1 – Basis of design. Joint Committee on Structural Safety. ISBN:978-3-909386-79-6
  25. Jongejan RB, Maaskant B (2013) Applications of VNK2: a fully probabilistic risk analysis for all major levee systems in The Netherlands. In: Klijn F, Schweckendiek T (eds) Comprehensive flood risk management: research for policy and practice. Taylor & Francis, London, pp 693–700
  26. Kanning W, Van Gelder PHAJM (2008) Partial safety factors to deal with uncertainties in slope stability of river dykes. In: De Rocquigny E, Devictor N, Tarantola S (eds) Uncertainty in industrial practice: a guide to quantitative uncertainty management. Wiley, London
  27. Kaplan S (1993) Formalism for handling phenomenological uncertainties: the concepts of probability, frequency, variability, and probability of frequency. Nucl Technol 102(1):137–142
  28. Keynes JM (1921) A treatise on probability. Macmillan, London
  29. Knight FH (1935[1921]) Risk, uncertainty and profit. Houghton Mifflin, Boston
  30. Knoll F (1976) Commentary on the basic philosophy and recent development of safety margins. Can J Civil Eng 3(3):409–416
  31. Kong JS, Frangopol DM (2005) Probabilistic optimization of aging structures considering maintenance and failure costs. J Struct Eng ASCE 131(4):600–616
  32. Lee WS, Grosh DL, Tillman FA, Lie CH (1985) Fault tree analysis, methods, and applications – a review. IEEE Trans Reliab 34(3):194–203
  33. Mak KK, Sicking DL, Zimmerman K (1998) Roadside safety analysis program – a cost-effectiveness analysis procedure. Gen Des Roadside Saf Features 1647:67–74
  34. Melchers RE (2002) Probabilistic risk assessment for structures. Proc Inst Civil Eng-Struct Build 152(4):351–359
  35. Melchers RE (2007) Structural reliability theory in the context of structural safety. Civil Eng Environ Syst 24(1):55–69
  36. Michal R (2000) The nuclear news interview. Apostolakis: on PRA. Nucl News 43(3):27–31
  37. Miller CO (1988) System safety. In: Wiener EL, Nagel DC (eds) Human factors in aviation (cognition and perception). Academic, San Diego, pp 53–80
  38. Möller N, Hansson SO, Peterson M (2006) Safety is more than the antonym of risk. J Appl Philos 23(4):419–432
  39. Moses F (1997) Problems and prospects of reliability-based optimization. Eng Struct 19(4):293–301
  40. Overton T, King GM (2006) Inherently safer technology: an evolutionary approach. Process Saf Progr 25(2):116–119
  41. Parnell GS, Borio LL, Brown GG, Banks D, Wilson AG (2008) Scientists urge DHS to improve bioterrorism risk assessment. Biosecur Bioterror 6(4):353–356
  42. Paté-Cornell ME (1996) Uncertainties in risk analysis: six levels of treatment. Reliab Eng Syst Saf 54(2–3):95–111
  43. Rackwitz R (2004) Optimal and acceptable technical facilities involving risks. Risk Anal 24(3):675–695
  44. Randall FA (1976) The safety factor of structures in history. Prof Saf 12–28
  45. Rasmussen NC (1975) Reactor safety study: an assessment of accident risks in U.S. commercial nuclear power plants (WASH-1400, NUREG 75/014). U.S. Nuclear Regulatory Commission
  46. Santillo D, Stringer RL, Johnston PA, Tickner J (1998) The precautionary principle: protecting against failures of scientific method and risk assessment. Mar Pollut Bull 36(12):939–950
  47. Sarewitz D, Pielke R, Keykhah M (2003) Vulnerability and risk: some thoughts from a political and policy perspective. Risk Anal 23(4):805–810
  48. Schweckendiek T, Calle EOF, Vrouwenvelder AWCM (2013) Updating levee reliability with performance observations. In: Klijn F, Schweckendiek T (eds) Comprehensive flood risk management: research for policy and practice. Taylor & Francis, London, pp 359–368
  49. Takewaki I (2005) A comprehensive review of seismic critical excitation methods for robust design. Adv Struct Eng 8(4):349–363
  50. TAW (2001) Technisch Rapport Waterkerende grondconstructies: Geotechnische aspecten van dijken, dammen en boezemkaden. Technische Adviescommissie voor de Waterkeringen (TAW)/Expertise Netwerk Water (ENW), Delft
  51. Tench WH (1985) Safety is no accident. Collins/Sheridan House, London
  52. Terzaghi K, Peck RB, Mesri G (1996) Soil mechanics in engineering practice, 3rd edn. Wiley, London
  53. Tsimopoulou V, Kanning W, Verhagen HJ, Vrijling JK (2011) Rationalization of safety factors for breakwater design in hurricane-prone areas. In: Coastal structures 2011: proceedings of the 6th international conference on coastal structures, Yokohama. World Scientific
  54. Van de Poel IR, Royakkers LMM (2011) Ethics, technology, and engineering: an introduction. Wiley-Blackwell, West Sussex
  55. Vesely WE, Belhadj M, Rezos JT (1994) PRA importance measures for maintenance prioritization applications. Reliab Eng Syst Saf 43(3):307–318
  56. Vrijling JK (1990) Kansen in de Waterbouw (inaugural address). Technical University Delft, Delft
  57. Vrijling JK, van Hengel W, Houben RJ (1998) Acceptable risk as a basis for design. Reliab Eng Syst Saf 59(1):141–150
  58. Vrouwenvelder A (2002) Developments towards full probabilistic design codes. Struct Saf 24(2–4):417–432
  59. Wang J, Yang JB, Sen P, Ruxton T (1996) Safety based design and maintenance optimisation of large marine engineering systems. Appl Ocean Res 18(1):13–27
  60. Zegeer CV, Carter DL, Hunter WW, Stewart JR, Huang H, Do A, Sandt L (2006) Index for assessing pedestrian safety at intersections. Transportation Research Record No. 1982: pedestrians and bicycles. Transportation Research Board, National Academy of Sciences, Washington, DC, pp 76–83

Copyright information

© Springer Science+Business Media Dordrecht 2013

Authors and Affiliations

  1. Department of Technology, Policy and Management, TU Delft, Delft, The Netherlands
  2. Division of Philosophy, Royal Institute of Technology, Stockholm, Sweden
