Much safety work is based on safety principles, which are usually simple rules or mottos such as “inherent safety,” “fail-safe” and “best available technology.” Many such principles have been proposed, and there is a considerable overlap between them (Möller et al. 2018). In this chapter, we will see Vision Zero as one of the safety principles and relate it to other such principles. We will begin with its closest relatives and then consider some of its more distant kin. The safety principles discussed in this chapter are summarized in Fig. 1.

Fig. 1

A typology of the major safety principles discussed in this chapter

The “Zero Family”

The idea that there should be nothing at all of something undesirable must have been close at hand to human thinking since long before recorded history. Concepts of “naught” and “none” are much older than the mathematical concept of zero. For instance, at least since Alcidamas (fourth century BCE), abolitionists have claimed that no single human being should be a slave.

In modern discussions on safety, strivings to get completely rid of something undesirable have emerged in many contexts, probably often independently. The goal of “zero” is therefore an oftentimes reinvented wheel. As Gerard Zwetsloot and his co-authors have pointed out, we have a “family of zero visions” (Zwetsloot et al. 2013, p. 46). Its members are known under a great variety of names, some of which are recorded in Fig. 2. Some of them are “visions” in the sense of being difficult or perhaps impossible to fully achieve. Others are clearly possible to achieve and might more appositely be called “zero targets.” Both visions and targets can serve useful social purposes. Visions can inspire us to undertake long-term projects whose end points have to be further specified as we go along. Targets, on the other hand, are essential parts of our planning for what to do next (Edvardsson and Hansson 2005; Chap. 1, “Vision Zero and Other Road Safety Targets”).

Fig. 2

Some of the major members of the “zero family” of safety principles

Zero Defects

The safety-related zero concepts have been influenced by zero visions in the area of industrial quality. In 1965, the US Department of Defense issued a handbook to be used by defence contractors for “establishing and implementing Zero Defects” as a “motivational approach to the elimination of human error.” According to the handbook, the programme had been originated in 1962 by a major (unnamed) defence contractor, which had “established goals for each department to reduce to zero those defects attributable to human error.” The programme had been successful, and its ideas had already been “adopted by numerous industrial and Department of Defense activities” (Anon 1965, p. 3). Each individual worker was asked to “accept voluntarily a challenge to do an errorless job,” a challenge that would be accepted by those feeling “pride in workmanship” (ibid.). Targets and scorekeeping were essential components of the concept.

The unnamed defence contractor was probably Martin Marietta, a Florida-based company that built missiles for the armed forces. Their employee James F. Halpin has been identified as the inventor of the concept (Peierls 1967). He published a book (Halpin 1966) that made the Zero Defects concept known in wider circles. Halpin was critical of what he called a “double standard” in our attitudes as consumers and as workers. Most consumers expect the products they buy to be free of defects, but the same persons, in their role as workers, consider a certain amount of errors in their work outputs to be acceptable. Zero Defects would remove that contradiction and at the same time improve the quality of working life by making workers more proud of their work (Zwetsloot et al. 2013, pp. 45–46).

Another important promoter of Zero Defects was Philip B. Crosby, who worked in the same missile company as Halpin in the early 1960s. In his book Quality is Free (Crosby 1979), he emphasized the hidden costs of low quality and argued that a Zero Defects strategy is beneficial for business. The Zero Defects programme was much in vogue in American industry in the late 1960s and early 1970s, but after that, it receded in importance. Largely due to Crosby’s influence, it had a renaissance in the automobile industry in the 1990s (Lovrencic and Gomišcek 2014, p. 4). Claims have been made that the “zero” concepts of safety engineering have developed out of Zero Defects (Butler 2017, p. 25), but no evidence seems to have been presented that confirms this genealogy of the concepts.

Workplace Safety

A large number of zero concepts have been used to express the goal that no one should be injured or harmed in their workplace. Common terms are “zero harm,” “zero injuries,” “zero accidents,” “zero incidents” and “incident and injury free” (abbreviated IIF) (Zou 2010; Lovrencic and Gomišcek 2014). If the phrases are interpreted literally, there is a large difference, for instance, between “zero accidents” and “zero incidents,” but in actual practice, the choice among these phrases does not say much about how safety work is conducted in the workplace. The variations in terminology seem to mark different national and industry-based traditions rather than different approaches to safety.

In the United States, a two-year safety programme named “Zero in on Safety” was launched in 1971 in all federal agencies. Its aim was to reduce the number of injuries and other losses in federal workplaces (Anon. 1971). However, the expression “zero in on” seems to have been used in the common sense “concentrate attention on,” rather than referring to a zero target.

The American construction industry has been prominent in promoting zero approaches to workplace safety. In 1989, the Construction Industry Institute (CII) started a project called “Zero Accidents Techniques,” which led to a “zero injury” target that was launched in 1993. It was based on detailed analyses of accidents on construction sites and means to prevent them. The programme had an emphasis on worker compliance. For instance, it included a drug-testing policy according to which a worker with a positive drug test was barred from the workplace for the following 60 days (Hinze and Wilson 2000; Lovrencic and Gomišcek 2014). In an interview, the CEO of Bechtel, one of the largest construction companies in the United States, explained the rationale behind the Zero Accidents objective as follows:

I sincerely believe all accidents and all injuries are preventable. Accidents don’t just happen. They occur primarily because of someone’s unsafe behaviour. Correcting that behaviour is the only way we’ll get to Zero Accidents. Zero Accidents means exactly that – zero. When it comes to preventing accidents, nothing less than perfection will do. (Zou 2010, p. 15)

However, this company also has a more empowering approach to employees’ contributions to safety. It has authorized all employees to stop work which they consider to be unsafe; “If it’s not safe, don’t do it” (Zou 2010, p. 14). Emmit J. Nelson, who was chair of the original CII Zero Accidents Task Force, has explained the justification of zero targets with the following hypothetical example:

Last year, a small 100-employee company experienced five serious lost-workday cases. Accident costs were high, but the misery that followed the injuries was even more devastating. The owner vowed to take action.

After consulting with company leaders, the owner set a goal of only two lost-workday cases for the upcoming year. At the next safety meeting, the owner voiced his concern about the five lost-workday cases and announced the new goal. Everyone was enthusiastic and seemed to buy into the plan.

Question: On the first workday in January, how many of the 100 employees think (as far as the goal is concerned) that it is okay for them to have an injury that results in a lost-work-day case?

Answer: All 100. Each employee thinks that, provided the goal of two is not reached, it is acceptable for a serious injury to occur.

What has the new goal inadvertently accomplished? It has said it is acceptable for injuries to occur (provided no more than two occur). This certainly is not the ‘Let’s stop injury’ message the owner intended to deliver….

Zero is the correct approach. Such commitment sends an unmistakable message to all employees that injury is unacceptable. (Nelson 1996)

In 2019, the National Safety Council (NSC) launched a campaign called Work to Zero 2050. Its purpose is to achieve zero fatalities in workplaces by the year 2050. The NSC pointed out that when the campaign was introduced, the number of workplace fatalities was 5000 per year, as compared to 50,000 per year in the early 1900s, although the number of people working had increased from 30 million to 160 million in the same period. This was largely due to technological innovation, and new technology will be an essential component of the new campaign. “Its purpose is to eliminate death on the job by the year 2050. Period. No hedging, no qualifiers” (McElhattan 2019).

In Japan, a government-sponsored Zero-Accident Total Participation Campaign (Zero-Accident Campaign) was introduced in 1973 by the Japan Industrial Safety and Health Association (JISHA). Reportedly, the campaign was inspired by the American “Zero in on Safety” campaign. It was also influenced by the quality improvement movement (JICOSJ n.d.). (Dekker (2017, p. 125) claims that the Japanese “Zero-Accident Total Participation Campaign” took place in the 1960s, but this is not corroborated by the Japanese sources I have had access to.) The target of the campaign was set high:

‘Zero accidents’ means to achieve an accident free workplace (not only no fatal accidents or accidents causing absent from work, but also no accidents, including industrial accidents, occupational illness, and traffic labor accidents) by detecting, understanding, and solving all hazards (problems) in everybody’s daily life as well as potential hazards existing in workplaces and work. (JICOSJ n.d.)

In the United Kingdom, the first recorded use of a zero goal for workplace safety seems to have been in 1988. In that year, the British Steel plant in Teesside in North East England launched a programme of “total quality performance.” The training material used in meetings with the whole workforce included a short text on accident prevention with the two headings “Total Quality is no Accident” and “Zero Defect = Zero Accidents” (Procter et al. 1990). Thus, safety was included as part of a campaign whose primary goal was product quality. The campaign was based on the notion of “continuous improvement” (see subsection “Continuous Improvement”). The site manager had visited an American steel plant that used the slogan “Total Quality is no Accident,” which was the inspiration for including safety in the quality campaign. The safety team in Teesside adopted a “zero-accident philosophy” for their work. A considerable reduction in accidents was reportedly achieved (Ball and Procter 1994).

In later years, the British construction industry has taken a leading role in applying zero approaches to safety. The best-known example is the construction of the venues of the 2012 Summer Olympics and Paralympics in London. The Olympic Delivery Authority adopted a “zero tolerance” approach to unsafe and unhealthy working conditions on their building sites. This was a five-year project involving 12,000 workers who worked a total of 80 million hours. The accident rate was unusually low for a building project, and there was no fatality. This seems to have been the first fatality-free Olympic construction project in modern history. The Olympic Delivery Authority received a special award from the Royal Society for the Prevention of Accidents for this achievement (Wright 2012).

Most major building companies in the United Kingdom have adopted a zero aim for their safety work. A variety of two-word brands are in vogue, such as “zero harm,” “mission zero,” “target zero,” “zero target” and “beyond zero” (Sherratt 2014). The last-mentioned catchphrase may be somewhat surprising, given the considerable difficulties that the building industry has had in eliminating accidents leading to fatalities and severe injuries. Fred Sherratt, a researcher in construction management, noted that the Beyond Zero programme “boldly announces ‘Zero incidents? We can do better than that!’ on its webpage.” This, she says

…could suggest achieving zero incidents is an easy target. This reading was identifiable elsewhere in the text, ‘aiming for zero accidents was a soft target and was not the final word in what could be achieved’. Here, Zero is arguably belittled beyond itself: positioned as ‘soft’, something so easily attainable that it should not be considered a target, just something to be looked beyond. When considered in the context of one of the highest risk industries in the UK… this appears to be rather empty rhetoric. (Sherratt 2014, p. 742)

In New Zealand, the New Zealand Aluminium Smelters Limited (NZAS) introduced zero target thinking in 1990 and adopted the slogan “Our Goal is Zero.” This resulted in a considerably lower accident rate than that of most other aluminium smelters in the world (Young 2014).

In Australia, most zero-aiming safety activities are performed under the designation of “zero harm,” which was introduced in an influential agreement in 2002 between the Australian Chamber of Commerce and Industry (ACCI) and the Australian Council of Trade Unions (ACTU). The term “zero harm” has become so popular that it has to a large extent replaced references to safety. Workplace health and safety personnel are recruited to positions as “zero harm manager,” “zero harm reporting coordinator” and “zero harm advisor.” Reporting of accidents and incidents is done on a “zero harm reporting app,” and safety culture is referred to as “zero harm culture” (Butler 2017, p. 28). In a PhD thesis on the Zero Harm movement in Australia, Keith Butler noted that “Zero Harm has been popularised as a mantra,” but he recognized that “industry is also actively implementing Zero Harm as a goal or vision and even as a numerical target based on the concept that if a single day without an injury can be achieved, then 365 days without an injury is also achievable” (ibid., p. 2).

In Finland, the Finnish Zero Accident Forum was established in 2003 as a voluntary network to help workplaces promote health and safety. It has since been renamed the Finnish Vision Zero Forum. Its membership consists of 440 workplaces, whose 450,000 employees comprise 16% of the country’s workforce ( Accessed August 19, 2020). Members of the Forum have improved their safety performance, as measured in terms of lost-time accidents, whereas non-members were on average less successful in this respect (Zwetsloot et al. 2017b, p. 22).

In Sweden, the government introduced a Vision Zero strategy for workplaces in 2016. Its focus is on fatal accidents, and the central formulation is as follows:

No one should have to die as a result of their job. Concrete measures are necessary in order to prevent work-related accidents leading to injury or death. (Quoted in Kristianssen et al. 2018, p. 265)

The strategy requires measures to prevent work-related injuries and to improve psychosocial work environments.

Sweden has also had a Vision Zero for fire safety since 2010, stating “No one shall die or be seriously injured due to fire.” Unlike the goal for workplace safety, the fire safety goal is combined with interim goals (Kristianssen et al. 2018, p. 263).

Traffic Safety

This subsection is included as a brief introduction to a topic that is treated in much more detail in other chapters of this handbook.

Vision Zero as a goal for traffic safety was first adopted by the Swedish Parliament in 1997. The Bill stated that “the long-term goal of traffic safety is that nobody shall be killed or seriously injured as a consequence of traffic accidents” and that “the design and function of the transport system shall be adapted accordingly” (Government Bill 1996/97:137). Vision Zero has its focus on severe accidents, i.e. accidents leading to fatalities or serious injuries. Its basic message is that as long as serious accidents still occur, there is a need to improve traffic safety. This has been described as a radically different approach from previously dominant road safety policies, in which a certain death toll in traffic was more or less openly accepted as a price for the advantages of mobility. However, it is also stated in the Bill that Vision Zero is not intended to eliminate every traffic accident that gives rise to property damage or light personal injuries (Belin et al. 2012, p. 171). This is a matter of priorities. As long as there are a large number of deaths and serious injuries in road traffic, they have to be the prime target in traffic safety work.

Vision Zero has had significant impact on the traffic safety work of the Swedish Road Administration. The following four changes on Swedish roads have resulted from systematic work to implement Vision Zero:

  • More roundabouts: Roundabouts have become more common in intersections, in particular within population centers. Roundabouts significantly reduce vehicle velocities. If collisions take place, their consequences will be less severe than in regular intersections, due both to reduced speed and different angles of collision.

  • Roads with midrails: The so-called 2+1 road is a three-lane road with two lanes in one direction and one in the other. A median barrier separates traffic going in opposite directions. The direction of the middle lane alternates so that overtaking is always possible within a few kilometres. In this way, head-on collisions are prevented, which has led to a significant reduction in fatalities and serious injuries. The 2+1 road was introduced in 1998 on a route that had previously been the scene of many fatal accidents. There was much initial scepticism towards the new road design, but it has proved effective against accidents, and this road design is now widely used in Sweden. It reduces the number of fatalities by around 80% (Johansson 2009, p. 826).

  • Lower speed limits within population centers: In order to implement Vision Zero, local municipalities have been authorized to lower speed limits to 30 km/h. The purpose of this is to reduce fatalities among unprotected road users.

  • Safer roadsides: Efforts have been made to mitigate accidents where vehicles drive off the road. Rails have been set up, and roadsides have been cleared of dangerous objects such as boulders and trees (Vägverket 2004).

The measures taken to implement Vision Zero have led to a considerable reduction in road accidents in Sweden. The number of road traffic fatalities per year was reduced from 541 to 221 in the period from 1997 to 2019 ( Last accessed August 11, 2020).

In September 2000, the Norwegian Parliament adopted a vision of zero killed or seriously injured. Denmark adopted a similar vision with the slogan “every accident is one too many” (Færdselssikkerhedskommissionen 2000). Other countries in Europe have adopted their own variants of Vision Zero, and so have Australia and several states and major cities in the United States (Mendoza et al. 2017). Several car manufacturers have also taken up Vision Zero as a goal for technological developments, aiming at “zero crash cars” (Zwetsloot et al. 2017c, p. 96).

Crime Prevention

The application of zero goals to the prevention of criminal and deviant behavior has mainly taken place in the United States, and it has almost invariably been associated with the phrase “zero tolerance.” The application of this phrase to anti-crime policies dates back to 1983, when forty submarine sailors were reassigned by the US Navy for having used drugs. The term was picked up by a district attorney in San Diego, who developed a programme called “zero tolerance” in 1986. The main purpose of that programme was to prosecute all drug offenders however minor their offence was. Sea vessels carrying any amount of drugs were to be impounded (Skiba and Peterson 1999, p. 373; Skiba 2014, p. 28; Stahl 2016).

The programme received the attention of members of the US government, including the Customs Commissioner William von Raab, who decided to implement a similar approach on a national level. A government committee, called the White House Conference for a Drug-Free America (WHCDFA), issued a report in June 1988, concluding: “The U.S. national policy must be zero tolerance for illegal drugs” (Newburn and Jones 2007, pp. 223–224). Initially, the programme resonated well with sentiments in large parts of the general public. However, its implementation in the day-to-day activities of customs officials was far from frictionless. In 1990, two research vessels were seized due to small amounts of marijuana that had been found on-board. This was generally recognized as disproportionate, and after that, the US Customs Service discreetly discontinued its zero tolerance programme (Skiba and Peterson 1999, p. 373). But in spite of the practical problems, the political appeal of “zero tolerance” was unscathed, and the concept had already started to proliferate in other social areas.

Two campaigns focusing on violence against women, one in Canada and one in Scotland, took up the “zero tolerance” motto. In 1993, the Canadian Panel on Violence Against Women, which had been appointed by the prime minister two years earlier, presented an action plan declaring zero tolerance. By this was meant that “no level of violence is acceptable, and women’s safety and equality are priorities.” Inspired by the Canadian initiative, the Women’s Committee of the Edinburgh City Council initiated a Zero Tolerance Campaign (ZTC) in late 1992. The campaign launched posters and cinema adverts with a prominently featured Z symbol, emphasizing that male violence against women and children should never be tolerated. A Zero Tolerance Charitable Trust was established in 1995 (Newburn and Jones 2007, pp. 224–225). It is still active, waging campaigns throughout Scotland with the goal “a world free of men’s violence against women” ( Last accessed August 19, 2020).

In 1990, when the US Customs Service abandoned their zero tolerance policies, school districts across the United States were busy introducing their own versions of zero tolerance. Already in the previous year, school districts in California, New York and Kentucky had adopted zero tolerance procedures that targeted drugs and weapons on school premises. Dissemination was rapid, and in 1993, such procedures were implemented in schools throughout the country. The scope of the policies was significantly extended after the Columbine school shooting in April 1999 in Littleton, Colorado. Throughout the country, school security was strengthened with measures such as metal detectors, increased surveillance and greater presence of security personnel. The lists of behaviors punished with school suspensions and exclusions became much longer. In many schools, misdemeanours such as swearing, truancy and dress code violations would lead to suspension (Skiba and Peterson 1999; Stahl 2016).

Consequently, the number of suspended and expelled students increased, in some cases dramatically. Media started to report on suspensions that appeared to be inordinately harsh. One twelve-year-old student was suspended for violating her school’s drug policy by sharing her inhaler with a student who had an asthma attack on a bus (Skiba and Peterson 1999, p. 375). A ten-year-old girl found a small knife in her lunchbox, where her mother had placed it for cutting an apple. She realized that it was a forbidden object and immediately handed it over to her teacher, but she was nevertheless suspended for bringing a weapon to school (APA Task Force 2008, p. 852). A five-year-old bringing a plastic toy axe to school was suspended for the same reason. Worst of all, an eleven-year-old boy died because the school’s drug regulations forbade him to bring his inhaler to school (Martinez 2009, p. 155).

Research does not confirm the assumption that suspending students from school improves their law abidance and keeps them away from crime. To the contrary, numerous studies have shown school suspensions to be associated with higher risks of school dropout, failure to graduate and criminal activity (Martinez 2009, p. 155; APA Task Force 2008; Stahl 2016). In addition, zero tolerance policies in schools have turned out to be highly discriminatory. African American students have been suspended 3–4 times more frequently than other students (Hoffman 2014, p. 71; Lacoe and Steinberg 2018, p. 209). The widely held assumption that Black students earn their higher rate of school suspensions by their own behavior is not borne out by research. Instead, research shows that African American students are disciplined more than other students for less serious misbehavior (APA Task Force 2008; Skiba 2014, p. 30). However, in recent years, many states have reduced the punitive elements of their zero tolerance programmes and replaced suspensions by policies and interventions that keep the students in the classrooms (Lacoe and Steinberg 2018, pp. 207–208).

In 1991, zero tolerance policies were adopted by the federal Department of Housing and Urban Development. A very low bar was set for evicting a tenant from public housing programmes. Tenants could even lose their apartment without doing anything reproachable themselves; it was sufficient to have visitors engaging in criminal activity in or near the apartment. For instance, an elderly couple was thrown out of their home because their grandson had smoked marijuana in the parking lot. The immediate consequence of zero tolerance was described as follows by a researcher:

Creating homelessness is the [sic] one of the main outcomes of “zero-tolerance” policies. People are evicted from their housing units under these policies and they are left without any other place to live. That is, in fact, the main purpose of these policies. And since people are most often in government-supported housing programs because they do not have other options, eviction typically results in making the person homeless. (Marston 2016)

Often, those evicted had to move to another neighborhood, with negative effects on their social networks. Frequent moves are particularly problematic for families with children, who run increased risks of lower school achievement, school dropout and substance abuse (Marston 2016). In housing, just as in schools, zero tolerance policies have largely been counterproductive, fuelling rather than reducing social exclusion and criminality.

The best-known application of zero tolerance policies took place in a number of police forces, most notably the New York Police Department. Zero tolerance policing was inspired not only by other zero tolerance programmes but also by the so-called “broken windows” approach to police work, which was introduced in the early 1980s by James Q. Wilson and George L. Kelling. They argued that “if a window in a building is broken and is left unrepaired, all the rest of the windows will soon be broken.” In the same way, they said, minor disturbances of order in a neighborhood could, if not curbed in time, lead to an uncontrollable escalation of criminality. In such a process, “[t]he unchecked panhandler is, in effect, the first broken window.” To prevent such negative developments, they proposed that police departments should cease assigning resources “on the basis of crime rates (meaning that marginally threatened areas are often stripped so that police can investigate crimes in areas where the situation is hopeless).” Instead, priority should be given to “neighborhoods at the tipping point – where the public order is deteriorating but not unreclaimable” (Wilson and Kelling 1982).

These ideas became a central part of the “zero tolerance” policies that were introduced under William Bratton, who was appointed commissioner of the NYPD in 1994. The programme had a strong focus on aggressive police crackdowns on various forms of minor misconduct in the public space, such as drunkenness, urination, squeegeeing, fare dodging and begging. A similar programme was run by the London Metropolitan Police in the King’s Cross area in December 1996. In London, police raids against minor public order offences led to the removal of beggars and inebriated and homeless people from public areas (Innes 1999; Newburn and Jones 2007).

Homicides were substantially reduced in New York City in the period 1991–1997. This decrease has often been attributed to the zero tolerance and broken windows policies. However, in this period, many other changes took place that could have influenced homicide statistics. Criminological research does not confirm the usual “success story” for zero tolerance (Bowling 1999). An interesting comparison can be made with San Diego, where a more community-oriented policing programme was carried out in about the same period. The two cities experienced similar reductions in severe crime. However, whereas there was a dramatic increase in legal actions taken against the NYPD for police misconduct, no such effect occurred in San Diego (Greene 1999). It should also be observed that resource-demanding measures against petty crimes will necessarily divert resources that might have been directed at more serious crimes. A major assumption behind zero tolerance programmes appears to be that harsh and legalistic action against juveniles committing minor offences will deter them from a criminal career. This is not substantiated by the criminological evidence. To the contrary, harsh and legalistic treatment of young offenders is associated with a larger number of rearrests and a higher future participation in crime (Klein 1986; Innes 1999; Petrosino et al. 2014).

Against this background, it should be no surprise that the use of zero tolerance concepts has decreased considerably in police work (Wein 2013, p. 4). As an example of this, the NYPD has retreated from its zero tolerance strategy for policing (Anon. 2017).

Preventive Medicine

The prevention of diseases is one of the areas where zero targets naturally spring to mind. Why should the prevention of a disease have a less ambitious goal than its complete eradication? Ideas about eradicating infectious diseases are much older than the zero concepts discussed in the previous subsections. In later years, disease eradication has also been promoted for various iatrogenic diseases. Furthermore, a zero vision much inspired by the Vision Zero of traffic safety has been introduced in suicide prevention. This subsection is devoted to these three areas of zero-aiming medical goal setting.

Apparently, the first proposal to eradicate a disease can be found in a book published in 1793 by the English physician John Haygarth (1740–1827). He proposed that smallpox could be exterminated through a combination of obligatory variolation and strict measures to prevent contagion (Haygarth 1793). Variolation consisted in infecting a person with scabs or fluid from the skin bumps of a person with smallpox. This usually resulted in a less severe disease than after natural contagion. Variolation was far from risk-free, but the vast majority survived, and they acquired immunity against the disease. Three years after Haygarth published his book, Edward Jenner (1749–1823) made his first experiment with vaccination. He infected human subjects with pus from cowpox blisters (containing what we now know to be live viruses). Cowpox is a much milder disease than smallpox, but it gives rise to immunity also against smallpox. In 1801, Jenner published a short pamphlet on vaccination in which he made the bold prediction that it was “too manifest to admit of controversy, that the annihilation of the Small Pox, the most dreadful scourge of the human species, must be the final result of this practice” (Jenner 1801, p. 8; Fenner 1993).

In modern terminology, the word eradication is used for a “permanent reduction to zero” of the worldwide incidence of an infectious disease, such that no further interventions are needed to prevent new cases. Thus, eradication is a global concept. For regional or national absence of a disease, the word elimination is used instead (Hinman 2018; Kretsinger et al. 2017).

Smallpox was indeed possible to eradicate. It satisfies a crucial condition for this, namely, that it has no animal reservoir. If there is a species of wild animal that serves as an alternative host for a human pathogen, then that pathogen can always survive in the wild, whatever measures we take to prevent its spread among humans. But the smallpox virus can only survive in humans. Therefore, if its dissemination in the human population could be stopped through vaccination and other efficient measures, then the virus would die out. Attempts to engage the WHO in a programme to eradicate smallpox were made in 1953, but the idea was dismissed as unrealistic. It was only in 1958 that a decision was made to introduce such a programme. An internationally funded and coordinated programme was not in place until 1967. In that year, the yearly death toll of smallpox was about two million people, and the disease was endemic (regularly present) in 32 countries. The programme had three major components. The first was mass vaccination to ensure that at least 80% of the population was vaccinated in all countries where the disease was endemic. In this way, the disease would be reduced to a level where all outbreaks could be effectively contained. The second component was an efficient reporting and response organization in all countries where the disease occurred. Places with outbreaks were visited by a team that searched for additional cases and vaccinated everyone who could have been infected. The third component was an international exchange of reports that kept all participants in the programme informed of developments throughout the world (Fenner 1993; Henderson 2011).

The programme was successful. The last case of smallpox occurred in 1978, and in 1980, the world was officially declared free of smallpox. This was 179 years after Jenner’s prediction that smallpox would eventually be “annihilated” by vaccination. The eradication of smallpox put an end to an immense amount of suffering that had haunted humanity since ancient times. In the twentieth century alone, at least 300 million people died from smallpox (Henderson 2011, p. D8).

In 1988, a new disease eradication programme was started, the Global Polio Eradication Initiative (GPEI). In its more than three decades of operation, it has radically reduced the number of polio cases. In 1988, there were around 350,000 cases, distributed over 125 countries. In 2018, there were 33 cases, all of which occurred in Pakistan and Afghanistan (Polio Global Eradication Initiative 2019). The campaign is now described as an endgame, but it operates under grave difficulties caused by militant anti-vaccination propaganda and violent groups targeting vaccination workers (Kaufmann and Feldbaum 2009). In the years 2012–2015, 68 government employees administering polio vaccine were killed in Pakistan (Kakalia and Karrar 2016). In 2019, the number of polio cases increased again ( Last downloaded August 19, 2020).

In spite of these remaining difficulties, discussions are ongoing on how the extensive infrastructure and capabilities created by the Global Polio Eradication Initiative can be used after polio has finally been defeated. The most common answer is that these resources should be retained and utilized in efforts to eradicate measles and rubella (Kretsinger et al. 2017; Cochi 2017). Measles is a major cause of child fatalities. In 2018, more than 140,000 measles deaths were reported globally, most of them in children ( Downloaded Dec 7, 2019). In industrialized countries, around 3 of 1000 children who catch measles will die from the disease, but in countries with widespread malnutrition (in particular, vitamin A deficiency) and insufficient healthcare resources, the death toll can be in the range between 100 and 300 per 1000 children with the disease (Perry and Halsey 2004). Rubella is transferred from the pregnant woman to the foetus, and it is a major cause of congenital diseases. About 100,000 children are born each year with rubella-induced inborn diseases, such as deafness, severe heart diseases, glaucoma and diabetes (Lambert et al. 2015; Banatvala and Brown 2004). Both measles and rubella are clearly possible to eradicate. Neither of them has an animal reservoir, both are preventable with two doses of vaccine and both have easily detectable clinical symptoms. Major efforts are ongoing to eliminate these diseases in most regions of the world, but progress has been slow, largely due to disinformation spread by anti-vaccination propagandists (Benecke and DeYoung 2019).

In 1986, the Carter Center in cooperation with the WHO started a campaign to eradicate the Guinea-worm disease (GWD, also called dracunculiasis). This is a painful but seldom deadly disease affecting people in Africa and Asia. The infection is spread by drinking water containing water fleas that are infected by guinea worm larvae. The disease can be prevented with relatively simple measures such as providing safe water, blocking the use of infected water sources and boiling or filtering water before drinking (Tayeh et al. 2017). The eradication programme has succeeded in drastically reducing the number of cases. In 1986, 3.5 million cases were recorded worldwide, whereas the number of cases was only about 30 in the years 2017 and 2018 ( Last accessed August 19, 2020). This shows that the disease can be contained on a very low level. However, the discovery that the disease has animal reservoirs in both dogs and frogs has dampened hopes of its eradication (Callaway 2016; Eberhard et al. 2016).

In 2011, the Joint United Nations Programme on HIV and AIDS (UNAIDS) adopted a vision of three zeros: zero new HIV infections, zero discrimination and zero AIDS-related deaths. The World AIDS Day 2011 had the motto “getting to zero” (Garg and Singh 2013). Currently, this seems to be extremely difficult to achieve ( Last accessed August 19, 2020). There is still no vaccine suitable for mass vaccination. A study of the potential for getting “close to zero” concluded that a radical reduction in the number of new cases would nevertheless be possible through “the global implementation of a bundle of prevention strategies that are known to be efficient, including anti-retroviral therapy to all who need it (which currently does not happen due to insufficient healthcare resources) and condom promotion and distribution (which is currently prevented by ruthless religious hypocrisy)” (Stover et al. 2014). As long as these hurdles remain, it does not seem possible to defeat this disease.

One might expect the eradication of severe diseases to be an unusually uncontroversial undertaking, but that has not been the case. Anti-vaccination propagandists cause considerable problems for vaccination campaigns worldwide. Their activities have delayed the eradication of polio and led to outbreaks of measles in countries that had long been spared from that disease (Kakalia and Karrar 2016; Hussain et al. 2018). Recently, a group of biologists have questioned the eradication of pathogens and hosts transmitting them to humans, appealing to the ethical standpoint that “each species may have a right to exist, independent of its value to human being [sic]” (Hochkirch et al. 2018, p. 2). Their main example was the campaign to eradicate trypanosomiasis, a deadly disease caused by a parasitic protozoan spread through the bites of tsetse flies. They wanted to “stimulate discussions on the value of species and whether full eradication of a pathogen or vector is justified at all” (ibid., p. 1). This discussion should be seen in the perspective that about 10% of the earth’s approximately six million insect species are threatened by extinction (Diaz et al. 2019). The role of disease prevention and eradication in species extinction is minuscule in comparison to other causes of reduced biodiversity.

Zero targets have also become popular in connection with iatrogenic diseases. In particular, infections spread in hospitals and clinics have been targeted in initiatives with names such as “zero hospital-acquired infections,” “zero healthcare-associated infections,” “zero tolerance for healthcare-associated infections” and “zero tolerance to shunt infections” (Warye and Granato 2009; Warye and Murphy 2008; Choksey and Malik 2004). The purpose of such zero targets is to “set the goal of elimination rather than remain comfortable when local or national averages or benchmarks are met” (Warye and Murphy 2008). In the discussion on these goals, phrases reminiscent of discussions on zero goals for workplace and traffic safety are common, for instance, “even one H[ealthcare] A[ssociated] I[nfection] should feel like too many” (Warye and Murphy 2008). However, as evidenced by frequent use of the term “zero tolerance,” the focus on individual compliance is often more pronounced in the medical context:

Lapses in [surgical] theatre discipline were not tolerated, and this attitude was inculcated into all present; we term this ‘zero tolerance’. (Choksey and Malik 2004)

A major impediment to achieving H[ealthcare] A[ssociated] I[nfection] zero tolerance has been a lack of accountability of hospital administrators and clinicians (including unit/ward/service directors). Where in the world would we be allowed to walk into the operating room and do surgery without complying with rules/regulations/culture of the operating room (e.g., strict hand hygiene, gowns, gloves, masks, sterile techniques, etc.)? Virtually nowhere. Yet, almost everywhere, I[ntensive] C[are] U[nit] directors, ward attendants, etc., commonly witness clinicians throughout their units/wards or healthcare facility (e.g., ICUs, wards, outpatient services, emergency departments, etc.) fail to comply with recommended infection control precautions and yet they say nothing. We must engage our hospital administrators and transition from a culture of benchmarking (i.e., are we as good as others like us) to a culture of zero tolerance (i.e., are we preventing all the H[ealthcare]A[ssociated]I[nfection]s we can prevent). Furthermore, hospital administrators must make it clear that unit/service/ward directors will be held accountable for the HAIs that occur in their patients in their units… No excuses should be tolerated. (Jarvis 2007, p. 8)

There has also been criticism against the use of zero goals in healthcare, in particular concerning iatrogenic infections. No medical intervention is completely free of risk, and sometimes, an intervention is justified although the risk of infection cannot be eliminated (Worth and McLaws 2012). A focus on zero targets may, according to some authors, be dangerous since it makes it “increasingly difficult to educate the public about the sources of risk of healthcare interventions” (Carlet et al. 2009).

The National Quality Forum in the United States, an umbrella organization of private and public healthcare organizations, conducts a campaign against “never events.” This refers to three types of surgical mistakes: wrong site, wrong procedure and wrong person. Wrong site operations are usually either left/right mistakes or operations on an incorrect level, e.g. surgery on the wrong vertebra or (in dentistry) wrong tooth extraction. Wrong procedure operations take place on the correct site but with the wrong type of surgery. An example would be the removal of a patient’s ovaries along with the uterus, when the purpose of the operation was only to remove the uterus. Wrong patient operations result from confusion between patients, for instance, patients with the same name. With careful preoperative and operative procedures, the risk of these never events can be substantially reduced (Michaels et al. 2007; Ensaldo-Carrasco et al. 2018). The British National Health Service (NHS) employs a more extensive list of never events, which includes, for instance, retention of a foreign object in the patient’s body after surgery, overdoses of insulin and transfusion with incompatible blood ( Last accessed August 19, 2020).

A Vision Zero for suicide was adopted by the Swedish government in 2007. It states, “No one should find him- or herself in such an exposed situation that the only perceivable way out is suicide. The government’s vision is that no one should have to end their life” (Kristianssen et al. 2018, p. 264). In 2011, the US National Action Alliance for Suicide Prevention (NAASP) published an ambitious action plan against suicides. It set the goal Zero Suicide, by which is meant that no suicide should take place among patients under treatment in the healthcare system. It is thus less ambitious than the Swedish goal, which also covers suicides by persons who are not patients. By adopting Zero Suicide, the organization tried to achieve “a transformation of a mindset of resigned acceptance of suicide into a mindset of active prevention of suicide as an outcome of treatment. Instead of asking how not to have more suicides than usual, a Zero Suicide organization challenges itself to have no suicides at all” (Mokkenstorm et al. 2018). Both the Swedish and the American zero suicide goals have been subject to criticism by authors who consider these goals to be unrealistic (Holm and Sahlin 2009; Smith et al. 2015). For an in-depth discussion on the goal of zero suicides and strategies to approach it, see Wasserman et al. (Chap. 37, “Vision Zero in Suicide Prevention and Suicide Preventive Methods”).

Environmental Protection

The term “zero waste” was used in environmental discussions as early as the 1970s. In 1975, two American researchers described how a water purification plant could achieve “‘zero’ waste discharge,” by which they meant total recycling of all wastes generated in the plant (Wang and Yang 1975, p. 67). The phrase “zero waste” is now quite common, often in combination with other terms to denote an area or activity that is free from waste: “zero waste campus,” “zero waste community,” “zero waste city,” “zero waste living” (Zaman 2015), “zero waste product” (Zaman 2014, 2015) and even “zero waste humanity” (Zwier et al. 2015).

However, there are widely divergent opinions on what should be meant by “zero waste.” This can be illustrated by the various ways in which the term is used about the treatment of household waste in cities. One common definition of zero waste in that context is “diversion from landfill”; in other words, that no waste goes to landfill (Zaman 2014, p. 407). As Atiq Zaman has pointed out, this is an unambitious goal since it “does not place enough emphasis on how waste can be reused as a material resource (as opposed to being incinerated, for instance)” (ibid., p. 408).

Another, much more ambitious, interpretation identifies zero waste with total recycling. On that interpretation, “[a] 100% recycling of municipal solid waste should be mandatory to achieve zero waste city objectives” (Zaman and Lehmann 2011, p. 86). Contrary to the less ambitious goal of avoiding landfill, the goal of total recycling cannot be achieved by cities and municipalities alone. Only if the products discarded by city dwellers consist of 100% recyclable material can the city achieve zero waste in this sense. Therefore, product design has to be a crucial component of the strategy (ibid., p. 86).

However, not even 100% recycling means that nature is unscathed by the city’s activities. Recycling does not necessarily mean that a product gives rise to raw material of the same quality and quantity as the material it was made from. Some authors have required that instead of becoming waste, materials should enter an “endless scheme” and “pass through the process of usefulness without losing their capacity to feed the system again after being used” (Orecchini 2007, p. 245):

The challenge is to achieve completely closed cycles. Anything but a closed cycle, which starts from useful resources and returns to them after their use, is unable to realize truly sustainable development: diffused, shared, and ideally endless for the entire human society. (Orecchini 2007, p. 246)

Material included in such cycles will not be consumed in the usual sense of the word, and therefore, systems based on these principles have been called systems of “zero consumption” (ibid., p. 245).

Zero waste and related concepts have also been applied to industrial processes. In that case as well, there are large variations in how the terms are defined. Sometimes, remarkably lenient interpretations have been used. For instance, consider the following definition of “zero discharge”:

[A] Z[ero] D[ischarge] system is most commonly defined as one from which no water effluent stream is discharged by the processing site. All the wastewater after secondary or tertiary treatment is converted to a solid waste by evaporation processes, such as brine concentration followed by crystallization or drying. The solid waste may then be landfilled. (Das 2005, pp. 225–226)

This means that a factory is said to have “zero discharge” if all its waste is put in a landfill, even if there is toxic leakage from that landfill. As this example shows, claims that an activity produces “zero” environmental harm can be severely misleading if the “zero” does not cover all environmental detriments associated with that activity.

The term “zero emissions” was used in a legal text adopted in California in 1990, namely, the Zero Emission Vehicle Mandate. It specified steps that the automobile industry had to take towards the introduction of zero emission vehicles (Kemp 2005). The definition of the term “zero emissions” is just as problematic as that of “zero waste.” For instance, battery electric vehicles are typically called “zero emissions” vehicles because they have zero tailpipe emissions of greenhouse gases. However, the production of these cars gives rise to greenhouse gas emissions, and the electricity used to charge the batteries may not have an emission-free source (Ma et al. 2012). Similarly, the claim that a biofuel is “zero-carbon” or has “net zero emissions” needs some qualification. The greenhouse effect of a carbon dioxide molecule from a biofuel is the same as that of a carbon dioxide molecule coming from coal. The advantages of the biofuel will only materialize through replanting resulting in photosynthesis that “compensates” the emissions from burning the fuel. This effect is delayed and depends on the future use of the harvested land (Sterman et al. 2018). Therefore, although replacing fossil fuel with biofuel is a clear advantage from the viewpoint of climate change mitigation, claims of net zero emissions cannot usually be validated.


The pacifist’s credo is a “zero”: no wars! In arms control negotiations, the total abolition of certain types of weapons has played a prominent role. In the world’s first treaty on chemical weapons, signed in Strasbourg in 1675, France and the Holy Roman Empire agreed not to use any poisoned bullets. The Geneva Protocol that went into force in 1929 prohibits all uses of chemical and biological weapons in war (Coleman 2005). This prohibition is still a cornerstone in the international law of war.

Since World War II, considerable efforts have been made to outlaw and eliminate the third major type of weapons of mass destruction, namely, nuclear weapons. The very first resolution adopted by the United Nations General Assembly, at a session in London on January 24, 1946, mandated work that would lead to “the elimination from national armaments of atomic weapons and of all other major weapons adaptable to mass destruction” (United Nations 1946). A large number of attempts have been made to reinvigorate this mission. Perhaps most remarkably, at the Reykjavík Summit in 1986, Ronald Reagan and Mikhail Gorbachev agreed to work for an agreement to eliminate all nuclear weapons. However, no such agreement materialized. One of the more prominent independent initiatives towards nuclear disarmament is the “Global Zero” initiative, which was launched in Paris in 2008 and is still highly active. It aims for a structured destruction of all nuclear weapons, ultimately resulting in a world without such weapons. In a speech in Prague on April 5, 2009, President Barack Obama expressed “America’s commitment to seek the peace and security of a world without nuclear weapons” (Holloway 2011). At the time of writing, chances of nuclear disarmament seem to be at a low point.

However, in 1999, the Ottawa Treaty against Anti-Personnel Landmines came into force. Its signatories comprise the vast majority of the world’s countries (but as yet neither China, Russia, nor the United States). Each state that has signed the treaty is committed to “never under any circumstances” use anti-personnel mines and to destroy all such mines in its possession ( Last accessed August 19, 2020).


As we have seen, the “zero family” is broad and diverse. In this subsection, we are going to consider three major ways in which the members of this family differ from each other: (1) how realistic they are; (2) the objects of the zero goals or, in other words, what it is that one strives to make zero; and (3) the subjects, i.e. the persons or organizations tasked with achieving or approaching zero.


The zero targets we have examined above cover a wide range of degrees of realism, from the proven (but initially doubted) realism of eradicating smallpox to goals such as zero deaths in all American workplaces that seem to be exceedingly difficult to fully achieve.

Criticism of “zero” as too unrealistic is one of the recurrent themes in the literature on zero targets (Chap. 3, “Arguments Against Vision Zero: A Literature Review”). For instance, Goh and Xie claim that the “zero defects” goal is impossible to reach due to “one fundamental characteristic of nature, namely that all natural elements are subject to variation”:

By deduction, therefore, ‘zero defect’ by itself is a pseudo-target, attractive and even seductive when brandished at management seminars, but misleading or self-deluding on the shop floor. For example, has anyone ever claimed, directly or indirectly, to have run a printed circuit board soldering machine ‘right the first time’ and obtained ‘zero defect’ in the soldered joints all the time? (Goh and Xie 1994, p. 5)

In consequence, they propose that “do it better each time” is a better slogan than “zero defects” (Goh and Xie 1994, p. 5). Similar criticism has been levelled, for instance, against Vision Zero for traffic safety (Elvik 1999) and against zero targets for iatrogenic infections (Carlet et al. 2009).

Defenders of zero targets have pointed out that what initially seems to be unrealistic may become realistic if old ways of thinking are broken up. Setting zero goals can be an efficient way to overcome fatalism (Zwetsloot et al. 2013, pp. 44–45) and to defeat “the subtle message that fatal injuries will occur and are acceptable” (Nelson 1996, p. 23). For instance, the zero suicide goal can serve to induce “a transformation of a mindset of resigned acceptance of suicide into a mindset of active prevention of suicide as an outcome of treatment” (Mokkenstorm et al. 2018, p. 750).

Obviously, partial achievement of a zero goal can be a great step forward. For instance, the drastic reduction in polio cases that has been achieved in the programme for polio eradication is already an outstanding accomplishment in terms of human health and welfare, even though the disease has not (yet) been fully eradicated. Even in cases when full attainment is not within reach, zero goals can be efficient means to inspire important improvements. In other words, goals can be achievement inducing even if they cannot be fully attained (Edvardsson and Hansson 2005; Chap. 1, “Vision Zero and Other Road Safety Targets”).

Our traditions and conventions concerning the realism of goals differ considerably between social areas (see Fig. 3). In some areas, the tradition is to set goals only after carefully investigating and taking into account what is feasible and what compromises with other objectives are necessary. We can call this restricted goal setting. For instance, goals for economic policies are usually set in this way. Another example is the setting of occupational and environmental exposure limits, which is usually preceded by careful investigations of what exposure levels can be achieved in practice.

Fig. 3
figure 3

Above, restricted goal setting. Below, aspirational goal setting

In other areas, the tradition is instead to set goals without first determining what is in practice feasible. We can call this aspirational goal setting. For instance, it is desirable that no one should be exposed to violence, but unfortunately, this is not a realistic goal that can be fully achieved. However, law enforcement agencies do not operate with “compromised” goals such as “at most 10 murders and 20 rapes in this district next year.” The reason for this is of course that the “uncompromised” goal of no violent crimes is good enough as an indication of what they should strive for.

It is reasonable to assume that aspirational goal setting tends to result in goals that are more inspiring than those emerging from restricted goal setting. On the other hand, the latter goals are often more suitable for guiding policy implementation and evaluation. In many cases, the best solution can be to combine both types of goals, in order to obtain both inspiration and practical guidance (Edvardsson and Hansson 2005; Chap. 1, “Vision Zero and Other Road Safety Targets”).

The Zero Object

By the object of a zero goal or target, we mean that which is required to be zero. Zero objects can be classified according to how narrow or broad they are. For instance, the Vision Zero of traffic safety has a fairly narrow zero object: The goal is explicitly limited to fatalities and serious injuries, and it does not require strivings for zero occurrence of less serious accidents. As we saw above, some of the measures taken to implement Vision Zero do in fact increase the frequency of less serious accidents. For instance, the introduction of roundabouts in four-way crossings decreases the risk of high-speed collisions with fatal outcomes, but it also increases the risk of low-speed collisions with at most minor personal injuries. The “Work to Zero” goal of the US National Safety Council has a similar approach; its zero object is limited to fatal accidents at workplaces (McElhattan 2019).

However, many other zero goals in workplace safety operate with a broad zero object. For instance, Emmit J. Nelson recommends “first setting the injury commitment to zero lost-workday cases.” When that has been achieved, “the commitment can then become zero recordables” (Nelson 1996, p. 23). Even wider zero objects that include incidents (near accidents) are also common in workplace safety (Zou 2010).

Does a broad zero object, which includes incidents and minor accidents, divert attention and resources away from the most serious accidents, which should have top priority? Or is a broad approach, to the contrary, a superior way to prevent serious accidents, since it stops the beginnings of event chains that would later lead to a serious accident? The answer to these questions will depend on the extent to which the most serious accidents begin in the same way as the less serious ones, or as Sidney Dekker expressed it, “If preventing small things is going to prevent big things, then small and big things need to have the same causes” (Dekker 2017, p. 127).

In this respect, there are large differences between different types of accidents. Fires in households are an example with a large overlap. Most big fires in housing areas begin as small fires. The prevention of small home fires is therefore an efficient – and necessary – means to prevent big fires in housing areas. On the other hand, large accidents in a nuclear plant (such as those in Chernobyl in 1986 and Fukushima in 2011) do not typically start in the same way as small accidents in the same plants. Preventing accidents with handheld tools, or falls from scaffolding, in a nuclear plant is of course important in its own right. However, it does not usually contribute much to preventing the event sequences that end up in large accidents with massive emissions of radioactive material to the surroundings. There are similar situations in other industries. The BP Deepwater Horizon explosion in April 2010, which killed eleven workers and gave rise to the largest marine oil spill in human history, seems to be a case in point. The accident was reportedly preceded by six years of injury-free and incident-free operations on the rig. The explosion was the result of other types of failures than those that lead to smaller workplace accidents (Dekker 2017, p. 125). Zero tolerance in law enforcement is an interesting parallel case. Its efficiency depends on the degree to which the targeted minor offences are parts of the causal chains that lead to more serious, violent crimes. The failure of some zero tolerance programmes seems to have been due to a lack of such overlaps in causal chains.

We can learn from examples like these that the choice between broad and narrow zero objects has to be informed by a careful analysis of overlaps between different kinds of undesirable event chains. This may of course result in broad zero goals being chosen in some areas and narrow goals in others.

Some organizations have a whole package of zero goals, referring, for instance, to different aspects of product quality, waste reduction, etc. Zero goals for workplace safety have often been integrated as one of several zeros in such combinations. It would not be unreasonable to worry that the safety-related zero will be outcompeted by other, perhaps more economically important, goals in the same package. However, experience reported in the literature indicates that inclusion in such comprehensive zero packages strengthens the safety goal by increasing management commitment (Twaalfhoven and Kortleven 2016, p. 65; Zwetsloot et al. 2017c, pp. 95–96):

For example, Paul O’Neill, the former chief executive officer of aluminum manufacturer Alcoa, made a public commitment to his employees, the media, and investors that Alcoa would target zero accidents in its plants. He then made it the single most important performance metric for everyone in his organization, including himself. At the time O’Neill introduced this effort, many were concerned that Alcoa might be “overinvesting” in safety such that, on the margin, the benefits of added safety would be exceeded by the costs of the productivity lost in trying to achieve it. The history of Alcoa’s efforts, however, suggested that there was no such trade-off between safety and productivity. In fact, O’Neill’s aspirational push led to improvements on both dimensions for Alcoa. (Huckman and Raman 2015, p. 1811)

But obviously, such a conflict-free relation between safety and productivity cannot be taken for granted. Equally obviously, safety goals may have to be pursued even when they clash with other important goals. (Zwetsloot and co-workers (2017a, p. 261) report that “commitment to product safety in the Chinese industry may decrease the commitment to work safety,” but no confirmation of that claim could be found in the source referred to.)

The Zero Subject

Since the early twentieth century, two major approaches to responsibilities for accidents and other untoward events have been prominent in safety management. One of them is the environmental theory that received much of its scientific basis in the work that the American sociologist Crystal Eastman (1881–1928) presented in a seminal book on workplace accidents, published in 1910. Based on detailed investigations of a large number of accidents, she showed that the same types of accidents were repeated again and again and that they could be prevented by appropriate measures in the workplace. This approach put the responsibility for workplace safety on employers, in contrast with two attitudes that were common at the time, namely, that accidents were unavoidable “acts of God” and that they were the results of workers’ carelessness. Eastman’s work had considerable influence in safety engineering, and it was instrumental in the creation of a workers’ compensation law. Safety work in this tradition, with its emphasis on technological and organizational measures that make workplaces less dangerous, has been instrumental in reducing risks of accidents in all kinds of workplaces (Swuste et al. 2010).

In the 1920s, psychologists Eric Farmer and Karl Marbe independently developed another approach, often called the accident-proneness theory. Their basic idea was that most accidents are caused by a minority of workers who behave dangerously in the workplace. Therefore, accidents could be avoided by psychological testing that would identify and exclude these workers (Swuste et al. 2010, 2014). However, in contrast to the environmental theory, the accident-proneness theory did not contribute much to improved safety. No tests were developed that could identify workers with an increased proneness to accidents. In addition, the basic assumptions of the theory turned out not to hold. For instance, much of its alleged scientific support came from studies showing that accidents are quite unevenly distributed among workers in the same workplace. This could of course depend on some workers having more dangerous tasks and working conditions than others. Such differences were not taken into account in these studies, and therefore, no conclusions can be drawn from them (Rodgers and Blanchard 1993). The theory fell into disrepute among safety professionals, but it has not entirely died out. Various attempts have been made to revive it, but it is still fraught with the problems that caused its decline in the 1950s and 1960s. In 1975, the road safety researcher Colin Cameron summarized the situation as follows:

For the past 20 years or so, reviewers have concluded, without exception, that individual susceptibility to accidents varies to some degree, but that attempts to reduce accident frequency by eliminating from risk those who have a high susceptibility are unlikely to be effective. The reduction which can be achieved in this way represents only a small fraction of the total. Attempts to design safer man–machine systems are likely to be of considerably more value. (Cameron 1975, p. 49)

This conclusion still stands. Or as Sidney Dekker, another safety researcher, concluded in a recent review of the literature:

The safety literature has long accepted that systemic interventions have better or larger or more sustainable safety effects than the excision of individuals from a particular practice. (Dekker 2019, p. 79)

The contrast between the environmental theory and the accident-proneness theory comes out clearly in the above review of zero targets and goals. Most of the zero goals have been operationalized in the tradition of the environmental theory. That applies, for instance, to Vision Zero in traffic safety and to most of the zero targets for workplace safety. However, there are also zero goals with a strong focus on correcting or removing individuals whose behavior is deemed undesirable. The clearest example of this is zero tolerance in law enforcement. Notably, applications of Vision Zero in traffic and workplace safety have on the whole been successful, whereas zero tolerance in law enforcement has largely been abandoned since it did not deliver. Against the background of previous experiences with the two major approaches to safety, this should be no surprise.

But obviously, an environmental, or as we can also call it, system-changing, approach to safety should not be taken as a reason for individuals to be careless about safety and leave everything to the system. Individual attention to safety is still needed. Furthermore, improvements of safety do not come by themselves. They require individuals who propose, demand and implement them.

Improvement Principles

The various zero targets and goals refer to different categories of risks, but their common message is that no level of risk above zero is fully satisfactory. Consequently, improvements in safety should always be striven for as long as they are at all possible. Several other safety principles have essentially the same message. Among the most prominent of these are continuous improvement, as low as reasonably achievable (ALARA) and best available technology (BAT). These principles differ in their origins, and they are used in different social areas, but they all urge us to improve safety whenever we can do so. We can call them improvement principles (Hansson 2019).

Continuous Improvement

The so-called quality improvement movement in industrial management originated in the United States in the 1920s and 1930s. Walter A. Shewhart (1891–1967), W. Edwards Deming (1900–1993) and Joseph Moses Juran (1904–2008) were among its most important pioneers. Their focus was on the quality of industrial products and production processes, which they succeeded in improving by means of new methods of quality control and new ways to incentivize workers. Their ideas were adopted not only in the United States but even more in Japan, whose fledgling export industry was struggling to wipe out its reputation for producing low-quality products. The widespread adoption of American ideas of quality improvement has often been cited as an explanation of the so-called Japanese economic miracle, a long period of economic growth that began after World War II and lasted throughout the 1980s (Bergman 2018, p. 333; Berwick 1989, p. 54). Japanese companies with strict devotion to quality throughout their organization often achieved results that astonished Western visitors:

When a team of Xerox engineers visited Japan in 1979, they discovered that competitors were manufacturing copiers at half of Xerox’s production costs and with parts whose freedom from defects was better by a factor of 30. (Abelson 1988)

The word “kaizen,” which means “improvement,” was used in the Japanese quality improvement movement as a general motto, covering all the aspects in which industrial products and processes should be improved. In English, “kaizen” has frequently, but not quite accurately, been translated as “continuous improvement,” abbreviated CI (Singh and Singh 2009). With this terminology, the quality improvement ideas developed in the United States in the 1920s and 1930s were reimported into the United States and other Western countries in the 1980s and 1990s. The main ideas were summarized by a group of British management researchers as follows:

In its simplest form CI can be defined as a company-wide process of focused and continuous incremental innovation. Its mainspring is incremental innovation–small step, high frequency, short cycles of change which taken alone have little impact but in cumulative form can make a significant contribution to performance. (Bessant et al. 1994, p. 18)

The concept of continuous improvement also found resonance in the safety professions. However, in spite of its origin in industrial management, it is not in workplace safety but in patient safety that continuous improvement has been most successful. Beginning in the 1980s, professionals working with patient safety in hospitals and clinics around the world have adopted continuous improvement as an overarching idea for their activities (Batalden and Stoltz 1993; Bergman 2018; Berwick 1989, 2008; Berwick et al. 1990). A major reason for its success in healthcare is that continuous improvement fits well into the evaluation culture in modern healthcare, with its use of standardized treatment protocols. Just as in industrial management, continuous improvement in healthcare means that there is no “optimal quality level beyond which further improvement would not be worth the incremental cost of achieving it”; to the contrary, “each instance of improvement is an invitation to consider options for further improvement” (Huckman and Raman 2015, p. 1811). This way of thinking is very much in agreement with Vision Zero and other zero goals.

There is also another interesting similarity between continuous improvement and Vision Zero: They both tend to be closely aligned with the environmental approach to accidents and other untoward events (cf. section “The Zero Subject”). Although doctors and nurses are still personally responsible for the treatments they recommend and administer, the focus in healthcare is shifting away from individual weaknesses to weaknesses in routines, technologies and organizational structures (Kohn et al. 2000). The reason for this is that, perhaps in particular in healthcare, “defects in quality could only rarely be attributed to a lack of will, skill, or benign intention among the people involved with the processes” (Berwick, 1989, p. 54). The need for an organizational focus in patient safety was explained as follows by two researchers in healthcare management:

M[aintenance] O[f] C[ertification] aims to certify that individuals are proficient in their target responsibilities. This individual certification offers some degree of reassurance to patients who want to know that they are receiving treatment from qualified physicians. Continuous process improvement, however, assumes that quality primarily depends on the process, not simply the individuals who execute it. A central tenet of continuous process improvement is that the problem must be separated from the person. This recognition is important for at least [two] reasons. First, it focuses attention on the process, which is often the root cause of a defect. Second, it makes it safe for workers to highlight issues without concern that they or their colleagues will experience adverse consequences, such as being blamed as the source of the problem. (Huckman and Raman 2015)

In healthcare, continuous improvement has mostly been applied as a principle only for safety. It has usually not been directly aligned with improvements in terms of other goals, such as cost containment or increased productivity. In contrast, the application of continuous improvement in industrial safety is usually part of a more general management strategy, which also includes improvements in other respects than safety. Cost reduction is often a dominant criterion of what constitutes an improvement (Baghel 2005, pp. 765–766). In the literature on continuous improvement, examples are often given that show how productivity, quality, safety and economic output can all be improved at the same time. Potential conflicts among these goals are less often discussed. A prominent exception can be found in a 2002 report from the Nuclear Energy Agency of the OECD on improvements in nuclear plants:

The modern equipment often detects cracks and faults in components and welds that were undetectable by the equipment available when the plant was constructed. If the plant has operated safely and reliably for many years, and there is good evidence that the defect is not “growing”, should the regulator require the defect to be repaired, especially if the repair might degrade other safety features of the plant? Such questions present a real challenge to the regulator when he has to decide how to react to such new information and he must be clear whether he is requiring the licensee to maintain safety or to improve safety. The costs involved can be very great and, in the present financial climate, utilities are likely to mount strong challenges to requirements which they perceive go beyond the original design basis…

The lesson for nuclear safety here may very well be: qui n’avance pas recule! [Who doesn’t advance retreats.] (Nuclear Energy Agency 2002, pp. 11 and 17)

Similar situations may well occur in other areas and should then be discussed and dealt with in a transparent and responsible manner.

As Low as Reasonably Achievable (ALARA)

The as low as reasonably achievable (ALARA) principle originated in mid-twentieth century radiation protection.

Within 5 years after Röntgen’s discovery of X-rays in 1895, researchers working with radioactive material noted that high exposures gave rise to skin burns. It was generally believed that exposures low enough not to produce such acute effects were innocuous, but some physicians warned that radiation might have unknown detrimental effects (Kathren and Ziemer 1980; Oestreich 2014). In the Manhattan project, which developed the first nuclear weapons, the radiologist Robert S. Stone (1895–1966) in the Health Division of the Metallurgical Laboratory in Chicago was assigned to determine “tolerance levels” for radiation exposures of workers. He reported that there was no known safe level for such exposures. Instead of fixed tolerance levels, he proposed that exposures should be kept as low as practically possible. His proposal was accepted (although some wartime exposures were very high, judged by modern standards) (Auxier and Dickson 1983).

After the war, this precautionary no-limit approach was much strengthened by growing awareness that exposure to ionizing radiation increases the risk of leukaemia. The risk appeared to be stochastic: it increases with increasing exposures, but researchers could not identify any “threshold dose,” i.e., any exposure level above zero below which the risk would be zero. Many scientists believed that the risk of radiation-induced leukaemia was approximately proportional to the radiation dose (the “linear dose-response model”) (Lewis 1957; Brues 1958; Lindell 1996). In consequence, Robert Stone’s approach was adopted by the US National Committee on Radiation Protection (NCRP). In a 1954 statement, they declared that radiation exposures should “be kept at the lowest practical level” (Auxier and Dickson 1983). In 1958, the International Commission on Radiological Protection (ICRP) took a similar standpoint, based on a review of what was then known about the dose-response relationships of leukaemia and other cancers:

The most conservative approach would be to assume that there is no threshold and no recovery, in which case even low accumulated doses would induce leukaemia in some susceptible individuals, and the incidence might be proportional to the accumulated dose. The same situation exists with respect to the induction of bone tumors by bone-seeking radioactive substances…

It is emphasized that the maximum permissible doses recommended in this section are maximum values; the Commission recommends that all doses be kept as low as practicable, and that any unnecessary exposure be avoided. (ICRP 1959, pp. 4 and 11)

This recommendation has been reconfirmed in a long series of decisions by the ICRP. In 1977, it was rephrased as a requirement that “all exposures shall be kept as low as reasonably achievable, economic and social factors being taken into account” (ICRP 1977, p. 3). This principle goes under many names. Apparently, it was first called “as low as practicable” (ALAP), but that name was soon replaced by “as low as reasonably achievable” (ALARA) and “as low as reasonably practicable” (ALARP) (Wilson 2002). Currently, it is known under the following names and abbreviations (Reiman and Norros 2002; Nuclear Energy Agency 2002, p. 14; HSE 2001, p. 8):

  • As low as practicable (ALAP)

  • As low as reasonably achievable (ALARA)

  • As low as reasonably attainable (ALARA)

  • As low as reasonably practicable (ALARP)

  • So far as is reasonably practicable (SFAIRP)

  • Safety as high as reasonably achievable (SAHARA)

ALARA is now recognized worldwide as a principle for radiation protection. It is usually applied to collective doses (i.e. the sum of all individual doses in a plant or an activity), rather than to individual doses. It is therefore often seen as a utilitarian principle. However, it is combined with upper limits for individual exposures, which can be interpreted as based on deontological principles (Hansson 2007a, 2013b). In most countries, this principle is not much used outside of radiation protection. The major exception is Britain, where it has an important role in general workers’ health and safety. In this application, it is mostly applied to individual risks (HSE 2001).

In the interpretation of ALARA, it is essential to pay attention to the meaning of the term “reasonable” (alternatively “achievable” or “practicable,” in other names of the principle). This term is often used in legal texts such as regulations and rulings, where it has two major functions (Corten 1999). First, it makes regulations adaptable and allows their interpretation to be adjusted to circumstances unforeseen by the lawmaker. In this way, the word “reasonable” can resolve “a contradiction between the essentially static character of legal texts and the dynamic character of the reality to which they apply” (ibid., p. 615). Second, references to reasonableness provide legitimacy to a legal order “by presenting an image of a closed, coherent and complete legal system.” The notion “masks persistent contradictions regarding the meaning of a rule, behind a formula which leaves open the possibility of divergent interpretations” (ibid., p. 618).

The reasonableness of the ALARA principle (the “R” in the acronym) appears to have both these functions. The first function becomes apparent in the use of “reasonableness” in adjustments to various kinds of economic and practical constraints. It allows the ALARA principle to be “balanced against time, trouble, cost and physical difficulty of its risk reduction measures” (Melchers 2001). The second function shows up when divergences between safety and other considerations are “internalized” within safety management by treating certain potential solutions to safety problems as unreasonable, instead of presenting these divergences as conflicts to be resolved.

According to the original conception of ALARA, it applies to all non-zero risks. For instance, even very low radiation doses should be eliminated if this can be done. However, ALARA has often been reinterpreted so that it only applies to risks above a certain threshold of concern. Hence, the Health and Safety Executive (HSE) in Great Britain has introduced a three-levelled approach to risk, dividing risk exposures into three ranges. In the highest range, risks have to be reduced irrespective of the costs, and there is no need for ALARA considerations. In the lowest range, risks are assumed to be acceptable, which means that there is no need for reducing them, and consequently, ALARA does not apply. It is only in the medium range that ALARA-based activities are said to be applicable. According to the HSE’s tentative limits for the three regions, the “ALARA region” comprises activities with an individual risk of death per year between one in a million and one in one thousand (HSE 2001, pp. 42–46). A similar three-levelled approach has also been proposed for exposures to ionizing radiation (Kathren et al. 1984; Hendee and Edwards 1986).
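The three-band scheme just described can be sketched as a simple classifier. The band boundaries are the tentative figures quoted above (individual risk of death per year); the function name and everything else in the sketch are our own illustration, not part of the HSE's guidance.

```python
def hse_risk_band(annual_death_risk: float) -> str:
    """Classify an individual annual risk of death according to the
    HSE's tentative three-band scheme (HSE 2001, pp. 42-46).
    Illustrative only: real regulatory practice involves far more context."""
    if annual_death_risk >= 1e-3:
        # Upper band: the risk must be reduced irrespective of cost.
        return "unacceptable"
    elif annual_death_risk > 1e-6:
        # Middle band: ALARA-based risk reduction applies.
        return "ALARA region"
    else:
        # Lower band: the risk is assumed to be broadly acceptable.
        return "broadly acceptable"

print(hse_risk_band(1e-2))   # unacceptable
print(hse_risk_band(1e-4))   # ALARA region
print(hse_risk_band(1e-7))   # broadly acceptable
```

The point of the sketch is only to make the structure visible: under this interpretation, ALARA reasoning is confined to the middle band, which is precisely the weakening discussed in the following paragraph.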

Such interpretations of ALARA, which disallow its application to risks below a certain level, regardless of how cheaply and easily they can be reduced, make ALARA considerably weaker than full-blown improvement principles such as Vision Zero and continuous improvement. The problems with such a weakening were eloquently expressed as early as 1981 by two leading radiation protection experts, Bo Lindell (1922–2016) and Dan J. Beninson (1931–1994):

[I]n each situation, there is a level of dose below which it would not be reasonable to go because the cost of further dose reduction would not be justified by the additional eliminated detriment. That level of dose, however, is not a de minimis level below which there is no need of concern, nor can it be determined once and for all for general application. It is the outcome of an optimization assessment which involves marginal cost-benefit considerations… It is not reasonable to pay more than a certain amount of money per unit of collective dose reduction, but if dose reduction can be achieved at a lesser cost even at very low individual doses, the reduction is, by definition, reasonable. (Lindell and Beninson 1981, p. 684)
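The marginal reasoning in this quotation can be sketched in a few lines. In radiation protection, such optimization is often operationalized with a reference monetary value per unit of collective dose averted (the so-called alpha value); the numbers below are hypothetical, chosen only to show the logic.

```python
def is_reasonable(cost: float, dose_averted: float, alpha: float) -> bool:
    """A dose-reduction measure is 'reasonable' in Lindell and Beninson's
    sense if its cost per unit of collective dose averted does not exceed
    the reference value alpha. Note that the absolute dose level plays no
    role: only the marginal cost-benefit ratio matters."""
    return cost <= alpha * dose_averted

alpha = 100_000.0  # hypothetical monetary value per person-sievert averted

# A cheap reduction is reasonable even if the doses involved are very low...
print(is_reasonable(cost=50_000.0, dose_averted=1.0, alpha=alpha))   # True
# ...while an expensive one with the same effect is not.
print(is_reasonable(cost=500_000.0, dose_averted=1.0, alpha=alpha))  # False
```

This captures the passage's key claim: there is no fixed de minimis dose below which reduction is unreasonable; reasonableness is decided anew for each measure by its marginal cost.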

Best Available Technology (BAT)

Probably the first legal requirement to employ the best practicable technology to solve environmental problems can be found in the British Alkali Act of 1874. After specifying in some detail how emissions of hydrogen chloride should be reduced, the Act continued:

In addition to the condensation of muriatic acid gas [hydrogen chloride] as aforesaid, the owner of every alkali work shall use the best practicable means of preventing the discharge into the atmosphere of all other noxious gases arising from such work, or of rendering such gases harmless when discharged. (Anon 1874, p. 168)

The term “best practicable means” was interpreted as referring to both economic limitations and technological feasibility (Holder and Lee 2007, p. 331).

When new and more ambitious environmental regulations were introduced in the 1960s and 1970s, it was soon discovered that statutes requiring specific technological solutions have two important problems: They are unadaptable, since they do not allow industry to achieve the same effect with different technical means, and they are also slow-moving and tend to lag behind when new and better technological solutions become available. Regulations specifying maximal allowed emissions are more adaptable, since they leave it to industry to decide how to achieve the required emission standard. However, they are just as slow-moving as statutes requiring a specific technology (Sunstein 1991, pp. 627–628n; Ranken 1982, p. 162). Legislation based on the best available technology (BAT) was introduced as a way to solve both these problems and make legislation both adaptable and sufficiently fast-moving. The basic idea was to require use of the best available emission-reducing technology. Such a rule is technology neutral; if there are alternative ways to reach the best result, then each company can make its own choice among these alternatives. Furthermore, BAT statutes can stimulate innovations in environmental technology. If a new technological solution surpasses those previously available, then it becomes the new BAT standard to which industry must adjust.

A large number of synonyms and near-synonyms of “best available technology” have been used in different jurisdictions, including the following (Merkouris 2012; Vandenbergh 1996; Ranken 1982):

  • Best available control technology (BACT)

  • Best available techniques (BAT)

  • Best available technology not entailing excessive costs (BATNEEC)

  • Best environmental practice (BEP)

  • Best practicable control technology (BPT)

  • Best practicable environmental option (BPEO)

  • Best practicable means (BPM)

  • Lowest achievable emissions rate (LAER)

  • Maximum achievable control technology (MACT)

  • Reasonably achievable control technology (RACT)

Several of the above terms, perhaps in particular LAER, can also be interpreted as variants of the ALARA principle. In some legal systems, more than one of these concepts is used, often with different specifications. For instance, in American legislation, “best practicable control technology” (BPT) has been used for less stringent demands on emissions control than the stricter “best available technology” (BAT).

In the United States, BAT strategies were introduced into most environmental legislation in the 1970s and 1980s and became “a defining characteristic of the regulation of the air, water, and workplace conditions” (Sunstein 1991, pp. 627–628). Legislation based on the BAT concept has also been introduced in most European countries and at the European level. The European Directive on Industrial Emissions does not allow large industrial installations to operate without a permit that imposes emission standards based on best available techniques (Merkouris 2012; Schoenberger 2011). The BAT concept is also employed in several international treaties, such as the 1992 Convention on the Protection of the Marine Environment of the Baltic Sea Area (the Helsinki Convention) and the Convention for the Protection of the Marine Environment of the North-East Atlantic (the OSPAR Convention) from the same year (Merkouris 2012).

BAT requirements are widely used in emissions control. However, there is one major category of emissions for which they are not much used: legislation on the limitation and reduction of greenhouse gas emissions has in most cases been based on other regulatory principles, including standards based on current technologies and tradable emission permits. Perhaps surprisingly, the BAT concept does not seem to have been used systematically in safety legislation either. For instance, type approval and similar procedures for motor vehicles, aircraft, electrical appliances, etc. are based on well-defined technical standards, and there does not seem to be a movement towards replacing such standards with references to the best available technology. Proposals have sometimes been made to apply the BAT concept to areas other than emissions control, but with relatively little success (Helman and Parchomovsky 2011). One of the few cases in which BAT principles have been applied to safety is the 1967 guidelines for the safety standards to be developed by the US National Highway Traffic Safety Administration. Such standards were to be “stated in terms of performance rather than design specifying the required minimum level of performance but not the manner in which it is to be achieved” (Blomquist 1988, p. 12).

Just as the stringency of ALARA principles depends on how the R (“reasonable”) is interpreted, the stringency of BAT principles hinges on the interpretation of the A (“available”). Originally, BAT regulations did not require a cost-benefit analysis. This has often been an advantage from the viewpoint of safety. For instance, this made it possible for American regulators to ensure that offshore technologies “truly implement the best available technology as opposed to technology that is only economically convenient” (Bush 2012, p. 564). However, in some BAT regulations, “available” is interpreted as economically feasible. This has led to the linguistically somewhat awkward situation that there are technologies on the market that are better than the “best available technology” (but too expensive). Such technologies have been called “beyond BAT” (Schoenberger 2011).

On the other hand, there are cases in which not even the best technology that is at all available (at any price) is good enough to protect the environment. In some such cases, regulatory agencies have been authorized to impose requirements stricter than the BATs in order to achieve sufficient protection of the environment (Vandenbergh 1996, pp. 837–838 and 841). Some authors claim that BAT regulations are inadequate since they focus on what can currently be done rather than on what is most important to do, thereby distracting “from the central issue of determining the appropriate degree and nature of regulatory protection” (Sunstein 1991, p. 629; cf. Ackerman and Stewart 1988, pp. 189–190).


The improvement principles that we have studied in this and the previous section have much in common. They all carry the same basic message, namely, that as long as improvement in safety is possible, it should be pursued. They therefore serve as antidotes to fatalism and complacency.

None of this disallows economic and other competing considerations from having a role in determining the pace and means of implementation. It would be futile to prescribe that all improvements should be implemented immediately, regardless of costs. But there is an important difference between postponing a safety improvement and dismissing it altogether. When compromises with other social objectives are necessary, the improvement principles induce us to see these compromises as temporary, unsatisfactory concessions. This is a clear signal that currently unrealistic safety improvements should be pursued if and when they become realistic and that innovations that put them within reach are most welcome.

However, several of the improvement principles have been subject to reinterpretations that dismiss instead of postpone currently unrealistic safety improvements. For ALARA, this has taken the form of interpreting the R (“reasonable”) as excluding the reduction of comparatively small risks, even if such reductions can be done with very small effort and sacrifice. For BAT, the A (“available”) has been reinterpreted so that costly but affordable reductions in emissions are not required. It is one of the advantages of Vision Zero and other zero goals that they do not easily lend themselves to such debilitating reinterpretations.

Aspiration Principles

The improvement principles form part of a larger group of safety principles, namely, those that tell us what levels of safety or risk reduction we should aim at or aspire to. This larger group can be called the aspiration principles (Hansson 2019). As shown in Fig. 1, we can distinguish between three major types of aspiration principles, in addition to the improvement principles.

Acceptance principles draw a line between acceptable and unacceptable risks. That limit depends on the risks alone, without taking the benefits that come with the risks into account. We will consider three types of acceptance principles: risk limits, exposure limits and equipment and process regulations.

Weighing principles require that we weigh safety against other objectives, such as productivity and economic gains, and strike a balance between them. Whereas acceptance principles usually have an affinity with deontological (duty-based) moral thinking, weighing principles have much in common with consequentialist ethics. We will discuss three types of weighing principles, namely, cost-benefit analysis, individual cost-benefit analysis and cost-effectiveness analysis.

Finally, we will discuss hypothetical retrospection, a safety principle requiring that our decisions remain defensible in the future.

Risk Limits

In the early days of risk analysis, some risk analysts maintained that all dangers falling below a certain risk limit are acceptable. That limit was usually expressed as a probability of death, often a “cut-off level of 10⁻⁶ individual lifetime risk [of death]” (Fiksel 1985, pp. 257–258). That idea has now largely been replaced by more sophisticated approaches that weigh risks against the benefits that accompany them. However, the idea of a risk limit has repeatedly been revived, usually under the label of a “de minimis” position in risk regulation, according to which there is a probability threshold below which a risk is always acceptable, even if it comes without any advantages.

It does not take much intellectual effort to see that this is an untenable approach (Pearce et al. 1981; Bicevskis 1982; Otway and von Winterfeldt 1982; Hansson 2013a, pp. 97–98). To begin with, since the “de minimis” principle is applied to each risk individually, it does not protect us against large cumulative effects of a large number of risks, each of which falls below the limit. For instance, in modern societies, we are exposed to a large number of chemical substances. If each of them were to give rise to a “de minimis” risk, the combination of them all could nevertheless be far above the risk limit.
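The cumulative-effect argument can be made quantitative with a short sketch. Assuming, purely for illustration, that the individual risks are statistically independent (which real chemical exposures need not be), the probability that at least one of n risks of probability p materializes is 1 − (1 − p)^n:

```python
def combined_risk(p: float, n: int) -> float:
    """Probability that at least one of n independent risks,
    each with probability p, materializes."""
    return 1 - (1 - p) ** n

p = 1e-6   # each exposure is "de minimis" by the usual 10^-6 criterion
n = 2000   # hypothetical number of separate exposures

print(combined_risk(p, n))  # about 2.0e-3, some 2000 times the 10^-6 cut-off
```

Even with every individual risk comfortably below the cut-off, the combined risk exceeds it by orders of magnitude, which is exactly the weakness of applying the limit to each risk in isolation.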

More fundamentally, even risks with very low probabilities are clearly unjustified if they bring nothing good with them. For a simple example, suppose that someone constructs a bomb connected to a random generator, such that the probability is 10⁻⁹ that the bomb will detonate. The bomb has been covertly mounted in a place where it will kill exactly one (unsuspecting) person if it explodes. The risk associated with such a device is “de minimis” according to the usual criteria. However, it is clearly unacceptable, for the simple reason that it imposes a frivolous, completely unjustified risk on a person who did not ask for it. On the other hand, we routinely take risks of 10⁻⁹ or higher in order to gain some advantage. Travelling an hour by car is one example of this (International Transport Forum 2018, p. 21). We can conclude from examples like this that the acceptability of a risk imposition cannot be determined based only on the size of the risk. Other factors, such as the associated benefits, have to be taken into account, even if the risk is very small. Reliance only on the size of the risk has been called the “sheer size fallacy” in risk analysis (Hansson 2004a, pp. 353–354).

Exposure Limits

Our second group of acceptance principles is exposure limits, numerical upper bounds on allowable exposures to chemical substances and to physical hazards such as noise and radiation. Exposure limits can be based on the same types of considerations as risk limits, but they are much easier to implement, since exposures can usually be measured with well-established chemical and physical methods.

The first limits for occupational exposures were proposed by individual researchers in the 1880s. In the 1920s and 1930s, several lists were published in both Europe and the United States, and in 1930, the USSR Ministry of Labor issued what was probably the first official list (Cook 1987, pp. 9–10). In 1946, the American Conference of Governmental Industrial Hygienists published the first edition of their list. This list, which is revised yearly, has long been a standard reference for official lists all over the world. Since the 1970s, most industrialized countries have had their own lists of occupational exposure limits. Exposure limits for ambient air were introduced in the same period (Greenbaum 2003). In food safety, exposure limits were introduced in the 1960s under the name of “acceptable daily intake” (Lu 1988).

The first proposed exposure limit for ionizing radiation was published in 1902. It aimed to protect against the acute effects, but it was based on a rather primitive and unreliable method of measurement. In the 1920s and 1930s, improved methods for dose measurements were developed and put to use in the implementation of more precise exposure limits. In the 1950s, exposure limits were adjusted to take the long-term carcinogenic effects of ionizing radiation into account (Parker 1980, pp. 970–971; Broadbent and Hubbard 1992).

Ideally, one might hope that exposure limits should guarantee safety, in the sense that exposures below the limits impose no risk. Unfortunately, that is often not the case, for three major reasons. First, many standards, in particular those for occupational exposures, result from compromises with economic considerations. This has often led to exposure limits at levels that are known to be associated with occupational disease (Hansson 1997b, 1998a, b; Johanson and Tinnerberg 2019). These risks can be considerable for the average worker, but they are even greater for workers who are particularly sensitive, for instance, due to pregnancy or prior disease (Johansson et al. 2016; Hansson and Schenk 2016). Secondly, long-term effects of chemical exposures are difficult to determine, and some exposure limits that were believed to be safe have later been shown to be unsafe due to previously unknown effects of the substance. One example of this is the drastic reduction of the exposure limit for vinyl chloride from 500 ppm (parts per million) to 1 ppm when the carcinogenicity of this substance was discovered in 1974 (Soffritti et al. 2013). Thirdly, for many carcinogenic substances, it is impossible to determine a risk-free exposure level above zero. The best estimate seems to be that the risk of cancer is proportional to the exposure, which means that every non-zero exposure limit is associated with an implicit level of accepted risk. For all these reasons, exposure limits should not be considered as safe limits. Gains in safety can be expected if exposures are reduced as far below current exposure limits as possible.

Exposure limits are constructed to have a very wide application. Occupational exposure limits for chemical substances apply to all workplaces where the substances are used. Similarly, air quality standards for ambient air apply to outdoor air everywhere in the jurisdiction. This general applicability is unproblematic for health protection if the exposure limit represents a level below which there are no adverse effects on the exposed population. However, if the limit represents a compromise between health protection and economic considerations, then the general applicability tends to lead to exposure limits that are unnecessarily high in many of the places where they apply. This is because economic considerations at the places where exposure reductions are expected to be most costly tend to dominate the standard-setting process. A classic example of this is the exposure limit of 1 ppm for the carcinogenic substance benzene that was adopted by the US Occupational Safety and Health Administration in 1987. Values lower than this were considered infeasible due to excessive compliance costs in the petrochemical, coke and coal industries. However, only 2.2% of the workers exposed to benzene worked in these industries (Rappaport 1993, p. 686). The remaining 97.8% of American workers exposed to benzene, about 230,000 workers, had weaker protection against benzene than would have been economically feasible in their own branches of industry.

It is not unreasonable to ask: If workers in a particular industry have to be exposed to high levels of a hazardous substance, is that really a reason to accept equally high levels in other industries where it would be comparatively easy and inexpensive to comply with a considerably lower exposure limit? An alternative approach in this situation would be to adopt a lower general exposure limit that is realistic in most workplaces, in combination with regularly reviewed, higher exception values for branches of industry that are not yet capable of complying with the general value (Hansson 1998a, pp. 106–109).

Process and Equipment Regulations

Our third group of acceptance principles is process and equipment regulations. Regulations requiring machines to be equipped with certain safety features have a long tradition, in particular in occupational safety. In Britain, the pioneering Factories Act of 1844 already contained stipulations on both equipment and work processes. All mill gearing, as well as certain other moving parts of machines, had to be securely fenced. Furthermore, it was prohibited to use children or young workers to clean the mill gearing while it was in motion (Hutchins and Harrison 1911, pp. 85–87; Tapping 1855, pp. 43–47). This was the beginning of increasingly strict regulations on equipment and processes, which have contributed much to the reduction of many types of workplace injuries. Hand injuries from mechanical power presses are among the best known examples.

Such regulations have been equally important in road vehicle safety. Since its beginnings in the late nineteenth century, the legislation on motor vehicles has developed gradually from an almost exclusive focus on driver behavior to increasingly strict requirements on vehicle construction. For instance, the first British legislation on motor cars was the Locomotives on Highways Act of 1896, in which motor cars were called “light locomotives.” That legislation was focused on the behavior of drivers, who were required to have a license, which could be suspended in case of misconduct. A general speed limit of 14 mph (23 km/h) applied to all “light locomotives.” In the detailed regulations based on the Road Traffic Act of 1930, much more emphasis was put on the construction of vehicles, which were, for instance, required to have an unimpaired view ahead, safety glass in windscreens and rear-view mirrors. This was followed in 1937 by requirements for windscreen wipers and speed indicators (Tripp 1938). In the 1960s, important further steps were taken in many countries towards making vehicles safer (Furness 1978). The United States had an important role in this development. The National Traffic and Motor Vehicle Safety Act of 1966 introduced a new way of thinking about traffic safety. A federal agency, the National Highway Traffic Safety Administration, which is still in operation, was created with the explicit task of making manufacturers produce vehicles with reduced risk of crashes and improved protection of the occupants of the vehicle in case of a collision (Mashaw and Harfst 1987; Blomquist 1988). The general approach taken by the administration was to reduce traffic casualties as much as possible. This was noted by economist Glenn Blomquist in a book criticizing their approach:

Each year the NHTSA [National Highway Traffic Safety Administration] prepares a report on its activities under the Vehicle Safety Act. Each year changes in the number of traffic fatalities and in the fatality rate (per vehicle miles) are described. Some years the fatalities and rates are up and some years the fatalities and rates are down compared to previous years. Every year, however, the implication is the same: the traffic safety problem deserves more attention than ever before. Travel risks are not zero despite effective policy is the contention. The reasoning seems to be if fatalities and rates are up then more aggressive policy is needed to bring them down, and if fatalities and rates are down, then more aggressive policy is needed to reduce them further. (Blomquist 1988, pp. 115–116)

According to Blomquist, this showed that the NHTSA entertained a “risk-free goal.” In his view, such a goal is “unwise and futile” since “no agency will ever have sufficient power or resources to completely control individual behavior” (ibid., p. 115). There is of course another side to this; with a more positive view on zero goals, the NHTSA can instead be described as a forerunner of a modern, more progressive approach to technology improvement.

Today, the development of safety standards for motor vehicles is largely driven through international cooperation, in which the World Forum for Harmonization of Vehicle Regulations has a central role. In this and other areas, technological safety regulations tend to be quite specific on what is required, and best available technology (BAT) clauses are seldom if ever used. However, in areas such as motor vehicle safety where regulators actively follow the technology development in detail, technological improvements can still be introduced by timely amendments of regulations.

Cost-Benefit Analysis

We will now turn to the next main category of aspiration principles, namely, weighing principles. These are principles demanding the weighing of safety objectives against various other objectives with which they may run into conflict, such as ease of work, product quality, environmental protection, productivity, cost containment and economic gains. Most of the discussion has focused on conflicts between safety and economic limitations, but in practice, safety concerns can also clash with various non-economic constraints and objectives. The dominant weighing principle is cost-benefit analysis (CBA). It is a powerful economic tool, but it is based on simplifying assumptions that are far from unproblematic.

The basic idea of cost-benefit analysis is quite simple: In order to compare the advantages and disadvantages of decision alternatives, they are all assigned a monetary value. Suppose that a proposed new road project costs 25 million euros. Furthermore, it is expected to lead to a total reduction in traffic time for all its users of 6,000,000 hours and the loss of four unique local species of hoverflies. We assign the value of 5 euros to each gained hour and the value of 1 million euros for each hoverfly species. If these are the only factors to be taken into account, then the total value of the project is as follows:

$$ 5\times 6{,}000{,}000 - 4\times 1{,}000{,}000 - 25{,}000{,}000 = 1{,}000{,}000\ \textrm{euros}. $$

Since the total value is positive, the analysis recommends that the road be built. A major problem in this example is of course how to determine the economic values of travel time and lost species. Proponents of cost-benefit analysis emphasize that since we do not have unlimited amounts of money, there is no way to avoid weighing non-monetary values against monetary costs. For instance, we take measures to save a hoverfly species if doing so does not cost much, but we will not do it at any price. According to the proponents of cost-benefit analysis, the major difference is that with this method, we make these trade-offs transparently, basing our decisions on known prices rather than unarticulated intuitions. If we use the same monetary values in different decisions, then we can also achieve increased consistency in our decision-making processes.
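
The arithmetic of the road example can be checked with a short script. All figures are the ones given in the text.

```python
# Cost-benefit balance for the hypothetical road project in the text:
# 25 M euros construction cost, 6,000,000 hours saved at 5 euros/hour,
# four lost hoverfly species valued at 1 M euros each.
hours_saved = 6_000_000
value_per_hour = 5             # euros
species_lost = 4
value_per_species = 1_000_000  # euros
construction_cost = 25_000_000 # euros

net_value = (hours_saved * value_per_hour
             - species_lost * value_per_species
             - construction_cost)
print(net_value)  # 1000000 euros: positive, so the analysis recommends building
```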

When cost-benefit analysis is applied to safety decisions, uncertain outcomes will have to be included in the analysis. This is usually done by assigning to each such outcome the best available estimate of its expectation value (probability-weighted value). For instance, suppose that 200 deep-sea divers perform an operation in which the risk of death is 0.001 for each individual. Then, the expected number of fatalities from this operation is 0.001 × 200 = 0.2. If we apply a “value of life” of 3 million euros, then the monetary cost assigned to this series of dives is 0.6 million euros.
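
The expectation-value step in the diving example works out as follows, with the figures given in the text:

```python
# Monetized expected fatality cost for the deep-sea diving example:
# 200 divers, individual death risk 0.001, "value of life" 3 M euros.
n_divers = 200
p_death = 0.001
value_of_life = 3_000_000  # euros

expected_fatalities = n_divers * p_death              # 0.2 expected deaths
monetized_risk = expected_fatalities * value_of_life  # 600,000 euros
print(monetized_risk)
```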

Cost-benefit analysis involves a rather radical simplification of multidimensional real-life problems in order to make them accessible to a transparent and easily manageable one-dimensional analysis. Unsurprisingly, this gives rise to a host of philosophical and interpretational issues (Hansson 2007c). Here, we will focus on four problems that are highly relevant for safety applications, namely, incommensurability, incompleteness, collectivism and complacency.

By incommensurability between two values is meant that they are so different in nature that no translation between them is possible. Probably the most common criticism of cost-benefit analysis is that it violates the incommensurability between human life and money. The assignment of a monetary value to human lives is said to violate the sanctity of life (Anderson 1988; Sagoff 1988; Hampshire 1972, p. 9). One reason why this criticism is so widespread may be that cost-benefit analysts have failed to explain the difference between the calculation values used in their analyses and prices on a market. The assignment of a sum of money to the loss of a human life does not imply that someone can buy another person, or the right to kill her, at that price. A more serious problem may be the arbitrariness of the values used in cost-benefit analyses. Not only do we lack a well-founded answer to what calculation value should be used for the loss of a human life. We also lack definite answers to questions such as how many cases of juvenile diabetes correspond to one death or what amount of human suffering or death corresponds to the extinction of an antelope species. Methods have been developed to determine monetary values for these and other seemingly non-monetary assets, but these methods are all fraught with uncertainty, and none of them has a reasonably sound philosophical foundation (Heinzerling 2000, 2002; Hausman 2012).

By incompleteness is meant in this context that factors that could legitimately have an influence on a decision are left out of the analysis. Even quite extensive cost-benefit analyses of societal projects tend to leave out decision effects that are difficult to express in quantitative terms. This applies, for instance, to risks of cultural impoverishment, social isolation and increased tensions between social strata. Such issues may nevertheless be important considerations for decision-makers. Unfortunately, there is often a trade-off between attempted solutions to the incompleteness problem and incommensurability problems. In order to solve incompleteness, we would have to assign monetary value to additional potential effects, such as social incohesion, which are extremely difficult to monetize. But by doing so, we would aggravate the problem of incommensurability.

The collectivism of standard cost-benefit analysis is a consequence of its aggregation of all effects to a single number, irrespective of whom they accrue to. In our above example of a road project, the reduction in travel time was judged by the total sum for all travellers, 6,000,000 hours. The distribution of these gains has no influence on the analysis. This net gain could, for instance, arise as a result of one million long-distance travellers gaining 7 hours each, whereas each of five thousand local travellers has to spend 200 hours more travelling. This would be very different from a situation where only one hundred thousand travellers were affected, and they all gained 60 hours each (Nordström et al. 2019). Standard cost-benefit analysis makes no distinction between these two situations, since the total net effect on travel time is the same. Even worse, cost-benefit analysis treats serious risks such as death risks in the same way. Distributional issues are simply not part of its standard considerations. This is of course particularly problematic if the benefits and the disadvantages of a project are received by different groups of people.

The complacency induced by cost-benefit analysis consists in its tendency to foster acceptance of those evils that cannot currently be rectified with a positive cost-benefit analysis. For an example, suppose that a country has a large number of unguarded railroad crossings in thinly populated areas. Each year, several fatalities are caused by collisions between trains and vehicles or pedestrians passing one of these crossings. The number of fatalities can be drastically reduced by installing traffic lights and half-barrier gates, operated by the rail traffic control system. However, this would be much too expensive, due to the large number of crossings in places with few road users. The message of a cost-benefit analysis in such a situation would be that the life-saving traffic control system is simply not optimal and cannot be defended. In contrast, the message emerging from Vision Zero or other improvement principles would be that the life-saving system is indeed desirable but cannot be implemented at present, due to other, even more pressing priorities. The latter message has the obvious comparative advantage of being more conducive to cost-reducing innovations and to continued social engagement with the issue.

But as already mentioned, there are other ways to balance advantages against disadvantages. In the next two subsections, we will briefly consider two alternatives to standard cost-benefit analysis.

Individual Cost-Benefit Analysis

In individual cost-benefit analysis, costs and benefits affecting different individuals are not added up. Instead, a separate cost-benefit analysis is made for each individual or, in practice, for each type of concerned individual (Hansson 2004b). For instance, in a road project, separate cost-benefit analyses can be made for categories such as local inhabitants, people driving a private car to and from work on the road and people travelling daily on it in buses. The outcomes of these different cost-benefit analyses will typically differ, and they may even point in different directions concerning the value of the project. This should not be seen as a disadvantage. A necessary first step towards solving conflicts of interest is to recognize them.
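
A minimal sketch of the per-group approach. The groups follow the road example, but all monetary figures are hypothetical, invented only to show how the separate balances can point in different directions:

```python
# One cost-benefit balance per affected group; no summing across groups.
groups = {
    # group: (benefits in euros, costs in euros) -- illustrative figures
    "long-distance drivers": (7_000_000, 1_000_000),
    "local inhabitants":     (  500_000, 2_500_000),
    "bus commuters":         (1_200_000, 1_000_000),
}

balances = {g: b - c for g, (b, c) in groups.items()}
for group, net in balances.items():
    verdict = "gains" if net > 0 else "loses"
    print(f"{group}: {verdict} {abs(net)} euros")
```

Here two groups come out ahead while one loses, so the conflict of interest is made visible instead of being averaged away in a single aggregate number.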

Cost-Effectiveness Analysis

The other, probably more important, alternative to standard cost-benefit analysis is cost-effectiveness analysis (CEA), which compares costs and benefits by calculating cost-effect ratios. For instance, if the desired effect of a technological innovation in motor cars is to reduce the number of fatalities, then the outcome can be reported as the expected cost per life saved by introducing the innovation in question.

Cost-effectiveness analysis is mostly applied to healthcare interventions, where the most commonly used ratios are (i) cost per life-year gained and (ii) cost per quality-adjusted life-year gained. (The number of saved quality-adjusted life-years is the product of the number of saved life-years and a factor that is 1 if these are years lived in good health but smaller if they are years lived with a severe medical condition.) For instance, a French study investigated the costs and effects of smoking cessation counselling and treatment. It showed an average expected cost of less than 4000 euros per life-year gained (Cadier et al. 2016). Other studies of smoking cessation give similar results. This is an unusually low cost for a life-saving medical intervention, and smoking cessation is therefore an unusually cost-effective medical intervention.
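
The two ratios can be written out directly. The input figures below (a programme costing 300,000 that gains 100 life-years at quality weight 0.8) are hypothetical illustrations, not data from the cited studies:

```python
# Cost-effectiveness ratios: cost per life-year and cost per
# quality-adjusted life-year (QALY).
def cost_per_life_year(total_cost: float, life_years_gained: float) -> float:
    return total_cost / life_years_gained

def cost_per_qaly(total_cost: float, life_years_gained: float,
                  quality_weight: float) -> float:
    # QALYs gained = life-years gained x quality weight (1.0 = full health)
    return total_cost / (life_years_gained * quality_weight)

print(cost_per_life_year(300_000, 100))  # 3000.0 per life-year
print(cost_per_qaly(300_000, 100, 0.8))  # 3750.0 per QALY
```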

Cost-effectiveness studies are comparatively uncommon outside of the healthcare sector, but there are plenty of examples showing their usefulness. For instance, in studies intended to guide energy saving in buildings, it is highly useful to calculate the cost per kWh energy saved with different energy efficiency measures (Tuominen et al. 2015). Houseowners and other decision-makers can then obtain maximal energy savings for their money by giving priority to the most cost-effective measures. The cost-effectiveness approach appears to be much more appropriate in this case than a cost-benefit analysis, which would divide the measures into two classes, those approved and those disapproved. Safety measures can also be evaluated in this way. For instance, one study showed that engineering control programmes to reduce silica exposure in workplaces are highly cost-effective; some such measures had a cost of only about USD 110 per quality-adjusted life-year (Lahiri et al. 2005; cf. Tengs et al. 1995).
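
The priority-setting logic of the energy-saving example can be sketched as a greedy allocation: fund the measures with the lowest cost per kWh saved until the budget runs out. The measures and all figures are hypothetical:

```python
# Spend a fixed budget on the most cost-effective measures first.
measures = [
    # (name, cost in euros, kWh saved) -- illustrative figures
    ("attic insulation",  2_000, 20_000),
    ("new windows",      10_000, 25_000),
    ("heat pump",         8_000, 40_000),
]

budget = 10_000
total_saved = 0
# Sort by euros per kWh saved, cheapest savings first.
for name, cost, saved in sorted(measures, key=lambda m: m[1] / m[2]):
    if cost <= budget:
        budget -= cost
        total_saved += saved
        print(f"fund {name}: {saved} kWh saved")

print(total_saved)  # 60000 kWh within the 10,000-euro budget
```

A cost-benefit analysis would only label each measure approved or disapproved; the ranking by cost-effectiveness additionally tells the decision-maker what to do first.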

Cost-effectiveness analysis is eminently useful when a proposed safety measure has to be evaluated in terms of its effects on safety and its cost, and no additional factors need to be taken into account. The application of this method is much less clear-cut if there are also other effects, say effects on the environment, that have to be taken into account. It can also be difficult to apply if the safety measure is an integrated part of some larger project. For instance, if a new road is built to replace an old unsafe road with too little capacity, then it is usually not possible to divide up the project costs between costs for increased safety and costs for increased capacity. But in the cases with well-defined costs for safety, cost-effectiveness analysis has distinct advantages over cost-benefit analysis and should probably be used more often.

Hypothetical Retrospection

Safety management is largely a matter of giving sufficient weight to untoward events that might happen in the future. A basic type of reasoning to that effect is the “foresight argument” (Hansson 2007b, p. 147). It urges us to take into account the possible effects of what we do now on what can happen later. The argument has both a deterministic and an indeterministic variant. As an example of the deterministic variant, some of the consequences of drinking excessively tonight can, for practical purposes, be regarded as foreseeable. As an example of the indeterministic variant, driving drunk substantially increases the risk of causing an accident, but of course, there is also a considerable chance that nothing serious will happen. Nevertheless, the increased risk is reason enough not to drink and drive.

The indeterministic variant of the foresight argument is highly useful for thinking about safety. It requires that we think through the various ways in which the future can develop and pay special attention to those “branches” of future development in which things go seriously wrong. It can therefore be described as the very antithesis of wishful thinking. Its purpose is to ensure, as far as possible, that whatever happens in the future, it will not give us reason to say that what we do now was wrong. To achieve this, we can systematically consider what we plan to do now from alternative future perspectives. This way of thinking is called hypothetical retrospection.

This may seem difficult and perhaps overly abstract, but it is in fact a way of thinking that we teach our children when trying to help them become responsible and thoughtful persons. “Do not leave all that homework to tomorrow! You know very well how you will feel tomorrow if you do so.” “Save some of the ice-cream for tomorrow. You know that you will regret it if you don’t.” And of course, as grown-ups, we sometimes wish that we had been more proficient at “thinking ahead” about various matters. To apply hypothetical retrospection in safety management means to methodically develop and apply that way of thinking in one’s area of professional responsibility.

The following example illustrates what this can mean in practice:

A factory owner has decided to install an expensive fire alarm system in a building that is used only temporarily. When the building is taken out of use, the fire alarm has never been activated. The owner may nevertheless consider the decision to install it to have been right, since at the time of the decision other possible developments (branches) had to be considered in which the alarm would have been life-saving. This argument can be used not only in actual retrospection but also, in essentially the same way, in hypothetical retrospection before the decision. Similarly, suppose that there is a fire in the building. The owner may then regret that she did not install a much more expensive but highly efficient sprinkler system. In spite of her regret, she may consider the decision to have been correct since when she made it, she had to consider the alternative, much more probable development in which there was no fire but the cost of the sprinklers had made other investments impossible. (Hansson 2013a, pp. 68–69; cf. Hansson 2007b, pp. 148–149)

For most practical purposes, the application of hypothetical retrospection in safety management consists in following a simple rule of thumb: “Make a decision that you can defend also if an accident happens.” The application of this principle will typically support strivings for risk reduction, and it is therefore concordant with zero goals and other improvement principles.


In this section, we have studied various aspiration principles, other than the improvement principles that were the topics of the two previous sections. Some of the principles discussed in this section tend to run into conflict with the improvement principles, since they support acceptance of conditions in which safety can still be improved. This applies in particular to cost-benefit analysis, risk limits and exposure limits. These principles share a major problem: they tend to support the presumption that the compromises that have been made (perhaps for good reasons) between safety and other social goals represent a satisfactory state of affairs, thus downplaying the need for future enhancements that go beyond them.

On the other hand, two of the aspiration principles that we have studied in this section, namely, cost-effectiveness analysis and hypothetical retrospection, are easily compatible with the improvement principles. Cost-effectiveness analysis, in particular, can serve as a priority-setting tool to support the application of Vision Zero, continuous improvement and other improvement principles. In a situation with limited economic resources, cost-effectiveness analysis can act as a pathfinder, helping to identify the largest improvements in safety that can be achieved as the next step.

Error Tolerance Principles

Two of the most important insights in safety engineering and safety management are that things go wrong and that humans make mistakes, however much we try to avoid them. Therefore, it is not sufficient to reduce the risk of failures as much as we can. We also have to ensure that the consequences of failures are as small as possible. This is not a new insight. More than 500 years ago, Leonardo da Vinci (1452–1519) wrote as follows in one of his notebooks:

In constructing wings one should make one cord to bear the strain and a looser one in the same position so that if the one breaks under the strain the other is in position to serve the same function. (Hart 1962, p. 321)

Some of the most important safety principles recommend that equipment, procedures and organizations be so constructed that failures have as small negative consequences as possible. We can call them error tolerance principles. In this section, we will have a close look at six such principles: fail-safety, inherent safety, the substitution principle, safety factors, multiple safety barriers and redundancy.

Fail-Safety

A piece of equipment or a procedure is fail-safe if it can “fail safely,” which means that the system is kept safe in the case of a failure. Fail-safety can refer to two types of failure: device failure and human failure. The requirements of a fail-safe system have been usefully summarized as follows:

The basic philosophy of fail-safe structures is based on:

  (i) the acceptance that failures will occur for one reason or another despite all precautions taken against them.

  (ii) an adequate system of inspection so that the failures may be detected and repaired in good time.

  (iii) an adequate reserve of strength in the damaged structure so that, during the period between inspections in which the damage lies undetected, ultimate failure of the structure as a whole is remote. (Harpur 1958)

The safety valve is a classic example of a design that makes a system fail-safe. Safety valves are mounted on pressure vessels in order to prevent explosions. (Other means to achieve the same effect are rupture disks, also called burst diaphragms, which act as one-time safety valves, and leak-before-burst design, by which is meant that a crack will give rise to pressure-releasing leakage rather than an explosion.) The origin of the safety valve is not known with certainty, but it is usually credited to the French physicist and inventor Denis Papin (1647–1713), in whose book from 1681 on pressure cookers it was first described (Papin 1681, pp. 3–4; Stuart 1829, p. 84; Le Van 1892, pp. 10–11). In the eighteenth century, safety valves became a standard feature of steam engines. However, enginemen soon found that they could be used to control the machine. Safety valves were frequently tied down or loaded with heavy objects in order to increase the working pressure. These work practices resulted in serious accidents (Hills 1989, p. 129). To prevent such calamities, engine makers provided steam engines with two safety valves. One of them could be operated by the enginemen, whereas the other was inaccessible to them. It could, for instance, be contained in a locked, perforated box. This was common practice at the beginning of the nineteenth century (Partington 1822, pp. 80, 88, 90, 98, 100, 106, 107–108, 109, 114, 115, 116, 122 and Appendix, p. 76). In 1830, an American railway company applied it as a safety rule for their locomotives:

There must be two safety valves, one of which must be completely out of the reach or control of the engine man. (Thomas 1830, p. 373)

Notably, the requirement of a tamperproof safety valve is an early example of a construction tailored to protect not only against machine failures but also against mistakes by the operators.

Another classic example of a fail-safe construction is the so-called dead man’s handle (dead man’s switch), a control device that has to be pressed continuously in order to keep a machine going or a vehicle moving. The term “dead man’s handle” appeared in an American engineering magazine as early as 1902. The author emphasized that the motorman could only drive the train if he held the handle at all times “and should he drop dead or become disabled, the train will stop of itself, and will not run wild” (Anon. 1902). (However, the handle might not be released if the driver fell over it. Therefore, more advanced vigilance systems are used in modern trains.) Today, similar mechanisms can be found on lawnmowers and on handheld machines such as drills and saws.

A similar mechanism, triggered by device failure rather than human failure, was introduced in the early 1850s by the American inventor Elisha Otis (1811–1861) in his so-called safety elevator. The elevator car was equipped with brakes that automatically gripped the vertical guide rails if the tension of the cord was released, for instance, in the event of a cord break. This invention made elevators safe enough for general use, and it was one of the technical preconditions for the building of skyscrapers that began in the 1880s.

A fail-safe system should go to a safe state in the event of failure. However, technical systems differ in what that safe state is. Trains, lawnmowers, elevators and handheld electric drills can be made safe (or as safe as possible) by being stopped. In all these cases, fail-safety is achieved with a negative feedback that stops movement in the system if a failure occurs. The same applies to a nuclear reactor, in which dangerous conditions should lead to an automatic shutdown. In all these cases, the system is fail-safe if it is fail-passive (fail-silent) (Hammer 1980, p. 115). However, there are also technical systems in which safety requires normal operations to continue as long as possible even in the event of failure. This applies, for instance, to airplanes. In such cases, a fail-safe system should be fail-operational (fail-active). This is achieved if the device is sturdy enough to fulfil its function for a sufficient time after it is damaged. This is called fault tolerance (damage tolerance) and is often achieved with the help of safety factors or with redundancy, i.e. the duplication of vital components or functions.

Several alternative terms are used for fail-safety when it is primarily aimed to protect against human failures. A system is said to be foolproof (idiot-proof) if nothing dangerous happens when it is used incorrectly, tampered with or used in unintended ways. Design making a system foolproof is often called defensive design. The Japanese term poka-yoke means mistake-proof. It is often used about constructions that prevent human mistakes from leading to product defects, rather than to safety problems.

In 1974, W.C. Clark proposed a distinction between the two terms safe-fail and fail-safe (Jones et al. 1975, p. 1n.). According to this proposal, “fail-safe policy strives to assure that nothing will go wrong,” whereas “safe-fail policy acknowledges that failure is inevitable and seeks systems that can easily survive failure when it comes” (Jones et al. 1975, p. 2). However, these definitions do not correspond to common linguistic practice. The term “safe-fail” is seldom used, and what Jones and co-workers called by that name is usually called “fail-safe.” What Jones and co-workers called “fail-safe” is designated by other terms, such as “inherent safety.”

Inherent Safety

By inherent safety is meant that untoward events are eliminated or made impossible. This contrasts with fail-safety, which reduces the negative effects of untoward events, rather than preventing them from happening. For a simple example, consider a process in which inflammable materials are used. If we replace them by non-inflammable materials, then we have achieved inherent safety. If we still have them but have reduced the consequences of a fire, for instance, by keeping them in containers at safe distance from all buildings, then we have achieved fail-safety.

This distinction has a long tradition. Around 1950, it became common to use the term “primary prevention” for measures against a disease that have the effect of “keeping it from occurring” and “secondary prevention” for “halting the progression of disease after early diagnosis” (Sabin 1952, p. 1270). These terms were soon adopted in accident prevention. In an article in an international road safety journal in 1961, the influential Norwegian civil servant Karl Evang wrote that the concepts of primary prevention (prevention of occurrence) and secondary prevention (prevention of progress) “have now been generally accepted in the field of preventive medicine.” He proposed that they should also be used in the area of traffic safety (Evang 1961, p. 42n).

“Primary prevention” is essentially a synonym of “inherent safety” and “secondary prevention” a synonym of “fail-safety.” The phrase “inherent safety” has been used at least since the 1920s (Bouton 1924), but it acquired its modern sense in the discussions that followed after the disastrous explosion in a chemical plant in Flixborough in June 1974, which caused the death of 28 persons and seriously injured 36. Trevor Kletz (1922–2013), a chemist working for one of the large chemical companies, showed that the accident would not have reached its catastrophic proportions if simple measures had been taken to reduce the hazards. Perhaps most notably, large quantities of inflammable chemicals had been stored close to occupied buildings. Based on these tragic experiences, Kletz proposed that whenever possible, the chemical industry should eliminate hazards rather than just try to manage them. He originally used the term “intrinsic safety” for this concept but soon replaced it by “inherent safety” (Kletz 1978). Four major types of measures are included in the concept of inherent safety that he and other safety professionals in the chemical industry have developed (Khan and Abbasi 1998; Bollinger et al. 1996):

  • Minimize (intensify): use smaller quantities of hazardous materials

  • Substitute: replace a hazardous material by a less hazardous one

  • Attenuate (moderate): use the hazardous material in a less hazardous form

  • Simplify: avoid unnecessary complexity in facilities and processes, in order to make operating errors less likely.

Full inherent safety, i.e. total absence of hazards, is seldom if ever achievable. Therefore, it is well advised to avoid the absolute term “inherently safe” and instead refer to “inherently safer” technologies and procedures. For instance, it may be impossible to eliminate an explosive reactant. Usually, it is nevertheless possible to substantially reduce the hazard it gives rise to by drastically reducing the inventories of the substance. One way to do this is to produce the substance locally in a continuous process. In terms of the four above-mentioned strategies, this means that minimization is chosen instead of substitution.

The disaster in a chemical factory in Bhopal, India, in 1984, illustrates this. With an official death toll of 2259, it is the largest accident in the history of the chemical industry, and it has also been called “the worst example of an inherently unsafe design” (Edwards 2005, p. 91). Methyl isocyanate, the substance that caused the calamity, was an intermediate that was stored in large quantities (ibid.). The final product could have been obtained from the same raw materials via an alternative chain of reactions in which methyl isocyanate is not produced. This and other alternative processes should have been considered. Even if a process involving methyl isocyanate was chosen, storage of large quantities of the substance could and should have been avoided.

In general, solving a problem with inherent safety is preferable to relying on interventions at later stages in a potential chain of events leading up to an accident. A major reason for this is that as long as a hazard still exists, it can be activated by some unanticipated triggering event. Even with the best of control measures, some unforeseen event can give rise to an accident. Even if a dangerous material is safely contained in the ordinary process, there is always a risk that it will escape, for instance, due to a fire, an uncontrolled chemical reaction, sabotage or an unusual mistake (Hansson 2010). Even the best add-on safety technology can fail or be destroyed in the course of an accident. An additional reason is that inherent safety is usually more efficient against security threats than fail-safety. Add-on safety measures, which are typically required for fail-safety, can often easily be deactivated by those who wish to do so. When terrorists enter the plant with the intent to blow it up, it does not matter much if all ignition sources have been removed from the vicinity of explosive materials. They will bring their own ignition source. Similarly, even if a toxic substance has been securely contained in a closed process, they can usually find ways to release it. In contrast, if explosive and toxic substances have been removed or their quantities drastically reduced, then the plant is safer, not only against accidents but also against wilfully created disasters.

Safety measures based on the ideas of inherent safety have contributed much to reducing hazards in the chemical industry (Hendershot 1997; Overton and King 2006). However, several commentators have complained that progress in the implementation of inherent safety is too slow (Kletz 2004; Edwards 2005; Srinivasan and Natarajan 2012). Indeed, 24 years after the Bhopal accident, investigations of a fatal accident at a chemical plant in West Virginia revealed considerable safety problems in the plant. A large inventory of methyl isocyanate, up to 90,000 kg, was stored at the plant. Luckily, no detectable release of the substance took place in the 2008 accident. (The death toll of the Bhopal accident was due to release of 47,000 kg of the same substance.) Inherently safer alternatives to this massive storage of the substance had previously been considered but had been rejected as too expensive (Ogle et al. 2015).

It has often been proposed that the ideas of inherent safety should be exported to other industries, including mining, construction and transportation (Gupta and Edwards 2003). However, the only other industry in which inherent safety has a major role is the nuclear industry. Much effort has been devoted to developing nuclear reactors that are inherently safer than those currently in use. By this is meant that even in the case of failure of all active cooling systems and complete loss of coolant, fuel element temperatures should not exceed the limits below which most radioactive fission products remain confined within the fuel elements (Elsheikh 2013; Adamov et al. 2015).

Several authors have discussed the application of inherent safety to the construction of road vehicles and infrastructure. Inherent safety is often mentioned as a means to make progress towards Vision Zero for traffic safety. One recurrent idea is that speeds should be kept at levels low enough for the inherent safety of the system to prevent serious accidents (Tingvall and Haworth 1999; Khorasani-Zavareh 2011; Hakkert and Gitelman 2014). Arguably, much ongoing work in the construction of safer road vehicles can be described as applications of the basic principles of inherent safety. However, contrary to the literature on chemical and nuclear engineering, the technical literature on vehicle safety seldom refers to the notion of inherent safety.

The Substitution Principle

As we saw in the previous subsection, the substitution of hazardous substances by less dangerous ones is one of the major methods to achieve inherent safety. Independently of the inherent safety principle, a “substitution principle” has gained prominence in chemicals policy. The substitution principle requires the replacement of toxic chemicals by less dangerous alternatives. According to most versions of the principle, the replacement may be either another chemical or some non-chemical method to achieve the same or a similar result. The earliest example on record of a general rule requiring such substitutions seems to be a paragraph in the Swedish law on workplace health and safety from 1949:

A poisonous or otherwise noxious substance shall be replaced by a non-toxic or less harmful one whenever this can reasonably be done considering the circumstances. (Svensk författningssamling 1949, p. 401)

A special “substitution principle” for hazardous chemicals was introduced into the European health and safety legislation in 1989 (European Union 1989). It states that the employer has to implement preventive measures according to a series of “general principles of prevention,” one of which is “replacing the dangerous by the non-dangerous or the less dangerous” (European Union 1989, II.6.2). Substitution was also emphasized as a major risk-reducing strategy in the discussions in the 1990s that led up to a new European chemicals legislation (Sørensen and Petersen 1991; Antonsson 1995). The European Commission’s 2001 White Paper recommended “the substitution of dangerous by less dangerous substances where suitable alternatives are available” (European Commission 2001). Following this, a substitution principle was integrated into the European chemicals legislation (the REACH legislation), which was adopted in 2006. The substitution principle has also had an important role in various projects for chemical safety promoted by both government agencies and industrial companies (Lissner and Romano 2011; Hansson et al. 2011). Increasingly, the substitution principle has become associated with the movement for green chemistry, i.e. chemical engineering devoted to developing less hazardous chemical products and processes (Fantke et al. 2015; Tickner et al. 2019).

Decisions based on the substitution principle are often hampered by lack of reliable knowledge on the effects of both the chemicals currently in use and their potential alternatives (Rudén and Hansson 2010). Due to incomplete or inaccurate information, attempts to apply the substitution principle have sometimes led to the replacement of an unsafe product by another product that is in fact no better:

The chemical trichloroethylene (TCE), a volatile organic chemical, was widely used as a degreaser in the manufacture of electronic circuits and components until concerns about TCE’s environmental effects led the industry to replace it with trichloroethane (TCA), which has similar chemical structure. TCE and TCA were among the most widely used industrial degreasers, and they are now found in many of the hazardous cleanup sites listed on the National Priorities List. TCA, in turn, was replaced as a degreaser by chlorofluorocarbons such as Freon when ozone depletion concerns were raised about TCA in the 1990s. The use of Freon as a chemical degreaser was eventually phased out due to its own health and environmental concerns. Now, new mixtures of solvents are being used in vapor degreasing. (Bent 2012, pp. 1402–1403)

Generally speaking, the difficulties in assessing health risks and environmental risks are larger for chemical substances than for most other sources of potential hazards (Rudén and Hansson 2010). Therefore, a double strategy for chemical safety is advisable: Systematic work to replace hazardous substances and processes by less hazardous alternatives needs to be combined with equally methodical endeavours to reduce emissions and exposures.

In applications of the substitution principle, priority is usually given to substituting the most hazardous products and processes, but there is no predetermined level of risk below which further substitutions to even less perilous substances and methods are considered unnecessary. Furthermore, the principle is not “subordinated to purely economical considerations” (Szyszczak 1992, p. 10). This is in line with the above-mentioned European health and safety legislation from 1989, which says the following:

The employer shall be alert to the need to adjust these measures to take account of changing circumstances and aim to improve existing situations. (European Union 1989, II.6.1)

Notably, this requirement is not restricted to companies in which existing conditions are below a certain standard. With this interpretation, the substitution principle is well in line with improvement principles such as Vision Zero and continuous improvement. It can also, with this interpretation, be classified as an improvement principle (Hansson 2019).

However, the substitution principle has sometimes been interpreted in ways that weaken its effects. In particular, high demands on the functionality of the replacement can sometimes block health and safety improvements. For instance, the substitution principle has been defined as “the replacement of a substance, process, product, or service by another that maintains the same functionality” (UK Chemicals Stakeholder Forum 2010). This would mean that a substitution can only be required if the replacement functions at least as well as the harmful substance that one wishes to avoid. To mention just one example, it would imply that a company using a highly toxic metal degreaser could not be required to substitute it by something less dangerous if the best replacement would require a small increase in the time that the metal parts have to be immersed in the solvent. With such an interpretation, the substitution principle would lose much of its effect (Hansson et al. 2011).

Safety Factors

A safety factor is a numerical factor (i.e. a number) that is used as a rule of thumb to create a margin to dangerous conditions. The most common uses of safety factors are in structural mechanics and in toxicology. In structural mechanics, to apply a safety factor x means to make a component x times stronger than what the predicted load requires. In toxicology, to apply a safety factor x means to only allow exposures that are at least x times smaller than some dose believed to be barely safe.

Safety factors provide a safety reserve, i.e. a distance or difference between the actual conditions and the conditions expected to cause a failure. You introduce a safety reserve if you hang your child’s swing with a stronger rope than what you actually believe to be necessary to hold a person using the swing. If you do this intuitively, the safety reserve is non-quantitative. If you ask the shop attendant for a rope that holds three times the highest load you expect, then your safety reserve is quantitative and expressible as a safety factor of three.

Non-quantitative safety reserves have been used in the building trades since prehistoric times (Randall 1976; Kurrer 2018). The early history of safety factors does not seem to have been written before, and a brief account will therefore be given here. The earliest record of a quantitative safety factor may be a letter written in March 1812 by the English inventor Richard Trevithick (1771–1833), where he described how he used what we would today call a safety factor of 4 in the testing of steam engines:

To prevent mischief from bad castings, or from the fire injuring the surface of cast iron, I make the boilers of wrought iron, and always prove them with a pressure of water, forced in equal to four times the strength of steam intended to be worked with. (Trevithick 1872, p. 14)

The use of a safety factor for steam pressure seems to have been a common practice in Britain in the early nineteenth century. In his book on steam engines from 1822, the British science writer Charles Frederick Partington referred to four engine makers who all recommended the practice. However, they had widely different views on what an appropriate safety factor should be. One said that steam engines should be tested at 2 to 3 times higher pressure than the intended work pressure, another recommended 10 to 12 times higher pressure, a third 14 to 20 times higher, and a fourth 50 times higher (Partington 1822, pp. 109, 112, 113, and Appendix, p. 76). We can conclude that the notion of a safety factor was well known among engine makers at this time, although they neither had a name for it nor a common view on its value.

In his 1827 book on steam engines, the influential English civil engineer Thomas Tredgold (1788–1829) referred to the “excess of strength” that is required in a boiler. Although he wrote only five years later than Partington, he reported a consensus in the matter: “it has been almost universally allowed, that three times the pressure on the valve in the working state, should be borne by the boiler without injury.” However, he was critical of that consensus. He proposed that the factor of 3 could be lowered to 2 for “ordinary low-pressure steam boilers,” whereas high-pressure boilers required higher factors, depending on their construction (Tredgold 1827, pp. 257–258). His statement that a factor of 3 was customary is confirmed in a call for tenders for new locomotive steam engines that was sent out in 1830 by an American railway company. They stated that they considered themselves at liberty to put the engine “to the test of a pressure of water, not exceeding three times the pressure of the steam intended to be worked, without being answerable for any damage the machine may receive in consequence of such test” (Thomas 1830, p. 373).

In his three-volume book on bridge-building, published in 1850, the English civil engineer Edwin Clark (1814–1894) reproduced a text from 1846 describing the construction of a bridge such that “its breaking-weight is seven times as great as any weight with which in practice it can ever be loaded.” He called this number a “factor of safety” and discussed how it should be used in calculations, given that the bridge’s own weight had to be taken into account (Clark 1850, pp. 514–515). He reported that the famous Scottish engineer Robert Stevenson (1772–1850) favoured a factor of 7. This gives the impression that the use of safety factors was well established at the time, not only in boilermaking but also in civil engineering. Its earlier background in civil engineering remains to be investigated.

In 1859, the Scottish engineer and physicist William Rankine (1820–1872) published a table of “factors of safety” for different materials. This factor was, essentially, a ratio between breaking load and working load. He recommended safety factors of 10 for timber, 8 for stones and bricks and between 4 and 8 for different types of iron and steel (Rankine 1859, p. 65).

In 1873, the American engineer Barnet Le Van wrote a report to the Franklin Institute on a boiler explosion in Pennsylvania that had killed 13 persons and wounded many more. He concluded that proper maintenance and regular examination and testing of the boiler, in accordance with well-established routines, would have prevented the accident. However, he also had a more general conclusion:

In conclusion, I would call the attention of the Institute to the factor of safety for boilers as being entirely too low. The great number of disastrous explosions that have lately occurred in different parts of the country are the best evidences of the fact. The Bridge Engineers have long since come to this conclusion, and have fixed their factor of safety at one-eighth the ultimate value of the material. (Le Van 1873, p. 253)

The value of the safety factor for boilers that he criticized is not mentioned in his text, but it may well have been 3.

Today, safety factors are almost ubiquitous in engineering design. It is generally agreed that their main purpose is to compensate for five major sources of error in design calculations (Knoll 1976; Moses 1997):

  1. Higher loads than those foreseen

  2. Worse properties of the material than foreseen

  3. Imperfect theory of the failure mechanism in question

  4. Possibly unknown failure mechanisms

  5. Human error (e.g. in design)

In toxicology, the first proposal to apply safety factors seems to have been Lehman and Fitzhugh’s proposal in 1954 to calculate the Acceptable Daily Intake of food additives by dividing the highest dose (in milligrams per kilo body weight) at which no effect had been observed in animals by 100 (Dourson and Stara 1983). Today, safety factors are essential components of regulatory food toxicology. They are also widely used in ecotoxicology. In both these applications, it is common to construct an overall safety factor by multiplying several safety factors for various uncertainties and variabilities. Thus, the traditional 100-fold factor is commonly accounted for as a combination of a factor of 10 for interspecies variability (between experimental animals and humans) in response to toxicity and another factor of 10 for intraspecies variability (among humans). In more recent approaches, toxicological safety factors often incorporate additional subfactors, referring, for instance, to differences between experimental and real-life routes of exposure, extrapolation from short-term experimental to life-long real-life exposures, and deficiencies in the available data (Gaylor and Kodell 2000). However, consistent use of safety factors has not been introduced into the process of setting occupational exposure limits. That area is still dominated by case-by-case compromises between health protection and economic considerations, often resulting in exposure limits at levels where negative health effects are expected (see section “Exposure Limits”).
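The multiplicative construction of toxicological safety factors can be shown in a few lines. The subfactors below follow the traditional scheme (10 for interspecies and 10 for intraspecies variability); the NOAEL value is a made-up example, not a real regulatory figure:

```python
def acceptable_daily_intake(noael_mg_per_kg: float, subfactors) -> float:
    """ADI = NOAEL divided by the product of all safety subfactors
    (Lehman-Fitzhugh style; the traditional overall factor is 100)."""
    overall = 1.0
    for factor in subfactors:
        overall *= factor
    return noael_mg_per_kg / overall

# Hypothetical additive: no observed effect at 50 mg/kg body weight in
# animal studies; apply the two traditional 10-fold subfactors.
adi = acceptable_daily_intake(50.0, [10, 10])
print(adi)  # 0.5 mg per kg body weight per day
```

Additional subfactors of the kind mentioned above (route of exposure, duration extrapolation, data quality) would simply be appended to the list, shrinking the ADI further.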

Since the 1990s, the use of safety factors in both structural engineering and toxicology has been criticized by scientists who want to replace them by calculated failure probabilities. However, in practice, the safety factor approach is still dominant, and it has only rarely been replaced by probability calculations. One reason for this is that probabilistic calculations are often much more complicated and time-consuming than the use of safety factors. Another reason is that meaningful probabilities are not available for some of the potential failures that safety factors are intended to protect against. In structural mechanics, this applies, for instance, to unknown failure mechanisms and imperfections in the calculations. In toxicology, it applies to unknown metabolic differences between species and unknown effects only occurring in parts of the human population (Doorn and Hansson 2011).

Multiple Safety Barriers

When several measures are employed to improve safety, they can often be perceived as a chain of safety measures or, as they are then often called, a chain of safety barriers. Each of these barriers should be as independent as possible of its predecessors in the sequence, so that if the first barrier fails, then the second is still intact, etc. The use of multiple barriers is often advisable even if the first barrier is strong enough to withstand all foreseeable strains and stresses. The reason for this is that we cannot foresee everything. If the first barrier fails for some unforeseen reason, then the second barrier can provide protection.
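The rationale for independent barriers can be put in terms of probabilities: if the barriers fail independently of each other, the chance that all of them fail is the product of their individual failure probabilities, which shrinks rapidly with each added barrier. A back-of-the-envelope sketch, with invented numbers:

```python
from math import prod

def prob_all_barriers_fail(failure_probs) -> float:
    """Under the idealization of fully independent barriers,
    an accident requires every barrier in the chain to fail."""
    return prod(failure_probs)

# One barrier failing once in a hundred demands gives risk 0.01;
# three such barriers, if truly independent, give about one in a million:
print(prob_all_barriers_fail([0.01, 0.01, 0.01]))  # on the order of 1e-06
```

The idealization of full independence is exactly what common-cause failures, discussed at the end of this chapter, undermine.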

The archetype of multiple safety barriers is an ancient fortress. If the enemy manages to pass the first wall, then there are additional layers that protect the defending forces. This is an age-old practice. As early as 3200 BCE, the Sumerian town Habuba Kabira (now in Syria) was surrounded by double walls (Keeley et al. 2007, p. 86). Double and triple walls were also erected around other major cities in the ancient Near East (Mielke 2012, p. 76). In the early Iron Age (around 450 BCE), hill forts were built in Britain with up to four concentric ramparts (Armit 2007).

Some engineering safety barriers exhibit the same spatial pattern as the concentric barriers of a fortification. Illustrative examples of this can be found in nuclear waste management. For instance, the nuclear industry in Sweden has proposed that spent fuel from the country’s nuclear reactors should be placed in copper canisters constructed to resist all foreseeable stresses. The canisters will be surrounded by a layer of bentonite clay, intended to protect against movements in the rock and to absorb radionuclides, should they leak from the canisters. This whole construction is placed in deep rock, in a geological formation that has been selected to minimize transportation to the surface of any possible leakage of radionuclides. The idea behind this construction is that the whole system of barriers should have a high degree of redundancy, so that if one of the barriers fails, then the remaining ones will suffice to keep the radionuclides below the surface (Jensen 2017; Lersow and Waggitt 2020, pp. 282–287).

More generally, the safety measures (“barriers”) included in a multiple-barrier system should be arranged in a temporal or functional sequence, such that the second barrier is put to work if the first one fails, etc. The barriers may, but need not, be sequentially arranged in space. The combination of inherent safety and fail-safety can be used as an example of a temporally but not spatially sequential arrangement of barriers. Inherent safety is the first barrier. If it fails, then fail-safety should come in as a second resort. A systematic theoretical discussion of consecutive barriers in safety management was provided by William Haddon (1926–1985). He proposed that the protection against mechanical accidents such as traffic accidents should be conceptualized in terms of four types of barriers:

In the context of the recognition that abnormal energy exchanges are the fundamental cause of injury, accident prevention and hence accident research aimed at prevention are easily sorted into several types, each concerned with successive parts of the progression of events which lead up to these traumatic exchanges. In general, measures directed against accidental or deliberately inflicted injuries attempt: first, to prevent the marshalling of the hazardous energy itself, and second, if this is not feasible, to prevent or modify its release. Third, if neither of these is successful, they attempt to remove man from the vicinity, and fourth, if all of these fail, an attempt is made to interpose an appropriate barrier which will block or at least ameliorate its action on man. (Haddon 1963, p. 637)

For another example, consider a chemical process in which hydrogen sulfide is used as a raw material in the production of organosulfur compounds. Hydrogen sulfide is a deadly and treacherous gas, and it is therefore imperative to protect workers against exposure to it. This can be done with the help of a series of five barriers. The first barrier consists in reducing the use of the substance as far as possible. If it cannot be dispensed with completely, then resort must be had to the second barrier, which consists in encapsulating the process efficiently so that leakage of hydrogen sulfide is excluded as far as possible. The third barrier is careful maintenance, including regular checking of vulnerable details such as valves. The fourth barrier is an automatic gas alarm, combined with routines for evacuation of the premises in the case of an alarm. The fifth barrier is efficient and well-trained rescue and medical services. Importantly, even if the first, second, third and fourth of these barriers have been meticulously implemented, the fifth barrier should not be omitted. Doing so amounts to what we can call the “Titanic mistake.”

The sinking of the Titanic on April 15, 1912, is one of the most infamous technological failures in modern history. The ship was built with a double-bottomed hull that was divided into sixteen compartments, each constructed to be watertight. At least two of these could be filled with water without danger. Therefore, the ship was believed to be virtually unsinkable, and consequently, it was equipped with lifeboats for only about half of the around 2200 persons on-board. This was in line with the regulations at the time, which required lifeboats for only 990 persons on a ship of this size (Hutchinson and de Kerbrech 2011, p. 112). Archibald Campbell Holms (1861–1954), a prominent Scottish shipbuilder (also known as a leading spiritualist), commented as follows on the accident in his textbook on shipbuilding:

As showing the safety of the Atlantic passenger trade, may be pointed out that, of the six million passengers who crossed in the ten years ending June 1911, there was only a loss of six lives. The fact that Titanic carried boats for little more than half the people on board was not a deliberate oversight, but was in accordance with a deliberate policy that, when the subdivision of a vessel into watertight compartments exceeds what is considered necessary to ensure that she shall remain afloat after the worst conceivable accident, the need for lifeboats practically ceases to exist, and consequently a large number may be dispensed with. The fact that four or five compartments were torn open in Titanic, although no longer an inconceivable accident, may be regarded as an occurrence too phenomenal to be used wisely as a precedent in deciding the design and equipment of all passenger vessels in the future. (Holms 1917, p. 374)

Needless to say, this is an unusually clear example of the type of thinking that the concept of multiple safety barriers is intended to overcome. Luckily, most reactions to the accident were wiser than that of Campbell Holms. In consequence of the disaster, maritime regulations for long sea voyages were changed to require lifeboats for all passengers. However, the changes did not apply to shorter sea voyages. As late as the 1960s, a night ferry between Belfast and the English seaport Heysham took up to 1800 passengers but had lifeboats only for 990 (Garrett 2007).


The notion of multiple barriers can be generalized to that of redundancy. By redundancy is meant that safety is upheld by a set of components or processes, such that more than one of them has to fail for conditions to become unsafe (Downer 2011; Hammer 1980, pp. 71–75). The redundant components can be arranged in different ways, for instance, in parallel or consecutively. If the arrangement is consecutive, then we have the special case of multiple barriers. Redundancy with a parallel arrangement can be exemplified by the engine redundancy in aircraft. This means that an airplane can reach its destination, or at least the nearest airport, even if not all the engines are operative (DeSantis 2013). As this example shows, redundancy can be a way to achieve fail-safety.

The major difficulty in constructing redundant systems is to make the redundant parts as independent of each other as possible. If two or more of them are sensitive to the same type of impact, then one and the same destructive force can get rid of them in one fell swoop. For instance, any number of concentric walls around a fortified city could not protect the inhabitants against starvation under siege. Similarly, ten independent emergency lights in a tunnel can all be destroyed in a fire, or they may all be incapacitated due to the same mistake by the maintenance department. The Fukushima Daiichi nuclear accident in 2011 was caused by a natural disaster (an earthquake and its resultant tsunami), which shut down both the reactors’ normal electricity supply and the emergency diesel generators. In consequence, the emergency cooling system did not work, which led to nuclear meltdowns and the release of radioactive material. This would not have happened if the emergency generators had been placed at a higher altitude than the reactors. In general, how much safety is obtained with an arrangement for redundancy depends to a large degree on how sensitive the system is to failures affecting several redundant parts at the same time (“common-cause failures”). Often, safety is better served by few but independent barriers than by many barriers that are sensitive to the same sources of incapacitation.

The quality of redundancy systems is often discussed in terms of diversity and segregation. By diversity is meant that redundant parts differ in their constructions and mechanisms. For instance, in order to avoid dangerously high temperatures in a chemical reactor, we may introduce two temperature guards, each of which automatically turns off the reactor if a certain temperature limit is exceeded. The redundancy obtained by having two instruments is improved if they are of different types. It is also improved if we employ different software for their operations (Vilkomir and Kharchenko 2012). By segregation is meant that redundant components are physically separated from each other. This is done in order to reduce the risk of spatially limited common-cause failures produced, for instance, by fire, explosion, flooding, structural failure or sabotage. Segregation is more easily achieved in large industrial buildings or complexes than in operations with limited space such as ships, offshore platforms and aircraft. However, the principle has been applied with success in the latter types of workplaces as well (Kim et al. 2017).


The various error tolerance principles that we have discussed in this section – fail-safety, inherent safety, substitution, safety factors, multiple safety barriers and redundancy – are all perfectly compatible with Vision Zero and other improvement principles. At least one of them, namely, inherent safety, has also been discussed in connection with Vision Zero. The error tolerance principles can all be seen as means to implement the improvement principles. In general, it is advisable to combine several error tolerance principles, as explained above in the subsections on multiple barriers and redundancy.

Evidence Evaluation Principles

Decisions on safety often have to be based on information that may be difficult to obtain. We may have to ask questions such as: Can this structure sustain the additional load we intend to place on it? Is this chemical exposure hazardous to human health? How reliable is the gas alarm? Sometimes, trustworthy answers to such questions can be obtained, but on other occasions, we have to make decisions based on uncertain or insufficient evidence. This section is devoted to principles for evaluating evidence and making decisions under such conditions. We will begin with the precautionary principle and then discuss three of its alternatives, namely, reversed burden of proof, risk neutrality and “sound science”.

The Precautionary Principle

According to a common misconception, the precautionary principle says that all our decisions should be cautious. On that reading of the principle, we all apply the precautionary principle when we wear a seat belt or have our children vaccinated. But this is not what the precautionary principle means. It is a well-defined principle for the evaluation of evidence, defined in international treaties and also in European legislation. What it means is, essentially, that even if the evidence of a danger is uncertain, we may, and often should, take precautionary measures against it.

This is of course no new way of thinking. Presumably, our ancestors refrained from entering a cave if they heard a suspicious growl from it, even if they were far from convinced that the animal they heard was dangerous. An illustrative, more recent example is the closing of a water pump in London in 1854. In early September that year, the city was struck by cholera, and 500 people died in 10 days. The physician John Snow notified the authorities that according to his investigations, a large number of those affected by the disease had drunk water from a pump on Broad Street. The authorities had no means to verify that this was more than a coincidence. According to the prevalent opinion among physicians, cholera was transmitted through air rather than water. However, although the evidence was uncertain, the authorities decided to have the handle removed from the pump. This had the effect hoped for, and the cholera epidemic was curbed (Snow 2002; Koch and Denike 2009).

The modern precautionary principle had precursors in Swedish and German legislation and in treaties on protection of the North Sea in the 1980s (Hansson 2018b). It rose to international importance through the Rio Declaration on Environment and Development that was a major outcome of the 1992 so-called Earth Summit in Rio de Janeiro:

Principle 15. Precautionary principle

In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation. (United Nations 1992)

The European Union and several of its member states have incorporated the precautionary principle into their legislation. Through the 1992 Maastricht amendments to the European Treaty (Treaty of Rome, now known as the Treaty on the Functioning of the European Union), the precautionary principle was written into European legislation (Pyhälä et al. 2010, p. 206). According to the treaty, union policy “shall be based on the precautionary principle” (European Union 2012). In February 2000, the Commission issued a Communication on the Precautionary Principle that further clarified its meaning:

The precautionary principle is not defined in the Treaty, which prescribes it only once – to protect the environment. But in practice, its scope is much wider, and specifically where preliminary objective scientific evaluation, indicates that there are reasonable grounds for concern that the potentially dangerous effects on the environment, human, animal or plant health may be inconsistent with the high level of protection chosen for the Community…

Recourse to the precautionary principle presupposes that potentially dangerous effects deriving from a phenomenon, product or process have been identified, and that scientific evaluation does not allow the risk to be determined with sufficient certainty.

The implementation of an approach based on the precautionary principle should start with a scientific evaluation, as complete as possible, and where possible, identifying at each stage the degree of scientific uncertainty…

[M]easures based on the precautionary principle should be maintained so long as scientific information is incomplete or inconclusive, and the risk is still considered too high to be imposed on society, in view of chosen level of protection. (European Commission 2000)

It can clearly be seen from this and other official texts that the precautionary principle is a principle for decision-making in situations with uncertainty about a potential hazard. In the United States, the precautionary principle has seldom been invoked by policymakers, but some administrations have taken measures based on similar thinking, using other terms such as “better safe than sorry” (Wiener and Rogers 2002). Even in Europe, the precautionary principle is seldom referred to outside of the policy areas that concern health, safety and the environment. However, similar approaches to evidence are prevalent in a wide range of policy areas although no reference is made to the precautionary principle. For instance, economists commonly agree that action should be taken against a potential financial crisis even in the absence of full evidence that it will otherwise take place. Similarly, a military commander who waits for full evidence of an enemy attack before taking any countermeasures would be regarded as incompetent.

The following two, hypothetical but realistic, examples can serve to illustrate the precautionary principle:

The volcano example

A group of children are tenting close to the top of an old volcano that has not been active for thousands of years. While they are there, seismographs and gas detectors suddenly indicate that a major eruption may be on its way. A committee of respected volcanologists immediately convene to evaluate the findings. They conclude that the evidence is uncertain but weighs somewhat in the direction that a major eruption will take place in the next few days. They unanimously conclude that although the evidence is not conclusive, it is more probable that an eruption is imminent than that it is not. (Hansson 2018b, p. 269)

The baby food example

New scientific evidence indicates that a common preservative agent in baby food may have a small negative effect on the child’s brain development. According to the best available scientific expertise, the question is far from settled but the evidence weighs somewhat in the direction of there being such an effect. A committee of respected scientists unanimously concluded that although the evidence is not conclusive, it is more probable that the effect exists than that it does not. The food safety agency has received a petition whose signatories request the immediate prohibition of the preservative. (Hansson 2018b, pp. 268–269)

As these examples show, there are occasions when we wish to take measures against a possible danger, although the scientific information is not sufficient to establish that the danger is real. At least in the second case, this would (in Europe) be described as an application of the precautionary principle.

However, there are also dangers with basing decisions on less than full scientific evidence. If we give up the scientific basis entirely, then we run the risk of making decisions that have no foundation at all, leaving room for decisions based on prejudice and uninformed suppositions (Hansson 2016, 2018a). It is necessary to ensure that full use is made of the available scientific information even when we are willing to base decisions on incomplete evidence. The following three principles have been proposed as guidelines (Hansson 2008, pp. 145–146). They can be called the principles of science-based precaution in practical decision-making:

  1. The evidence taken into account in the policy process should be the same as in a purely scientific evaluation of the issue at hand. Policy decisions are not well served by the use of irrelevant data or the exclusion of relevant data.

  2. The assessment of how strong the evidence is should be the same in the two processes.

  3. The two processes may differ in the required level of evidence. It is a policy issue how much evidence is needed for various practical decisions.

A Reversed Burden of Proof

In particular in the discussion on chemical hazards, it has often been claimed that the onus of proof should fall on those who claim that a substance can be used without danger, rather than those who wish to restrict its use. This is commonly called the “reversed burden of proof” (Wahlström 1999, pp. 60–61). If by the burden of proof is meant the duty to pay for the required investigations of the effects of a substance, then this is a burden that can and arguably should be borne by those who wish to put the substance on the market. In many jurisdictions, considerable duties of investigation have already been imposed on companies wishing to put a chemical substance on the market. However, in discussions of chemical risks, the term “burden of proof” usually means something else, which is close to what legal scholars call “burden of persuasion”: It is claimed that unless the company in question can prove that the substance is harmless, the substance should not be used. On the face of it, this seems to be an excellent safety principle: We should only use provenly harmless substances. Who can be against that?

Unfortunately, this principle has a fundamental defect: It cannot be realized. It can often be proved beyond reasonable doubt that a substance has a particular adverse effect. However, it is often impossible to prove beyond reasonable doubt that a substance does not have a particular adverse effect, and in practice, it is always impossible to prove that it has no adverse effect at all (Hansson 1997a). The major reason for this is that with respect to serious health effects, we care about risks that are small in comparison to the limits of detection in scientific studies. If we only cared about whether an exposure kills more than one-tenth of the exposed population, then this problem would not arise. But for ethical reasons, we wish to exclude even much lower frequencies of adverse effects.

As a rough rule of thumb, epidemiological studies can reliably detect only excess risks that are about a tenth of the risk in the unexposed population. For instance, suppose that the lifetime risk of a deadly heart attack (myocardial infarction) is 10% in a population. Furthermore, suppose that a part of the population is exposed to a substance that increases this risk to 11%. This is a considerable risk increase, leading to the death of one in a hundred of those exposed. However, even in large and well-conducted epidemiological studies, chances are slim of detecting such a difference between the exposed and the unexposed group (Vainio and Tomatis 1985). There are similar statistical problems in animal experiments (Weinberg 1972, p. 210; Freedman and Zeisel 1988; Hansson 1995).
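
A rough sketch of why such a difference is so hard to detect can be given with the standard normal-approximation formula for comparing two proportions (the formula, significance level and power below are textbook conventions, not figures from this chapter). To distinguish a lifetime risk of 11% in the exposed group from 10% in the unexposed group, a study needs on the order of 15,000 subjects per group:

```python
from math import sqrt

def sample_size_two_proportions(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Approximate number of subjects needed PER GROUP to detect the
    difference between proportions p1 and p2, using the standard
    normal-approximation formula (two-sided alpha = 0.05 for
    z_alpha = 1.96, power = 0.80 for z_beta = 0.8416)."""
    p_bar = (p1 + p2) / 2
    term1 = z_alpha * sqrt(2 * p_bar * (1 - p_bar))
    term2 = z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))
    return (term1 + term2) ** 2 / (p1 - p2) ** 2

n_per_group = sample_size_two_proportions(0.10, 0.11)
# Roughly 15,000 subjects per group, i.e. about 30,000 in total,
# before confounding, exposure misclassification and loss to
# follow-up make matters worse.
```

Since real studies rarely achieve exposed cohorts of that size under well-controlled conditions, a risk increase that kills one in a hundred of those exposed can easily pass undetected, which is the point the text makes.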

The lesson from this is that it is in general impossible to prove that an exposure has no negative effects. Demands for such proofs can be counterproductive since they contribute to the misconception that chemical risks can be eliminated with substance choice, without any measures to reduce exposure. A realistic strategy to minimize chemical risks should be based on a multiple-barrier approach. Appropriate pre-market investigations of substances, constructed to discover negative health effects as far as possible, can serve as a first barrier. However, this has to be followed by other barriers, including measures that reduce exposures as well as check-ups to discover unexpected harmful effects.

Many technological devices are amenable to more reliable pre-market testing than chemical substances. The reason for this is that relevant failure types, such as mechanical and electrical failures, are much better understood than toxicity, which makes more reliable testing possible. (A caveat: This does not always apply to software failures.) However, this does not exclude the need for a “second barrier” in the form of post-marketing follow-ups. Experiences from safety recalls in the motor vehicle, aircraft, toy, food, pharmaceutical and medical device industries show that even in industries with a comparatively high focus on safety, the “first barrier” of pre-market testing and assessment does not always exclude the marketing of unsafe products (Rupp 2004; Bates et al. 2007; Berry and Stanek 2012; Nagaich and Sadhna 2015; Shang and Tonsor 2017; Niven et al. 2020; Johnston and Harris 2019). Proposals have been made to introduce routines for safety recalls in industries still lacking recall traditions, such as the building materials industry (Huh and Choi 2016; Bowers and Cohen 2018; Watson et al. 2019).

Risk Neutrality

Opponents of the precautionary principle have often proposed that it should be replaced by a risk-neutral or, in a common but rather misleading terminology, “risk-based” approach (Klinke et al. 2006, p. 377). By this is meant that risks should be assessed according to their expectation values, i.e. the product of some measure of the potential damage and its probability. This is the way in which risks are assessed in cost-benefit analysis. As we saw above, this is a method with considerable drawbacks. Attempts to use it as a replacement for the precautionary principle will also run into an additional, quite severe problem: The precautionary principle is a principle for the interpretation of uncertain or limited evidence. For that task, meaningful probabilities are usually not available. In practice, “risk-based” decision-making tends to proceed by neglecting uncertainties and only taking known dangers into account.

The following, somewhat stylized, example serves to illustrate the point: Consider two substances A and B, both of which are alternatives for being used in an application where they will leak into the aquatic environment. A has been thoroughly tested and is known to be weakly ecotoxic. It is not known whether B is ecotoxic. (No ecotoxicity was discovered in the standard tests, but due to its chemical structure, some researchers have expressed worries that it may be toxic to other organisms than those included in those tests.) However, B is known to be highly persistent and bioaccumulative. This means that if B is ecotoxic, then it can be highly potent since it will accumulate in biota. The ecological risks of using substance A can be quantified and entered into a cost-benefit analysis and a “risk-based” decision procedure. However, since no meaningful probability can be assigned to the eventuality that B is ecotoxic, we cannot perform a “risk-based” assessment of its potential to harm the environment. Therefore, a “risk-based” assessment will show that A poses an ecological risk, but it will have no risk to report for B. In contrast, an assessment in line with the precautionary principle will focus on the serious but unquantifiable risks that B may give rise to. Thus, in spite of its name, a “risk-based” assessment will in this case tend to downplay risks that are taken seriously if the precautionary principle is applied.
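
The asymmetry in this example can be made explicit in a toy expectation-value calculation (all numbers are invented for illustration). Substance A has a known probability of causing a modest damage, so its expectation value can be computed; for substance B, no probability can be assigned, so a naive “risk-based” procedure effectively enters zero for the unknown term and reports no risk at all:

```python
def expected_damage(probability, damage):
    """Expectation-value ('risk-based') measure of a risk:
    the product of the potential damage and its probability."""
    return probability * damage

# Substance A: well-studied, weakly ecotoxic. Probability and
# damage (in arbitrary units) are assumed known for illustration.
risk_a = expected_damage(0.9, 10)     # quantifiable

# Substance B: possibly highly damaging, but no meaningful
# probability can be assigned. A naive risk-based procedure has
# nothing to multiply the large potential damage by, which in
# practice amounts to entering zero:
risk_b = expected_damage(0.0, 1000)   # "no risk to report"
```

The calculation is of course a caricature, but it exhibits the mechanism the text describes: the unquantifiable, potentially severe risk from B simply drops out of a “risk-based” comparison, whereas the precautionary principle keeps it in view.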

“Sound Science”

If “sound science” means good science, then all rational decision-makers should make use of sound science, combining it with decision criteria that are appropriate for the purposes of the decision. However, in recent discussions, the phrase “sound science” has acquired a different meaning. It was adopted as a political slogan in 1993, when the tobacco company Philip Morris initiated and funded an ostensibly independent organization called The Advancement of Sound Science Coalition (TASSC). Its major task was to promulgate pseudoscience in support of the claim that the evidence for health risks from passive smoking was insufficient for regulatory action (Ong and Glantz 2001). The term “sound science” has also been used in similar lobbying activities against reductions in human exposure to other toxic substances (Rudén and Hansson 2008, pp. 300–301; Samet and Burke 2001; Francis et al. 2006). Considerable efforts have been made to create “sound science” alternatives to the scientific consensus on climate change summarized by the IPCC (Cushman 1998; Boykoff 2007, p. 481; Dunlap and McCright 2010, p. 249; Hansson 2017).

The major effect of the requirements for “sound science” has been to delay and prevent health, safety and environmental regulations by incessantly questioning the evidence on which they are based (Neff and Goldman 2005). Decision-making based on uncertain evidence is consistently repudiated. However, disregarding well-grounded evidence of danger whenever it is not strong enough to dispel all doubts is nothing less than blatantly irrational. Even if you do not know for sure that a dog bites, reasonable suspicions that it does are reason enough to prevent your child from playing with the dog. Scientific evidence of danger should be treated in the same way.


In this section, we have studied four approaches to the evaluation of uncertain evidence. The precautionary principle, interpreted in the science-based way described above, is fully compatible with improvement principles such as Vision Zero, and it can be used to support their implementation. The idea of a reversed burden of proof, in its most common interpretation, is much more problematic. It cannot be implemented in practice, and its promotion tends to support a once-and-for-all approach to chemical safety, rather than a more appropriate multiple-barrier approach. Risk-neutral (“risk-based”) assessments of uncertain evidence are usually not feasible since they require probability values that cannot be obtained. Finally, “sound science,” in the sense that the phrase has acquired through the activities of tobacco lobbyists and their allies, should not be classified as a safety principle. It epitomizes the kind of risk-taking that has always stood in the way of safety.


We began our exploration of safety principles with an overview of how zero goals and targets have been used in widely different areas. We found that strivings for zero of something undesirable have the important advantage of counteracting fatalism and complacency. After that, we broadened our attention to a larger group of safety principles, containing continuous improvement, as low as reasonably achievable (ALARA) and best available technology (BAT). All these principles can be called improvement principles, since they convey the message that no level of risk above zero is fully satisfactory and that consequently, improvements in safety should always be striven for as long as they are at all possible. These principles are all fully compatible with each other, and we can see the different improvement principles as different ways to express the same basic message.

Next, we explored some other principles that tell us what levels of safety or risk reduction we should aim at (aspiration principles). Several of these principles tend to classify some unsafe and improvable conditions as acceptable. Such principles are not easily combined with Vision Zero and other improvement principles. However, we also found that one of these aspiration principles, namely, cost-effectiveness analysis, fits in very well with the improvement principles. Cost-effectiveness analysis can be used to choose the safety measures that yield the largest improvements.

We then turned to the error tolerance principles, which are safety principles telling us that since failures are unavoidable, we have to ensure that the consequences of failures will be as small as possible. We discussed six such principles: fail-safety, inherent safety, substitution, safety factors, multiple safety barriers and redundancy. All of these principles are highly compatible with Vision Zero and other improvement principles. We can see them as complementary strategies for implementing the improvement principles.

Finally, we considered four evidence evaluation principles. One of them, namely, the precautionary principle, is well in line with Vision Zero and the other improvement principles.

Safety work is complex and in need of guidance on many levels. Therefore, we need several safety principles. Vision Zero and other improvement principles can tell us what we should aspire to. Error tolerance principles provide essential insights on the means that can lead us in that direction. We can use cost-effectiveness analysis to prioritize among the measures that are available to us and the precautionary principle to deal with uncertainties in the evidence available to us.