Inga Bolstad is the director general of the National Archives of Norway. In the spring of 2016, she made the decision to terminate a complex and prestigious development project. Considerable resources had been invested to create a common platform for archiving documents for the Norwegian public sector. The overall aim of the project had been to counter what Bolstad calls digital dementia, the forgetting of vital public information regarding taxation, health, education, and other kinds of services offered to citizens and organizations in society. Norway needed an electronic archive for local and central public administrations, and the E-archive project was supposed to provide it. “The time horizon for such a project is, if not eternity, at least a thousand years. Our nation’s common memory depends on a well-functioning digital depot. The stability of our democratic system relies on easy electronic access to documents from the past, and it is our responsibility to build it” (Bolstad, 2017). The stakes were thus high, but the first attempt to come up with a robust and reliable solution failed.

“We took the decision to terminate the project after a meeting with the Digitalization Council, the government appointed unit set up to give advice to public organizations about digital projects. The Council provided constructive criticism regarding what we had done so far and the plans for the further development of the project. Now we realized that it had been wrong to go for one particular alternative from the beginning of the project, since it had made us lose sight of other viable alternatives. Furthermore, we had primarily focused on our own needs and goals, and not taken sufficiently into account those of the people who were supposed to use the system on a daily basis. There was also a lack of properly defined milestones for the project, where we could have taken the temperature on the development and progress. When I entered a meeting with twelve people currently working on the project, and asked them about its purpose and direction, I got twelve significantly different answers. All of this made us understand that the E-archive project was about to become a fiasco, and we decided to stop it. We had failed, and realized that it was best to take a step back and start afresh” (Bolstad, 2017).

The topic of this chapter is the role of failure in innovative processes. A range of studies has focused on experimentation and how organizational structures and incentives should encourage it (Ahuja & Lampert, 2001; Cannon & Edmondson, 2005; Lee, Edmondson, Thomke, & Worline, 2004). With active experimentation comes the risk of failure, and leaders in organizations tend to be reluctant to talk about it, because they assume that failure is bad. That is often a misguided assumption, since failure is an integral part of testing hypotheses about the world, and in experimental explorations to develop new products and services (Edmondson, 2011). Narratives about failure can also be sources of significant organizational learning (Bledow, Carette, Kühnel, & Bister, 2017; Rami & Gould, 2016; Shepherd, Patzelt, & Wolfe, 2011).

When a pilot or a surgeon makes a mistake, it can lead to truly bad and devastating outcomes, but in other organizational settings, failure can often be a welcome dimension of learning and development. In innovation, “failing fast” has become a popular catchphrase, indicating that individuals, groups, and organizations should stop wasting valuable time and resources by remaining loyal to one particular idea. The successful design company IDEO’s slogan is “Fail often in order to succeed sooner,” and other companies are attempting to adopt a similar stance in order to reduce the stigma of failure (Edmondson, 2011).

This chapter explores how learning from failure requires close attention to the distinction between causes of failure and blame for failure. It also identifies and discusses three psychological phenomena that pose a challenge to effective learning from failure. All of them are linked to the communication climate for voicing a concern that the proposed course of action may not, after all, be the best one. First, the sunk-cost fallacy is the tendency we have to follow through on an activity even when it is not meeting our expectations, because of the resources we have already invested in it. Second, research on the bystander effect indicates that the more people who are witnesses to an event that calls for help or some other form of intervention, the less likely it is that anybody will step forward and help or intervene. Third, people are vulnerable to the confirmation fallacy, in that they have a tendency to notice information that is in line with their beliefs and assumptions, and to disregard information that gives them reason to reconsider. These three phenomena are well documented in social psychology, and the aim here is to connect them to challenges regarding fallibility at work. The context in the current chapter is that of innovation and the need to fail fast, but an understanding of the three psychological phenomena is also relevant in situations where it is urgent to speak up about mistakes because they can lead to harm, as in aviation and healthcare, as will be demonstrated in coming chapters.

1 Innovation and Failure

In the aftermath of the termination of E-archive, Bolstad and her organization have received positive responses to the decision, and to their willingness to share the narrative of their failure. The Agency for Public Management and eGovernment in Norway has an annual conference for dwelling on mistakes in the public sector, called Feiltrinn (Misstep). The idea behind it is to create a learning platform for public organizations that are dealing with projects as complex as E-archive and need to identify and learn from mistakes. In December 2016, Bolstad took the stage at the conference to talk about the mistakes in the E-archive project, and how they had affected her organization. Her narrative of failure was highly relevant for the other participants, several of whom worked on other digital projects in the public sector and could easily end up in similar circumstances, having to decide whether or not to stop a project and admit failure.

Bolstad has highlighted the learning aspect of closing down the E-archive project. “We have failed, but the experience made us stronger. We are now an organization where it is acceptable to try, fail, learn, and move on. One other notable thing is that we have become more professional in handling disagreement. That is a prerequisite for open and honest talk about our projects” (Bolstad, 2017). The need to create a digital depot for the public sector in Norway remains, and the current efforts to do so differ from the first attempt in four significant dimensions, in that the project is characterized by:

  • Stronger user orientation, taking into account the needs and competencies of the people who are going to use the system.

  • Multiple alternatives for a solution under consideration from the start, not just one.

  • A communication climate where people are encouraged to voice concerns and disagreements early.

  • Tolerance for failure in the process of developing the alternatives.

What Bolstad describes as the key elements in the work to counter digital dementia overlaps with the main tenets of design thinking, where principles of design are applied to the way people work. This approach focuses on users’ experiences in encounters with technologically complex processes and uses prototypes to explore potential solutions. It is built on the assumption that some alternatives need to fail in order for others to stand out as the better ones. Design thinking has proved to be especially useful in addressing wicked problems (Buchanan, 1992), that is, problems with high levels of complexity and ambiguity. A common aim for such processes is to make the users’ interaction with the technological solutions intuitive and pleasurable. That is the task facing the team in Bolstad’s organization currently working to create a digital archive. At the time of the interview, they had seven active conceptual alternatives, and would eventually converge on one of them for further development and implementation. One of the alternatives was similar to the original, discarded project, but this time it was measured against a range of other viable options.

Tolerance for failure is a dimension of innovative work, since it is rare to get things right the first time (Kolko, 2015), as experienced by Bolstad and her team. In some contexts, what counts as getting things right is quite clearly defined and well understood, while in others, the process may lead to unexpected breakthroughs outside the scope of the original project. Here are four examples of what has been labeled accidental innovation (Austin, Devin, & Sullivan, 2012):

3M attempted to create a super-adhesive that could be used in the construction of planes, and instead ended up with a weak adhesive that was labelled “a solution without a problem”. Employee Arthur Fry heard about the failure, and noticed that pieces of paper with the weak adhesive could be used as bookmarks, since they could be reused and could be peeled away without leaving any marks on the pages. Fry applied for a grant to develop the idea further, and the failed attempt to make super-glue led to the development of the Post-it note. (Brand, 1998; Govindarajan & Srinivas, 2013)

The drug sildenafil citrate was originally intended as a treatment for angina, but turned out to be ineffective for that purpose. Nurses participating in the testing of the drug noted that the patients who took it got penile erections. Their copious notes of side effects from the trials led to the discovery of Viagra. A failed attempt to develop a drug for chest pains thus turned into a successful drug for erection problems. (Cook, 2016)

The Norwegian company Tine tried to develop and manufacture a salami sausage made from salmon. It failed because the customers and the market showed no interest in the salmon salami. The failed sausage was based on a new fermentation technology, which required raw material of exceptional quality to work. The company got this from a salmon provider that had developed a technology for distributing exceptionally fresh filets to the market immediately after the fileting process had taken place. The commercial director realized that it was much easier to sell the raw material (the exceptionally fresh salmon filets) than the salami itself. This product was called Salma, a name originally created for the failed salmon salami sausage, and it turned out to be a great commercial success. (Hoholm, 2011)

One late evening at the restaurant Osteria Francescana, a three-Michelin-star establishment in Modena, Italy, the sous chef prepared the last dessert dish, a lemon tart. On his way out of the kitchen to the guests’ table he dropped the plate: half of the tart ended up on the counter, and half remained on the plate. The sous chef despaired, but the master chef Massimo Bottura saw it as an opportunity to create a new dish. Together they rearranged the lemon tart on the plate and served it as if the broken tart had been the plan all along, calling the dish “Oops! I dropped the lemon tart”. It has since become a signature dish at the restaurant. (Gelb, 2015)

The first, second, and third examples are of innovation processes that accidentally led to the discovery of a different product from the one envisaged by the initiators. The fourth is not an innovation process as such, but rather an accident in the implementation of a creative process. What the four examples have in common is that somebody had an eye for possibilities and was able to turn failure into a surprising success.

2 Beyond Blame

After more than two decades of studying failure, Edmondson (2011) has noted that executives and managers tend to think about it in the wrong way. She believes that the main reason they struggle to think constructively about failure is that they are trapped in a false dichotomy: “How can you respond constructively to failures without giving rise to an anything goes attitude? If people aren’t blamed for failures, what will ensure that they try as hard as possible to do their best work?” (Edmondson, 2011, p. 50). Managers seem to believe that they have to blame and criticize employees who fail, because otherwise the employees will become complacent and think that it does not really matter whether they do the best they can at work.

To move beyond this dichotomy, Edmondson provides a spectrum of reasons for failure, ranging from deliberate deviation at one end to exploratory testing at the other. An act of choosing to violate a process or procedure tends to be blameworthy, as when a flight crew skips parts of procedures before takeoff, or a doctor fails to wash his or her hands properly before treating a patient. These are unwelcome occurrences, and if the manager does not intervene to blame the responsible individuals, it may indeed lead to complacency and an anything goes attitude.

The situation is very different at the other end of the scale, where the aim is to expand knowledge and generate solutions by testing out ideas, to see if they are worth pursuing. Here, a failure can be a welcome event, something that enables the group or organization to move forward with the knowledge that this particular idea did not work. The decision to stop E-archive and start afresh with new ideas can serve as an example of such an event. In the beginning, it can be painful to accept failure, in light of so many hours and so much energy spent to get things right. Gradually that feeling may give way to relief at being able to pursue new directions. Any manager who fails to see the difference between mistakes on opposite sides of the spectrum outlined by Edmondson, and blames employees when things go wrong during experimentation or hypothesis testing, is likely to hamper innovation.

In between the two endpoints of deviance and exploratory testing lie the reasons for failure where it is more difficult to attribute degrees of blame. The root cause of things going wrong may be that the agent is inattentive, lacks ability, or has been given faulty or incomplete instructions about how to act. This can happen in a hospital, when inexperienced doctors or nurses are given tasks at the limits of their current competence. When things go wrong and patients are harmed, it can be difficult to establish whether the cause is primarily a personal mistake on the part of the doctor or nurse, or a systemic one, as when the person should have received better training, instruction, and support from seniors. In such cases, the blame may lie partly with the executives or managers who have put the person in that position, and partly with the person him- or herself, who should have spoken up about competence limitations. One concrete way to respond when personal competence is stretched is to ask for help, a topic explored further in Chapter 6. The main reason for failure may also be that the task itself is difficult, or that the situation is complex and ambiguous. The more the failure can be adequately accounted for by appeal to circumstances, the less room remains for reasonable blame.

Edmondson warns leaders and other decision-makers against entering a blame game in the aftermath of a bad outcome. Many failures in organizations are not truly blameworthy, and when they are nevertheless treated as such, learning is likely to be blocked. Collins (2001) used the term “autopsy without blame” to capture a similar idea. In situations where things do not go well, the organization can analyze them and try to figure out what happened, without attributing blame. Learning and development depend on cool heads that keep any tendency towards blame and punishment at bay, at least during the analysis phase. In some cases, the result of the inquiry into the causes of the failure may be that some people are actually to blame and are not fit to perform the kind of task in question. That conclusion, however, should come at the end of careful reflection about the probable causes, all through the spectrum of reasons for failure that Edmondson outlines.

The attitude of performing an autopsy without blame can be crucial when interviewing people about their own behavior and that of their colleagues in the events leading up to an accident. Whether the interviewer focuses on (i) causes or (ii) blame is likely to affect the openness of the interviewee. If the latter senses that (ii) is the prime perspective, answers tend to become more defensive and guarded, and the likelihood of getting a full and honest account of the events at hand decreases. In aviation, autopsy without blame has become common practice and has contributed to improved safety (Stoop & Kahan, 2005), while in healthcare, a blame focus has been documented to inhibit the reporting of medical failure (Bond, 2008; Waring, 2005). Lessons from aviation on dealing with fallibility and blame to strengthen safety have received increasing interest in healthcare and medicine. Chapters 4 and 5 in this book will explore in further detail alternative approaches to fallibility at work in both these sectors of organizational life.

3 Three Obstacles

Learning from failure requires that missteps are detected and brought to the surface. In organizational settings, whether or not that happens depends on the communication climate, and particularly on the extent to which it is normal for employees to speak up when they sense that something is wrong with a project or initiative. The climate and the individuals who operate in it are put to the test in critical quality moments, situations where the next thing to happen determines whether events unfold in a positive or negative manner. Research in social psychology has identified cognitive biases that tend to hamper our abilities to act rationally in concrete circumstances. Three of them are particularly relevant in the context of voicing concerns about failures and mistakes. First, the sunk-cost fallacy is the tendency we have to remain committed to a decision or plan, even when we know that it is not living up to expectations. Second, the bystander effect indicates that the more people who witness a failure and are in a position to intervene, the lower the likelihood that anybody actually will. Third, the confirmation fallacy makes us stick to initial assumptions and beliefs about states of affairs, and overlook information that gives us reason to revise them.

In decision-making and economics, a sunk cost is a cost that has already been incurred and cannot be recovered (Kahneman & Tversky, 1979). From the perspective of rational decision-making, sunk costs should not affect current decisions about how to go forward, since whatever the decision-maker does from now on will not change the fact of that cost. Only prospective costs should be taken into consideration. In reality, sunk costs do influence decision-making and can make people pursue projects and plans that are not living up to expectations, or are not in line with their current priorities (Fischer, Greitemeyer, Pollozek, & Frey, 2006; Friedman, Pommerenke, Lukose, Milam, & Huberman, 2007). The sunk-cost fallacy is sometimes also called the Concorde fallacy, after the escalating and expensive efforts to make a success of that supersonic airplane (Arkes & Ayton, 1999).

Research on the sunk-cost fallacy has identified two psychological explanations for the bias. One is that information about failure creates cognitive dissonance (Gilad, Kaish, & Loeb, 1987; Staw, 1976). We want to believe that the initial decision was rational and correct, and now face information to the contrary. One way to reduce the mental discomfort of cognitive dissonance is to strengthen the belief in the decision to go ahead. Self-justification can take the form of continuing to add resources to a project, thus keeping the discomfort at bay and prolonging a bad project. We can agree with the saying that if you have dug yourself into a hole, you should stop digging, but in reality, we struggle to live in accordance with that claim. The commitment to pour further resources into the project appears to be stronger the more personally responsible the decision-maker takes him- or herself to be for the initial decision to start it (Bazerman, Giuliano, & Appelman, 1984; Staw, 1976).

The second explanation for the sunk-cost bias is loss aversion, or misgivings about wasting resources (Kahneman & Tversky, 1979). When a person has bought a non-refundable ticket for a theatre show and finds on the evening of the show that another way to spend the evening appears much more attractive, the sunk-cost fallacy can make that person decide to go to the theatre show after all, in order not to have wasted money on the ticket. Economists will claim that the person has the choice between double and single suffering, that is (1) the suffering of having paid for the ticket and the suffering of a suboptimal evening at the theatre, and (2) the suffering of having paid for the ticket and the pleasure of a better evening away from the theatre. Of these options, (2) is clearly the more rational, but in real life we can see a tendency to choose (1) (Arkes & Blumer, 1985).
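
To make the comparison explicit, the two options can be written as simple payoffs. The notation below is an illustrative sketch introduced here, not taken from the cited sources: let $c$ be the non-refundable ticket price (the sunk cost), $u_{t}$ the value of the evening at the theatre, and $u_{a}$ the value of the alternative evening.

\[
U_{1} = -c + u_{t}, \qquad U_{2} = -c + u_{a}.
\]

Since the sunk cost $-c$ appears in both payoffs, it cancels out of the comparison: $U_{2} > U_{1}$ exactly when $u_{a} > u_{t}$, regardless of how much was paid for the ticket. The rational choice therefore depends only on the prospective value of the two evenings, which is precisely why letting the ticket price tip the decision counts as a fallacy.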

Bolstad’s decision to terminate the E-archive project can be seen as a successful effort to overcome the sunk-cost fallacy. Considerable resources had already been invested in the project, and a decision to stop it would reflect badly on those who had decided to go ahead with it. The first explanation of the sunk-cost fallacy indicates that Bolstad and her top management team may have been inclined to continue the project, to keep the cognitive dissonance of admitting a previous mistake at bay. Furthermore, they faced a choice between (1) the suffering of having spent time and money on a failed project, coupled with the suffering of failing to create a well-functioning digital depot, and (2) the same suffering of having used resources on a failed project, coupled with the opportunity to pursue new initiatives, better designed for the purpose of delivering a functional digital archive for the Norwegian public sector.

The bystander effect is another psychological phenomenon that can stand in the way of effective communication about actual and emerging failures. Studies show that the presence of other people in a critical situation reduces the likelihood that a person will help. The more people who are present as bystanders, the less likely it is that a person will take the initiative to help (Fischer et al., 2011; Latané, 1981; Latané & Darley, 1976; Latané & Nida, 1981). It has also been documented that people do not have to be physically present for bystander effects to occur, as they can also affect interactions on the internet (Barron & Yechiam, 2002; Blair, Thompson, & Wuensch, 2005). The phenomenon has been invoked in explanations of social networking (Chiu & Chang, 2015) and of the effectiveness of loyalty program marketing (Steinhoff & Palmatier, 2016). Bystander effects can also occur among small children (Plötner, Over, Carpenter, & Tomasello, 2015).

It has not been empirically tested whether bystander effects can occur in organizational settings where employees are aware of weaknesses or mistakes in projects, but findings in other areas of research make it plausible that, even in such contexts, the likelihood that anybody will intervene to help in a project crisis can be affected by the size of the group of bystanders. The two main explanations of the bystander effect probably transfer to organizational settings. First, diffusion of responsibility is the tendency we have to attribute individual responsibility based on the number of people who are present (Darley & Latané, 1968). We tend to see the responsibility to intervene and do something as one particular entity, shared evenly and fairly among the people who are present. According to this line of thinking, if 100 people are present, each of us has roughly 1/100 of the responsibility to do something. That is a very tiny piece of responsibility, and each of us can walk away from the situation without having done anything, and without a bad conscience. If 50 people are present, each of us has about 1/50 of the responsibility to intervene, twice as much as in the first scenario, but still only a minimal amount. The moral reasoning behind diffusion of responsibility is flawed (Parfit, 1984). It seems more reasonable to attribute responsibility on the basis of what each individual is capable of doing, and to give less weight to the number of people present. Despite philosophical arguments to the contrary, however, diffusion of responsibility is a common and stable feature of human behavior.
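
The arithmetic behind this flawed intuition can be stated as a small formal sketch; the notation is introduced here for illustration and is not taken from the cited sources. If $n$ bystanders treat a fixed total responsibility of $1$ as shared evenly, each person's felt share is

\[
r_{i} = \frac{1}{n},
\]

which shrinks toward zero as $n$ grows, even though the situation calls for an intervention just as urgently as when $n = 1$. The Parfit-style objection is that each person's responsibility should instead track what that person is actually able to do in the situation, which does not diminish simply because others happen to be present.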

The second cause of the bystander effect is the well-documented phenomenon of pluralistic ignorance, the tendency we have to adjust and correct our own judgement of the situation at hand in light of what we take to be other people’s judgements of it (Beu, Buckley, & Harvey, 2000; Zhu & Westphal, 2011). A person may initially believe that the individuals in front of him or her need help. If a crowd of other people behave as if that is not the case, the person can mistakenly assume (i) that he or she is the only one present who believes that those individuals need help, and (ii) that the initial belief is false. A bystander effect can occur in a real and acute crisis when individuals start to doubt their own judgement due to the passivity of the people around them. Initial alarm at seeing other people in distress can vanish at the sight of a calm crowd.

It is possible to imagine similar processes in organizations, when initially promising ideas and plans turn out to have significant weaknesses. Bystander effects can put the detection of failure in a project on hold. First, a large group of people may have access to the relevant information, but diffusion of responsibility can set in and make each of them believe that they only have a microscopic responsibility for voicing their concern, given the considerable size of the group that has the same information. Second, pluralistic ignorance can make each of those who have doubts about the project adjust their judgement because nobody else shows any signs of questioning the quality of the project. These two phenomena in tandem can cause a bystander effect, and thus a continuation of projects that should have been identified as failures.

Even though the bystander effect lacks a reasonable foundation, it poses a challenge in organizational contexts where it is important to detect failure quickly and forcefully. One way to neutralize it can be to address individuals one by one and ask them for feedback about the particular project. If the project owner asks 100 people simultaneously about their beliefs about the current state of the project, face-to-face in an auditorium or through digital media, each of them is likely to assume that they only have 1/100 of the responsibility to respond. In order to overcome that effect, the project owner can address one individual at a time and invite a response. That places the task of responding firmly in the lap of one individual and preempts diffusion of responsibility. A move of this kind is also likely to puncture pluralistic ignorance, since the respondent is now invited to express his or her personal beliefs, and not those of the entire group. Addressing one respondent at a time does not guarantee that the feedback is of high quality, but at least it appears to be an effective way of neutralizing the bystander effect.

The third psychological phenomenon that can affect identification of failure is the confirmation fallacy. People tend to notice information that confirms their current beliefs, and disregard information that provides them with reasons to reconsider those beliefs (Hart et al., 2009; Nickerson, 1998; Shefrin, 2007). Perception psychology has identified one particular way that the confirmation fallacy can set in, focusing on the assumption that in order to see something, one simply needs to direct one’s eyes toward it. Simons and Chabris (1999) have challenged that assumption, most notably through their so-called gorilla experiment. In that experiment, an audience watches a short film, where three people in white clothes and three people in black clothes walk around in a small area, passing basketballs to each other. The task for the audience is to count the number of times the white team manages to pass the ball to each other, while ignoring what the black team is doing. After seeing the film, the audience is asked whether they noticed anything else happening in it. Some people claim to have seen a black figure walking across the playing field. When watching the film for the second time, now without the task of counting passes, everybody can see that a person dressed up as a gorilla walks slowly into the frame, stops in the middle of it, bangs his or her chest, and walks slowly out again. The gorilla is big, and people who do not see it the first time are amazed and surprised that they could fail to do so. Kahneman (2010, p. 24) has noted how the gorilla experiment illustrates the double nature of this blindness: “We can be blind to the obvious, and we are also blind to our blindness.” The research label for the phenomenon is inattentional blindness (Kreitz, Furley, Memmert, & Simons, 2016; Mack, 2003; Simons & Chabris, 1999).

In an organizational context, the people involved can have fixed beliefs about the quality of a project or idea, and about the competence of the people involved in realizing it, and overlook information that gives them reason to reconsider. The beliefs may be more optimistic and positive than the available information warrants, but they may also be more pessimistic and negative. Looking back at examples from the current chapter, the confirmation fallacy can stand in the way of realizing that:

  • What appears to be a good idea is actually a failure (E-archive).

  • What appears to be a failure is actually a good idea (Post-it/Viagra/Bottura’s lemon tart).

There can be similar challenges with regard to taking in information about the competence and behavior of people who have a particular status in their professional environments:

  • A person who has the status of being an expert is actually making or proposing a mistake.

  • A person who has the status of being not that good is actually doing or proposing the right thing.

In order to overcome the confirmation fallacy, it can be necessary to invite other people to look at the situation and inquire about their perceptions of it. Research and experience provide ample evidence of how powerful and pervasive the fallacy is, and how dependent we are, at individual, group, and organizational levels, on a communication climate where people speak up when they notice events and occurrences out of the ordinary.

This chapter has focused on the role of failure in innovative processes. Failure is an integral part of testing hypotheses and ideas about how things work, and in competitive contexts, it can be crucial to be able to fail fast. However, the stigma of failure can be present in many organizational contexts, leading to the continuation of projects that should have been terminated. The National Archives of Norway managed to break the stigma and stop the first attempt to develop a comprehensive digital depot for the public sector. In the process of doing so, they more or less explicitly overcame three psychological obstacles to learning from mistakes, in that they were not derailed by (i) the sunk-cost fallacy, (ii) the bystander effect, or (iii) the confirmation fallacy. They were also able to avoid the kind of blame game that often characterizes the period after an organization has experienced failure. The coming chapters will discuss examples from other organizational settings, where the ambition may differ from that of innovative processes, but the obstacles to detecting failure and voicing concern are similar. Even in those contexts, individuals can be blind to important aspects of their work, and blind to that blindness. They depend upon colleagues or other individuals in their proximity to speak up and intervene in critical quality moments, the situations where what happens next will determine whether things turn out well or not.