On a day when I was working on this book, I got a phone call from a woman who wanted me to contribute a presentation to an ethics day in her organization. Her idea was to make leaders and employees in the organization familiar with the Navigation Wheel and other ethical tools and principles. I was willing to contribute, but had to check my calendar first. It turned out that I already had an appointment that day, to give an ethics talk in another organization, so I regrettably had to decline the invitation. When the woman heard this news, she hesitated for a brief moment before saying: “What if we pay you a bit extra to come to us?” To my astonishment, she was offering to pay me to break my promise to give an ethics talk in one organization in order to give an ethics talk at her workplace instead. I responded by asking her to think through the offer one more time and consider whether she meant it seriously. It did not take her long to realize how inappropriate her suggestion was, particularly in light of the topic of the seminar day. She had simply been so eager to get the program for the day in place, with me as one of the contributors, that for a moment she was blind to the moral aspect of the situation, and suggested something that she realized on second thought was out of the question.

Moral blindness is something that can strike any decision-maker in an organization. People are engaged in absorbing and complex tasks and are supposed to deal with them quickly in order to be ready for further challenges at work. In the heat of the moment, they can become blind to important aspects of the situation. The perception psychologists Christopher Chabris and Daniel Simons made a short film to illustrate how people’s attention in a given situation is selective and vulnerable to manipulation (Simons & Chabris, 1999). The film shows six people, three in white clothes and three in black clothes, walking around in circles while passing basketballs to each other. Each team has one basketball, and the team members pass it among themselves while they are constantly on the move.

The assignment for the film’s audience is to count the number of times the team wearing white manages to pass the basketball to each other. Those who really concentrate on the task come up with the correct answer, which is 15. A facilitator then asks whether they noticed anything else during the film. Some report that they glimpsed a black figure walking across the screen. To check this observation, the film is run one more time, and on this occasion everybody can see the big gorilla figure walking slowly into the frame, stopping in the middle of it, banging its chest, and then moving slowly out of the picture. The gorilla takes up a lot of space, and the people who are blind to it on the first showing find it hard to believe that it can be the same film. Those who fail to spot the gorilla the first time around are typically very surprised by their inability to do so, which has led Kahneman (2013) to remark that they are not only aspect blind, but also blind to that particular blindness. The gorilla experiment challenges the assumption that all people need to do to figure out what is happening in a particular part of their surroundings is to turn their faces in that direction and take a look.

The invisible gorilla can function as a symbol of the significant aspects of people’s own working environment that they can be blind to while performing complex tasks on a tight time schedule. Some of these aspects can be morally significant. People can be morally blind due to the complexity of the situation and the demands that are put on them, and also as a result of economic and other incentives. Bird (1996, p. 85) defined moral blindness in the following way: “People are morally blind when they fail to see or recognize moral concerns and expectations that bear upon their activities and involvements.”

Moral blindness is a different form of blindness from the one mentioned in the first chapter of this book, where the decision-maker deliberately adopts the position of willful blindness, deciding to turn a blind eye to the case at hand, not wanting to know the details. Gorilla blindness occurs involuntarily, as a result of our limited perceptual capacities. Conflict-of-interest issues can become invisible to professionals when high personal ambition makes self-interest overshadow client interest. Ariely’s physician, mentioned in Chap. 11, appears to have become morally blind in this manner, in his efforts to recruit the third patient he needed in order to move forward with an academic paper. Financial advisors in many countries appeared to lose sight of client interest prior to the financial crisis, making it possible for them to recommend and sell questionable products without experiencing moral dissonance. Moral blindness can occur in any organization, including institutions where people research and teach on the topic of ethics in organizations. In line with Kahneman’s remark that people are blind to their own aspect blindness, it may also be that leaders and employees in organizations are blind to moral aspects of what they are doing at work, and also blind to that particular blindness.

One of the paths to moral blindness goes through the process of moral neutralization, where the decision-maker convinces himself or herself to leave behind initial moral misgivings about a particular option. Once a person or an environment has crossed that threshold, it seems difficult to return to the state where the option in question seemed morally dubious. The moral aspects we could see from the old perspective are now invisible, like the gorilla in the film. As noted in the previous chapter, Hamilton and Coyle (2012) have described how individuals in a tight and loyal collective like Lance Armstrong’s cycling team can strengthen each other’s firm beliefs that their cheating behavior is beyond reasonable reproach.

Organizations that are serious about ethics depend on a communication climate where the normal response when an employee has moral doubts about a particular course of action is for him or her to speak up and address the issue. When deciding to voice a moral concern, the employee should ideally not have to fear negative sanctions from colleagues and leadership. Moral muteness (Bird, 1996) can be a feature of organizations where people are afraid to speak their minds on moral matters: “Many people hold moral convictions yet fail to verbalize them. They remain silent out of deference to the judgments of others, out of fear that their comments will be ignored, or out of uncertainty that what they might have to say is really not that important” (Bird, 1996, p. 1). Individuals in organizations can have the impression that they are alone in having moral misgivings about how their workplace operates. They can be unaware that colleagues in the same unit actually share their moral concerns, since they never raise the issue and address the topic collectively.

It is in this context that the category of relational moral luck, briefly introduced in Chap. 4, makes good sense. A decision-maker can be fortunate or unfortunate with the people in his or her social surroundings at the crucial moment of responding to a moral dilemma, whether it be a real or a false one. That particular social environment can be one where people normally challenge and support each other critically in such situations, or one where nobody bats an eyelid when a colleague enters morally questionable territory. Which of these surroundings you find yourself in is not merely due to luck, as we do make decisions about the kind of organization we want to work in and belong to. However, the communication climate of a workplace might be something we only gradually become aware of, and coincidence and luck can play a part in deciding whether or not we end up with colleagues who care enough to intervene.

Two phenomena identified in social psychology highlight how crucial it can be to establish a constructive communication climate in organizations. They are relevant for judgment and decision-making in the workplace beyond ethics. Confirmation bias is the tendency we have to notice and seek out information that confirms our beliefs, and to be inattentive to information that gives us reason to change them. The phenomenon is well documented in research (Nickerson, 1998) and produces formidable challenges in many professions. Police investigators can make up their minds about who has committed a crime, and then only pursue and notice information that confirms that conclusion. Teachers can have preconceptions about the intelligence and abilities of their pupils, and fail to see upward and downward spirals in their development. Researchers can be so satisfied with their hypotheses and explanations of phenomena that they become blind to glaring counterevidence and reasons to revise them. In these and other professions, knowledge about confirmation bias is part of the professional training. It is nevertheless a pervasive decision-making trap, and one that underlines the need for communication climates where colleagues look out for each other and intervene when someone at work stubbornly holds on to a belief or viewpoint rather than revising it in the light of new and relevant information.

The other psychological phenomenon that can slow down a process of identifying and addressing morally relevant aspects of behavior in an organization is the bystander effect. Research on human behavior in real situations and in experiments shows that the greater the number of bystanders to an event where somebody needs help, the less likely it is that any one of them will actually help (Darley & Latané, 1968; Greitemeyer & Mügge, 2015; Latané & Darley, 1976). One cause of this effect seems to be what Darley and Latané (1968) called diffusion of responsibility. People consider their responsibility to help in a situation to be one unit that they share evenly with the other people at the scene. If there are one hundred bystanders to a critical situation, they seem unconsciously to split responsibility into one hundred tiny pieces, leaving each of them with one hundredth of a responsibility to intervene and help. That is a very small piece of responsibility for one individual, who can turn his or her back on the situation without a guilty conscience. If there are instead fifty bystanders, each share of responsibility is double that in the previous situation, but one fiftieth of a responsibility is still very little. This way of thinking is an instance of what Parfit (1984) labeled mistakes in moral mathematics. People do have individual responsibilities to help, no matter how many others are present. It is unreasonable to consider responsibility to be one cake that people share evenly in thin slices. Each individual has his or her own cake of responsibility.
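Put schematically (a formalization of my own, not notation taken from Parfit), the mistaken calculation treats responsibility as a fixed total of 1 divided among the n bystanders, while the corrected view leaves each person’s responsibility whole:

\[
r_{\text{mistaken}}(n) = \frac{1}{n}, \qquad r_{\text{correct}}(n) = 1 \quad \text{for every bystander}
\]

With n = 100, the mistaken model leaves each person with a share of 0.01, which feels negligible, whereas on the corrected view each of the hundred carries an undivided responsibility to intervene, however many others are present.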

Another cause of the bystander effect is pluralistic ignorance, the fact that each of us tends to interpret the inactivity of the others as a sign that nothing serious is going on, and that there is no reason to engage (Darley & Latané, 1968). From my perspective, it looks like the man is hurt and needs help, but nobody else in the crowd appears to think so. My initial judgment of the situation appears to be wrong, since everybody else is passive. I might be too sensitive in my interpretation, overdramatizing the situation in my head. It looks like a gorilla, but nobody else shows any sign of seeing it, so perhaps it is an illusion. This inclination to doubt one’s own evaluation tends to grow with the number of bystanders.

The bystander effect is relevant for ethics in organizations in that the number of people who perceive that there is something morally wrong with the setup of a particular project, with the relationships with suppliers, or with new products or sales methods, affects the likelihood that anybody will take the initiative to criticize them (Beu, Buckley, & Harvey, 2000; Zhu & Westphal, 2011). Here too, it is probable that the higher the number of bystanders, the lower the likelihood of an intervention. Knowledge about the bystander effect may weaken it, as suggested by Mele and Shepherd (2013). It is thus worthwhile to make leaders and employees in organizations aware of it, for reasons that go beyond ethics. It is also possible to counter the effect by delegating responsibility to particular individuals. If you need help and are surrounded by bystanders, you should point to one person and ask that person for help, rather than shout for help in the general direction of everyone. Addressing one person directly with a call for help has the positive double effect of (1) disrupting the mistake in moral mathematics of splitting responsibility into tiny pieces, and (2) avoiding pluralistic ignorance, the misperception that everything is as it should be.
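A toy calculation can illustrate how shrinking individual responsibility plays out in a group. The sketch below is an illustrative model of my own, not an empirical result from the studies cited above: the baseline probability p_alone is an assumption, and each of n bystanders is supposed to intervene independently with probability p_alone / n, mimicking the responsibility-splitting mistake.

    # Toy model of diffusion of responsibility (illustrative assumptions only).
    # p_alone is an assumed probability that a lone bystander intervenes.
    # The "moral mathematics" mistake scales each individual's felt
    # responsibility, and hence probability of acting, down to p_alone / n.

    def p_anyone_intervenes(n: int, p_alone: float = 0.9) -> float:
        """Probability that at least one of n bystanders intervenes."""
        p_each = p_alone / n
        return 1 - (1 - p_each) ** n

    for n in (1, 2, 10, 50, 100):
        print(f"{n:3d} bystanders -> P(anyone intervenes) = {p_anyone_intervenes(n):.2f}")

On these assumptions, the probability that anyone at all intervenes falls from 0.90 for a lone bystander to roughly 0.60 for a crowd of one hundred, whereas pointing to one named person restores the full baseline probability for that individual.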

Mary Gentile has developed the concept of Giving Voice to Values (GVV) as a method for individuals at work to stand up for their moral beliefs and values, even when they are under pressure from colleagues, leaders, customers, and other stakeholders not to do so (Gentile, 2010, 2012). GVV has generated considerable research interest (Edwards & Kirkham, 2014; Gonzalez-Padron, Ferrell, Ferrell, & Smith, 2012) and has also inspired practitioners in organizations. It encourages people to overcome moral muteness and speak their minds when they observe decision-making and conduct that go against their moral values. It also provides concrete action plans and scripts for people who want to become better at giving voice to their values at work. In many ways, GVV seems designed to address the needs identified in this book: to intervene when colleagues engage in moral neutralization and gradually become blind to moral aspects of their own behavior.

There is much to commend and admire in Gentile’s approach, but it is problematic in one important respect. The subtitle of the GVV book is “How to speak your mind when you know what’s right,” and the tone of actually knowing what is right is prevalent in the text. Gentile offers practical advice to individuals who clearly see how things stand and what it will mean to stand up for their values in the situation, and who need to go from conviction to enactment. Research on the bystander effect and similar phenomena indicates that people are often in situations where they do not know what is right, but have doubts about how to interpret what is unfolding in front of them. They somehow need to give voice to that doubt, and not remain passive. The starting position of being a person who knows full well what is right and true does not invite dialogue or attention to how other people see the situation. It is not a position of listening to other perspectives and being open to revising one’s beliefs and assumptions.

The label for an alternative approach can be to give voice to doubt rather than to values. Uncertainty and doubt appear to be a more realistic and constructive starting point for conversations about right and wrong than one where people have made up their minds in advance. One frame of reference can be that of Socratic dialogue, where the aim is to engage in inquiry and questioning in order to reach consensus on an issue. Brinkmann, Lindemann, and Sims (2016) have suggested a design inspired by the idea of Socratic dialogue, where people engage in a collaborative search for truth regarding a particular normative question. In essence, the Socratic design invites respect for the myriad perspectives that deserve a hearing when people try to reach a common understanding of how to act in a particular situation.

The aim of this book has been to suggest ways to rethink and restructure ethics in organizations. Decision-makers in organizations, both leaders and employees, face moral dilemmas where they need to give appropriate weight to legal, ethical, moral, reputational, economic, and value-based aspects of the situation. They cannot rely solely on moral intuition or gut feeling (Kahneman’s System 1 thinking), but also need to be able to analyze the situation carefully (Kahneman’s System 2 thinking). The combination of good analytical skills and stable character can make an individual well equipped to meet moral dilemmas, but studies in behavioral economics, social psychology, and criminology show that more or less anybody can become entangled in moral wrongdoing, depending on the circumstances. In organizations, people depend upon colleagues to intervene and stop them when ambition and other factors tempt them to take moral shortcuts. It can be enough that colleagues raise doubts about the path a person is contemplating, since that provides reason to rethink and reconsider. Ethics in organizations can build on a rich array of research and knowledge from well beyond the traditional sources of moral philosophy. Doing so can make workplaces less vulnerable to the unpredictable and erratic activities of invisible gorillas.