Introduction to Volume 2—The “Twisted Road” to Auschwitz

  • Nestar Russell
Open Access


In this chapter, Russell provides the reader with an insightful yet concise overview of Volume 1’s key arguments and conclusions (in fact, if the reader is cognizant of Stanley Milgram’s Obedience studies, this overview makes it possible for those most interested in the Holocaust to read Volume 2 without having read Volume 1). Russell also presents the direction he plans to pursue in the remainder of Volume 2—the specific Milgram-Holocaust linkage.

What, in terms of a brief synopsis, were Volume 1’s key arguments and conclusions? At the risk of repeating what I said at the end of Volume 1, that volume set out by presenting a resilient conundrum in Holocaust studies. That is, considering that many specialist historians agree that during the Nazi era most Germans were only moderately antisemitic, 1 how during the Holocaust did they so quickly prove capable of slaughtering millions of Jews? I argued that social psychologist Stanley Milgram’s Obedience to Authority experiments may hold key insights into answering this perplexing question. Milgram’s main discovery was that 65% of ordinary people in his laboratory willingly, if hesitantly, followed an experimenter’s commands to inflict seemingly intense—perhaps even lethal—electrical shocks on a “likable” person. 2 When participants were asked why they completed this experiment, much like the Nazi war criminals, they typically said they were just following higher orders. 3 There is no shortage of scholars who, like Milgram, sensed similarities between the Obedience studies and the Holocaust. 4 These parallels have so frequently been drawn that Arthur G. Miller collectively termed them the Milgram-Holocaust linkage. 5 An increasing number of scholars, however, have challenged the validity of these similarities by demonstrating how the Obedience studies differ from, or conflict with, the Holocaust’s finer historical details. One of many possible examples is that unlike during the Holocaust, Milgram’s participants were clearly concerned about the well-being of their “victim.” Despite this trend, Miller notes one behavioral similarity that I believe merits further attention: “Milgram’s results could be likened to the Holocaust itself. 
Both scenarios revealed ordinary people willing to treat other people with unimaginable cruelty…” 6 Extrapolating from this observation, I suggest that if it was possible to delineate Milgram’s start-to-finish inventive journey in transforming most of his participants into compliant inflictors of harm on a likeable person, perhaps the insights gained might shed new light into how only moderately antisemitic Germans so quickly became willing executioners.

So how then was Milgram able to quickly transform most ordinary people into torturers of a likeable person? I argue he did so by deploying formally rational techniques of discovery and organization. To be clear, what exactly did I mean by the term formal rationality?

Formal Rationality

Max Weber conceives formal rationality as the search for the optimum means to a given end—the “one best way” to goal achievement. Weber’s model of a formally rationalized strategy was bureaucracy, an organizational process designed to discover that one best way. To construct the “one best” bureaucratic process, managers break an organizational goal into a variety of discrete tasks, the achievement of which they allocate to different specialist functionaries or bureaucrats. Following a predetermined sequence, each bureaucrat performs their specialist task according to certain rules and regulations, after which the next bureaucrat in the organizational chain performs theirs, until the goal is achieved.

The specific rules and regulations each bureaucrat follows are determined by what “past history” has suggested to managers is probably the one best way to goal achievement. 7 That is, as bureaucrats perform particular tasks, over time a manager’s intuitive feel, previous experiences, and observations of the process in action lead them to the incremental discovery of even better strategies, generating new and even more efficient rules and regulations for their bureaucrats to follow. Weber’s characteristics of bureaucracy (as an ideal type) include specialized labor, a well-defined hierarchy, clearly defined responsibilities, a system of rules and procedures, impersonality of relations, promotion based on qualifications, the centralization of authority, and written records. 8

Building on Weber’s legacy, George Ritzer argues that organizational strategies like bureaucracy have four main components: efficiency, predictability, control, and calculability (E.P.C.C.). 9 Efficiency is the pursuit of a shorter or faster route to goal achievement—the optimal means to a desired end. Predictability is the preference that all variables operate in a standardized and thus foreseeable way, thereby enabling managers to steer an organization toward future beneficial outcomes. Control is greater manipulative command over all factors and therefore the elimination of as many uncertainties as possible. Greater control enables greater predictability (especially as human labor is, over time, replaced by more controllable, predictable, and efficient non-human technologies). Finally, calculability involves the quantification of as many factors as possible. Advances in calculability enable greater measurement, which extends control over more variables and in turn improves the predictability of future outcomes. The greater the degree of formal rationality (advances in E.P.C.C.), the greater the chance of discovering the “one best way” of arriving at organizational goal achievement, whatever it might be.

The one best way of producing motor cars over the past century or so provides an excellent example of advancing E.P.C.C. The production of the first-ever motor cars involved a few skilled engineers and tradespeople laboriously constructing and then attaching handcrafted parts to a stationary vehicle frame. This technique was not only slow (inefficient) but also unpredictable as the variable, non-standardized car parts ensured an equally variable end-product. Furthermore, because the engineers and tradespeople’s skills were rare, they could resist management’s coercive attempts to make them work faster by, for example, threatening to quit or go on strike (uncontrollable). Because control and predictability were low, management struggled to calculate daily, monthly, and annual production outputs. Thus, E.P.C.C. in relation to the one best way of manufacturing motor vehicles was low.

Henry Ford then invented the inherently bureaucratic motor car assembly-line production process. In Ford’s factory, a line of vehicle frames moved along a conveyor belt. The frames moved past many specialist assembly workers, each of whom sequentially attached a standardized car part. At the end of the moving line, a constant flow of assembled vehicles emerged. Ford’s moving line caused production efficiency to greatly increase. The standardized car parts meant identical end products, and thus predictability also increased. The set speed of the moving line enabled Ford to quantify daily, monthly, and annual output, thus increasing calculability. But it was perhaps control that advanced the most. If one worker failed to keep up with the speed of the moving line, to the frustration of other workers and management alike, a bottleneck might form. Therefore, the set speed of the moving line in conjunction with a fear of falling behind pushed workers to perform their tasks faster than they probably would have of their own accord. The assembly line is therefore an early example of a more efficient non-human technology capable of imposing greater workforce control—all felt pushed by an unsympathetic machine into working quickly. 10 And if workers resisted the set speed of Ford’s moving line (by quitting or going on strike), because they were unskilled, he could more easily replace them. Ford’s “one best way” of producing motor vehicles increased all four components of a formally rational system. It transpires that Ford developed this revolutionary system by relying on his intuitive feel of what might work best, his previous life experiences, and his real-time observations of the emerging production process. Thus, it was past history that supplied him with new and potentially more effective “one best ways” of ensuring goal achievement—improved rules and regulations for his workers to follow.
Ford was repeatedly supplied with new ways of producing motor cars, and eventually, he settled on what he believed to be the one best way.

But rationalization did not stop there. Because workers’ tasks were purposefully simple, advances in technology eventually rendered their labor susceptible to replacement. By the end of the twentieth century, the automation of the motor vehicle industry had taken Fordism to new heights, substituting (where possible) human labor with computer-guided, high-tech robots. These robots could be programmed (greater calculability) to perform the same tasks without variation (greater predictability), with no risk of labor disputes (greater control), and without a break at higher speeds (greater efficiency).

As the history of motor vehicle production illustrates, formally rational organizational processes have gained greater and greater control over employees. These organizational processes modified human behavior in Ford’s factories to the point that workers’ movements started to resemble machine-like actions. And the closer human actions resembled those of machines, the easier it became to eventually replace them with actual machines—what Ritzer terms “the ultimate stage in control over people…” 11 Ritzer implies here that perhaps the greatest threat to a desired end is human labor—that is, people. Humans are notoriously unpredictable, because, unlike non-human technology, they have proven very difficult for goal-directed managers to control. 12

So how, then, did Milgram deploy formally rational techniques of discovery and organization to convert (ostensibly) most of his ordinary participants into torturers of a likeable person? Documents obtained from Milgram’s personal archive held at Yale University reveal this transformative journey.

The Invention of the Obedience Studies

Volume 1 illustrated that throughout and beyond his formative years, Milgram took an uneasy yet keen interest in the Holocaust. Around the time Milgram was completing his Ph.D. in social psychology, Nazi bureaucrat Adolf Eichmann was captured, put on trial, and executed. Like many Nazis before him, Eichmann justified his actions by arguing that he had only followed higher orders to send millions of Jews to the Nazi death camps. The Nazi perpetrators’ favorite justification caused Milgram to wonder if most ordinary (albeit American) people in a social science experiment would also follow orders to inflict harm. For such an experiment to garner scholarly attention, Milgram knew it would have to obtain eye-catching results (nobody would be surprised by a low rate of obedience to hurt an innocent person). So Milgram’s research was founded on a preconceived goal: to run an experiment that would “maximize obedience.” 13 Because Milgram did not have an experimental procedure capable of generating such a result, in the role of project manager, he had to invent a means capable of achieving his preconceived end. At some level, he obviously sensed that inventing such a procedure might be possible.

His first attempt at developing a basic procedure was—as first attempts usually are—rudimentary. Drawing on his previous experience as an observer of Nazi war crimes trials and what he thought had caused the Holocaust—small steps toward a radical outcome, pledges of allegiance, group pressure, and strict obedience to harmful orders—he envisioned a procedure where participants pledged to obey orders to “Tap” and eventually “Slug” an innocent person. During this experiment, Milgram planned to insert a participant among a group of actors who all happened to be in favor of inflicting harm on an innocent person. He also envisioned a control condition: A higher authority figure was to instruct a single participant to inflict harm on an innocent person. By inserting into his experimental program what he then thought were the Nazis’ most effective techniques of coercion, Milgram aimed to simulate the Holocaust in a laboratory setting. Despite his ambitions, however, he soon sensed his initial idea would fail to maximize obedience. With one eye on his end goal, Milgram developed a new idea drawing on his previous experience as a psychologist and an intuitive feel for what he thought was more likely to work. He sensed participants would be more likely to inflict harm using a shock machine than by engaging in direct physical violence. Effectively, he substituted human labor with a more predictable, controllable, calculable, and efficient source of non-human technology. Rather than relying on a pledge to obey, Milgram furthermore sensed participants would more likely inflict harm on an innocent person if doing so was morally inverted into a social good. More specifically, participants were told that the purpose of the experiment was to “scientifically” determine if their infliction of “punishment” on a learner would affect this person’s ability to learn (Milgram’s so-called persuasion phase). 
Although his emerging procedure aimed to ensure that most ordinary people inflicted harm, the basic idea also started to look a little less like the Holocaust captured in the laboratory setting. Nonetheless, to determine if the procedure was indeed capable of generating the results he desired, Milgram tasked his students at Yale with running the first series of Obedience study pilots.

By late November 1960, the class was ready to run variations on both the participant among a “group” condition and participant “alone” (control) condition. Throughout the student-run pilots, participants could see the “shocked” learner through a translucent screen. The “group” experiment confirmed Milgram’s prediction that some people would follow along with the crowd. It was the results from the “alone” (control) condition, however, that caught Milgram by complete surprise: About 60% of the participants willingly administered the most intense shocks when an actor dressed as an experimenter instructed them to do so.

During this first series of pilots, Milgram also observed an unexpected behavior: Some participants refused to look at the learner through the translucent screen, yet they continued to inflict every shock asked of them. Similarly, in subsequent variations, other participants attempted to anticipate when exactly the learner was likely to react in pain to the “shocks,” and then, they would try to neutralize these stressed verbal reactions by talking over the top of them. 14 Milgram termed all such behavior “avoidance,” whereby “the subject screens himself from the sensory consequences of his actions.” 15 In doing so, participants did “not permit the stimuli associated with the victim’s suffering to impinge on them […] In this way, the victim is psychologically eliminated as a source of discomfort.” 16 Thus, for participants, avoidance seemed to make it psychologically easier for them to do as they were told and deliver more shocks. Avoidance behavior intrigued Milgram because it raised an interesting question: What would happen if, during future pilots, he substituted the translucent screen with the non-human technology of a solid wall? Would doing so make it even easier for participants to inflict every shock? Would introducing a partition perhaps increase the completion rate above the first pilot’s 60% figure, thus edging Milgram ever closer to his preconceived goal “to create the strongest obedience situation”? 17 Milgram intended to find out.

What becomes apparent is that during the invention of the Obedience studies, Milgram’s basic strategy to improve his emerging official baseline procedure was to retain those innovative ideas that helped “maximize obedience” and abandon those that didn’t. For example, he replaced his idea that participants physically assault the victim with one where they use a shock generator. And he dropped the pledge to obey in favor of an experiment where the infliction of harm was morally inverted into a social good. Finally, although before running the first pilots Milgram intended for the single-participant variation to serve as the control experiment for what he thought would be the more coercive (and successful) group variation, after running the first pilots, the single-participant experiment’s auspiciously high completion rate led him to make it the central concern of the entire experimental venture.

Milgram ended up terming his more effective manipulative techniques either strain resolving mechanisms or binding factors. Strain resolving mechanisms are techniques designed to reduce the tensions normally experienced by a person inflicting harm. Examples of strain resolving mechanisms include the emotionally distancing shock generator and the participants’ comforting belief that their infliction of harm would (apparently) contribute to a greater (scientific) good. Binding factors are powerful bonds that can entrap a person into doing something they might otherwise prefer not to do. Examples of binding factors include the experimenter’s $4.50 payment to participants (which likely promoted feelings of being contractually obligated to do as they were asked); the experimenter’s coercive prods that it was “absolutely essential” participants “continue”; and the shock machine’s gradual escalation in shock intensity that drew many participants into “harming” an innocent person. It seems the more strain resolving mechanisms and binding factors Milgram added to his emerging procedure, the cumulatively stronger his so-called “web of obligation” became. 18

On completing the first pilots, Milgram did “not believe that the students could fully appreciate the significance of what they were viewing…” 19 He knew, however, that the first pilots tested a variety of situational forces he suspected may have played some role in producing the Holocaust. In other words, what the students regarded as a fascinating spectacle, Milgram suspected, might provide insight into the perpetration of the Holocaust. It was probably then that Milgram sensed the enormous potential of his research idea.

More than half a year later, in late July and early August 1961, Milgram, in an attempt to iron out the kinks of his research idea, 20 completed a second and more professional series of pilot studies. In the final variation of these trials, Milgram ran the “Truly Remote Pilot study,” wherein having introduced a solid wall into the basic procedure, participants could neither see nor hear the learner’s reactions to being “shocked.” Milgram’s hypothesis about the effect of a wall proved correct, because in this pilot “virtually all” participants inflicted every shock. 21 The leap from a 60% completion rate in the student-run pilots to something approaching 100% in the Truly Remote Pilot saw Milgram achieve his preconceived goal of maximizing obedience.

So, as shown, before running both pilot series, Milgram relied exclusively on his past experiences and intuitive feel of what strain resolving mechanisms and binding factors he imagined might aid his quest to maximize the emerging basic procedure’s completion rate. But during the pilots Milgram clearly relied on his skills of observation. For example, Milgram’s suspicion (correct, as it turned out) that substituting the translucent screen with a wall might increase the emerging procedure’s completion rate beyond 60% was stimulated by the participants in the first pilot series who turned away from their victim but inflicted every shock. Thus, Milgram’s real-time observations of the pilots led him to a very powerful strain resolving idea—one that was clearly beyond his undeniably impressive powers of imagination.

After completing the second pilot series, the Truly Remote Pilot study’s maximized completion rate signaled to Milgram that he had likely developed a basic procedure that, should he use it as his first official baseline, nearly all participants would complete. That is, his latest procedure was very likely to generate the high rate of obedience he had all along desired. Using Ritzer’s terminology, over time Milgram had gained so much “control” over his participants’ likely behavior that should he make his next trial the official baseline, he was able to roughly “predict” that it would obtain a high (“calculable”) completion rate. Thus, he had found the most “efficient” means of arriving at his preconceived end. 22 A new one best means to his end had emerged. And it was past history —Milgram’s intuitive feel, past experiences, and real-time observations of the pilot studies—that had, through a process of trial and error, gradually led him to the Truly Remote Pilot study’s new and more effective “rules and regulations.” If Milgram wanted to achieve his preconceived goal in the first official trial, all his helpers—his research assistants (Alan Elms and Taketo Murata), actors (John Williams and James McDonough), and participants—just needed to follow the latest and most effective “rules and regulations.” 23

However, running an official baseline experiment that nearly every participant completed raised an unanticipated problem: Such a result would deprive Milgram of any way of identifying individual differences among his participants. 24 Consequently, Milgram deemed it necessary to introduce a strain inducing force to the official baseline procedure—an alteration that he anticipated would slightly increase the proportion of disobedient participants. With the intention of reducing (slightly) the basic procedure’s probable completion rate by increasing participant stress, Milgram decided that the first official baseline experiment would include some auditory perceptual feedback. That is, after the participant inflicted the 300- and 315-volt shocks, Milgram instructed the learner McDonough to kick the wall and then fall silent. In contrast to Milgram’s repeated approach across the pilots to reduce participant tension (and increase their probability of inflicting every shock), the intention behind this procedural addition was obviously to increase their stress levels slightly, in the expectation that a small proportion would then refuse to complete the experiment. Clearly, Milgram had gained so much “control” over his participants’ likely reaction to being in the experiment that he was able to guess (“predictability”) that this latest procedural alteration would likely send his otherwise rising completion rate into a sudden (albeit slight) reverse.

Indeed, on 7 August 1961, Milgram ran his first official baseline experiment, producing a 65% completion rate. Milgram was probably expecting a slightly higher completion rate considering he made only subtle changes (infrequent wall-banging) to the Truly Remote Pilot. Nevertheless, the still surprisingly high completion rate, which garnered much media attention, became his “best-known result” and thus had its intended effect. 25 This was the rationally driven and somewhat circuitous learning process that guided Milgram during the invention of his “one best way” to preconceived goal achievement.

In an attempt to develop a theory capable of explaining this baseline result, Milgram then undertook twenty-two slight variations, the fifth of which he made his “New Baseline.” Unlike its predecessor, in the New Baseline, the learner’s intensifying verbal reactions to being “shocked” could be heard by the participant up until the 330-volt switch (thereafter becoming silent). The more disturbing (eye-catching) New Baseline (or cardiac condition) also obtained a surprisingly high 65% completion rate and went on to serve as the basic model for all subsequent variations. One of the most interesting of these many variations was, in my view, the Peer Administers Shock condition where the experimenter only required the participant to perform the subsidiary (although necessary) task of posing the word pair questions, while another participant (actually an actor) administered shocks for any incorrect answers. In comparison with the New Baseline, this variation ended in a much higher completion rate—92.5% continued to perform their subsidiary role until the three consecutive 450-volt “shocks” had been inflicted. Participants who completed this variation revealed in post-experimental interviews that they did not believe their involvement made them in any way responsible for the learner being shocked—they asserted that only the shock-inflicting peer was at fault (although, of course, if the participant refused to ask any questions, the peer would have been deprived of their rationale for “shocking” the learner). Interestingly, as the results of the other official variations demonstrated, even when participants had to shock the learner themselves, those who completed the protocol were more inclined than those who refused to shift the blame to either the experimenter or learner. 26

Another particularly interesting variation was the Relationship condition, in which the participant had earlier been told to bring to the laboratory someone who was at least an acquaintance. One of this pair became the teacher, the other the learner. Once the learner was strapped into the shock chair and the teacher and experimenter had left the learner’s room, Milgram appeared and informed the learner that the experiment was actually trying to determine if their friend would obey commands to shock them. Then, Milgram trained the learner how to react to the “intensifying shocks” (so that their reactions were similar to those of the usual New Baseline learner). This incomparably unethical condition obtained a 15% completion rate.

Milgram also ran a New Baseline variation in which all the participants were women, and it too obtained a 65% completion rate. With both male and female participants, something selfish seemed to lie behind the individual decision to fulfill their specialist role in the experiment, as the pseudonymous participant Elinor Rosenblum perhaps best illustrated. That is, after completing the experiment, Rosenblum met her actually unharmed learner and explained to him: “You’re an actor, boy. You’re marvelous! Oh, my God, what he [the experimenter] did to me. I’m exhausted. I didn’t want to go on with it. You don’t know what I went through here.” 27 Upon it being revealed that the experiment was a ruse, she interpreted this new reality to mean that she was in fact the only victim of the experiment. And since she was now the victim, Rosenblum felt the learner should be informed about her painful experience—one which, it should not be forgotten, ended in her deciding at some point to perhaps electrocute an innocent person.

Milgram anticipated (incorrectly as Volume 1 shows) that his many variations would eventually isolate what caused most participants to complete the New Baseline condition, and thus lead him to a comprehensive theory of obedience to authority. What Milgram overlooked, however, was not only the omnipotent strain resolving power of his shock generator, but also the formally rational and inherently bureaucratic organizational machine that unobtrusively lay behind it.

Milgram’s Reliance on Formally Rational Techniques of Organization

In conjunction with the above learning process, Volume 1 also revealed how Milgram, again in the role of project manager, recruited his many specialist helpers. To achieve his preconceived goal and collect a full set of data, he required institutional and financial sponsors, along with the aid of several research assistants, actors, and technicians. All, it will be noted, agreed to become complicit in the unethical infliction of potentially dangerous levels of stress on innocent people. Milgram obtained their consent much as the experimenter did with the participants: He convinced them that despite any ethical reservations they might hold, in order to “conquer the disease” of “destructive obedience…” 28 it was necessary they fulfill their specialist roles. That is, by contributing to the infliction of harm, they would help bring about a greater good. On top of morally inverting harm into a social good, Milgram further tempted all his helpers into performing their specialist roles by appealing to their sometimes different self-interested needs or desires: the provision of financial reimbursement, the prospect of organizational prestige, the offer of article co-authorship, and other material benefits. Eventually, a cognitive thread of personal benefit connected every link in the emerging Obedience studies’ organizational chain. Thus, as Milgram anticipated and then applied the most effective motivational formula for each of his helpers, the non-human technology of bureaucracy started to take shape. In turn, the division of labor inherent in this organizational system inadvertently ensured that every functionary helper could, if they so chose, plead ignorance to, displace elsewhere (“pass the buck”), or diffuse (dilute) responsibility for their contributions to a harmful outcome. As Milgram and his helpers made their fractional contributions to organizational goal achievement, a physical disjuncture arose between individual roles and any negative effects. 
And this disjuncture could stimulate responsibility ambiguity among functionaries. Responsibility ambiguity is, as outlined in Volume 1, a general state of confusion within and beyond the bureaucratic process over who is totally, mostly, partially, or not at all responsible for an injurious outcome. 29 When the issue of personal responsibility becomes debatable, some functionaries may genuinely believe they are not responsible for the harmful end result. Responsibility ambiguity, however, can also encourage other functionaries to sense opportunity amid the confusion: They realize they can continue contributing to and personally benefiting from harm infliction safe in the knowledge that they can probably do so with impunity. In this case, the responsibility ambiguity across the bureaucratic process likely provided potent strain resolving conditions that made it possible, even attractive, for every link in the Obedience studies’ organizational chain to plead ignorance to, displace, or diffuse responsibility for their harmful contributions. Because Milgram and his helpers either genuinely didn’t feel responsible for their eventually harmful contributions or (more likely) realized that, even if they did, they at least probably didn’t appear so, individual levels of strain subsided, clearing the ethical way to remain involved in the personally beneficial study.

At the end of the organizational chain, however, where voluntary participants were burdened with the specialist task of (ostensibly) inflicting harm on the learner, Milgram’s first idea that they engage in a direct physical assault ensured an undeniable connection between cause and effect. For the participants in such an experiment, the compartmentalization inherent in the division of labor could not protect them from knowing—in both concept and perceptual reality—about their harmful actions. For participants who might inflict this assault, responsibility clarity, not ambiguity, awaited them. Achievement of Milgram’s preconceived goal therefore necessitated the inclusion of something sufficiently capable of separating cause from effect. The solution came when Milgram, working (as he frequently did) from the top down of his (loosely) hierarchical organizational process (the first link, Milgram, instructed the second link, the experimenter, to pressure the final link, the participant, into harming the learner), introduced the emotionally distancing and inherently strain resolving “shock” machine (along with his subsequent idea to separate participants from the learner with a translucent screen or, even better, a solid wall). It is important to note that the idea to introduce a wall was, as just mentioned, actually initiated by bottom-up forces within the Obedience study’s wider organizational process: Some participants during the student-run pilots looked away from the person they were “harming.” Interestingly, of their own accord, these participants were effectively inventing and then applying their own strain resolving means of better ensuring they could do as they were told. Participants, however, were not the only low-ranking innovators: As Volume 1 shows, the experimenter quickly sensed what his boss Milgram likely desired and, in the hope of maximizing the completion rate, he (Williams) started inventing his own highly coercive binding prods.

Nonetheless, once Milgram introduced the combination of the strain resolving, non-human technologies of the shock machine and the wall into the Truly Remote Pilot, participants became physically and emotionally disconnected from their victim, and suddenly a strong dose of responsibility ambiguity became available to them (a more ambiguous situation that would not have been possible in the absence of these non-human technologies). In fact, as I argued in Volume 1, no other combination of variables could stimulate responsibility ambiguity like the shock generator when combined with the wall—they were the most powerful strain resolving elements in the entire experimental paradigm. 30 And when participants, as they did during the Truly Remote Pilot, could no longer hear, see, or feel the implications of their actions, their specialist contributions to the wider process more closely resembled the unemotional, technocratic, seemingly innocent, and somewhat banal contributions of all the other functionary helpers further up the organizational chain. The combination of these two strain resolving mechanisms effectively injected the indifference into Chester Barnard’s Zone of Indifference, which is where a functionary’s higher “orders for actions” are sufficiently inoffensive to the point that they become “unquestionably acceptable.” 31 With responsibility ambiguity thereby available to every link in the chain—where all functionary helpers felt like, or appeared to be, mere middlemen—it suddenly became much easier to persuade, tempt, and, if necessary, coerce “virtually” everybody involved into performing their specialist harm-contributing roles.

And as responsibility ambiguity structurally pulled every functionary into performing their specialist roles, the coercive force of bureaucratic momentum simultaneously pushed them all into following their “rules and regulations”—whether to avoid criticism for not doing one’s job or to receive whatever personal benefits happened to be associated with goal achievement. An excellent example of bureaucratic momentum was initiated when, to sign up for one of the Obedience research program’s many variations, prospective participants had to select, from Milgram’s preconceived research schedule, one of the 780 available 60-minute slots. 32 And when they arrived at the laboratory to fill their particular slot, participants were then—much like a car frame in Ford’s factory—moved briskly along Milgram’s data-extraction assembly-line process. That is, within the tight one-hour timeframe, various members of Milgram’s team sequentially engaged in specialist tasks that included, among others, training participants, running the experiment, collecting data, and undertaking debriefings. And much like on Ford’s assembly line, because time was limited—another unsuspecting person was due at the top of the hour—all helpers felt the push of the non-human technology of Milgram’s participant-processing schedule to fulfill their specialist roles quickly. 
Participants, located at the last link in this organizational assembly line, were also pushed into performing their specialist task by the force of bureaucratic momentum, more specifically in the form of the experimenter’s seemingly unrelenting prods—“Please continue” and “It is absolutely essential that you continue.” On top of the participant-processing schedule imposing greater “control” over all involved, the schedule’s inherent “calculability” also meant that, somewhat as with Ford, the young psychologist was able to “predict” when data collection would likely end (correctly, as it turned out: at the end of the 1962 spring term). 33

As Milgram drove his organizational machine toward collecting a full set of data, any links in the organizational chain who may have experienced second thoughts about continuing to make their eventually harmful contributions likely found themselves trapped—both pushed and pulled—into fulfilling their specialist roles. This organizational machine rather effectively subverted all helpers’ basic humanity, their personal agency, in favor of Milgram’s preconceived and overarching policy objective: the maximization of “obedience.” Thus, Milgram’s inadvertent construction of and reliance on an inherently problem-solving and formally rational bureaucratic organization was, I believe, an essential structural contributor to his high baseline completion rate. More generally, in my view, some admixture of Weberian formal rationality, Ritzer’s McDonaldization, Luhmann’s sociological systems theory, 34 Russell and Gregory’s responsibility ambiguity, and Bandura’s moral disengagement is likely to lead one to a stronger comprehension of the undeniably complex Obedience studies (see Volume 1).

The key, it would seem, to Milgram’s “success” in achieving his preconceived goal of maximizing obedience was that every functionary link’s contribution across, and especially at the harm-infliction end of, the chain felt (personally) or appeared (to others present) sufficiently banal and innocent, when in reality it was neither. In fact, as the Truly Remote Pilot best illustrated, the more pedestrian and colorless every helper’s piecemeal contributions felt or appeared, the greater all his helpers’ (and his own) violent capabilities became, and thus the higher the completion rate. To maximize participation across the organizational chain, every functionary helper’s contributions needed to be, as was the case during the Truly Remote Pilot, reduced to the “mere” shuffling of paper or pressing of switches—an Arendtian-like banality of evil.

So at the end of this formally rational journey, what Milgram seems to have discovered was how best to socially engineer his preconceived goal from mere concept into disturbing reality. The Obedience studies are therefore a frightening “demonstration of power itself…” 35 —“the inexorable subordination of the less powerful by the powerful.” Edward E. Jones, it would seem, was right all along: the baseline condition was at best a “triumph of social engineering.” 36 As a powerful person with all the social, prestige, and financial capital of a fully funded Yale professor, Milgram achieved his preconceived desires by imposing on the less powerful “a calculated restructuring of the informational and social field.” His organizational process proved more than capable of shunting, among most of those involved, all “moral factors…aside…” 37

Moving to a New Milgram-Holocaust Linkage

In my quest to develop a new and stronger Milgram-Holocaust linkage, the remainder of Volume 2 argues that certain Nazi project managers deployed similar, Milgram-like, formally rational techniques of discovery and organization to reach the same preconceived goal: converting most ordinary people into willing inflictors of harm. As I will show, the project managers who came closest to overarching goal achievement were those who most rationally utilized bureaucratic organizational techniques and, after running numerous pilot studies, went on to discover and then attach to the last link in their organizational chains the most remote non-human harm-inflicting technologies. And the more physically and emotionally remote these harm-inflicting technologies were—the less touching, seeing, and hearing experienced by those at the last of the perpetrator links in the wider organizational chain—the easier it became for the Nazi leadership to persuade, tempt, or, if needed, coerce their subordinates into exterminating other human beings. My justification for undertaking this journey is, as Alex Alvarez notes about the Holocaust, that although “We know the history…our understanding of the means by which participants overcame normative obstacles to genocide is lacking.” 38

As the reader will discover, Volume 2 centers mostly on the evolution of the most potent strain resolving means of inflicting harm. In my view, when one pays close attention to the Nazis’ various and eventually preferred means of inflicting harm, the insights gained are, much like with Milgram’s own research, revelatory. This destructive and depressing journey of discovery demonstrates what I believe to be the most important Milgram-Holocaust linkage of all: formal rationality. I thus conclude that the means of inflicting harm at the last link of the Nazis’ inherently problem-solving, malevolent bureaucratic process played a central role in quickly transforming many only moderately antisemitic Germans into willing inflictors of harm.

In terms of what follows, Chapter 2 explains how the Nazi party rose to power and how its destructive ideology spread among the German masses. This chapter delineates the Nazi regime’s “calculated restructuring of the informational and social field”—its promulgation of an “institutional justification” among ordinary and, for the most part, only moderately antisemitic Germans that eventually led to the widespread infliction of harm on Jews and others. Chapter 3 details the Nazi regime’s first forays into the killing of civilian populations, both just before and soon after the start of World War Two. Chapter 4 presents what, in my view, were the SS leadership’s most salient top-down strain resolving and binding forces, used during the invasion of the Soviet Union to encourage their ordinary and only moderately antisemitic underlings to participate in the so-called Holocaust by bullets. 39 Also in relation to the killing fields of the Soviet Union, Chapter 5 details what I suspect were ordinary Germans’ most important bottom-up strain resolving and binding forces. With a particular focus on Operation Reinhard, Chapter 6 explores the rise of what evolved into the large-scale industrial gassing programs in the East. Chapter 7 then delves into the Nazi regime’s final solution to the Jewish Question: the rise and domination of Auschwitz-Birkenau. Chapter 8 clarifies what the Nazis meant by what I show was, before and during World War Two, their ongoing pursuit of a “humane” means of exterminating the Jews and other so-called sub-humans. The concluding chapter provides a brief summary of my thesis along with some thoughts on its potentially wider applicability beyond the Holocaust.

In the following chapters, I regularly make behavioral comparisons between those involved in the Obedience studies and those who perpetrated the Holocaust. Such comparisons may seem incredibly unfair considering that the former was a fake experiment and the latter involved the murder of millions of innocent people. It is important to note that although I think these analogies demonstrate a similarity in kind, I also believe they differ enormously in degree. 40 Some readers may deem these comparisons completely odious, and if so, they will be on firm ground: the industrialized aims of the Holocaust remain unprecedented in human history. This does not mean, however, that analogies cannot be drawn between Milgram’s project and the Nazis’ “Final Solution.” As the Welsh writer Dannie Abse put it when reflecting on the Obedience studies, “in order to demonstrate that subjects may behave like so many Eichmanns the experimenter had to act the part, to some extent, of a Himmler.” 41


  1. See, for example, Bankier (1992, pp. 72, 84), Bauer (2001, p. 31), Browning (1998, p. 200), Heim (2000, p. 320), Johnson and Reuband (2005, p. 284), Kershaw (1983, p. 277, 2008, p. 173), Kulka (2000, p. 277), Merkl (1975), and Mommsen (1986, pp. 98, 116).

  2. Milgram (1974, p. 16).

  3. Milgram (1974, p. 175).

  4. See Bauman (1989), Blass (1993, 1998), Browning (1992, 1998), Hilberg (1980), Kelman and Hamilton (1989), Langerbein (2004), Miller (1986), Russell and Gregory (2005), and Sabini and Silver (1982).

  5. Miller (2004, p. 194).

  6. Miller (2004, p. 196).

  7. Ritzer (2015, p. 30).

  8. Gerth and Mills (1974, pp. 196–204) and Russell (2017).

  9. Ritzer (1996).

  10. Ritzer (2015, p. 37).

  11. Ritzer (2015, p. 120).

  12. Ritzer (2015, p. 128) and Russell (2017).

  13. Quoted in Russell (2009, pp. 64–65).

  14. SMP, Box 153, Audiotape #2301.

  15. Milgram (1974, p. 158).

  16. Milgram (1974, p. 58).

  17. SMP, Box 46, Folder 165.

  18. Quoted in Russell and Gregory (2011, p. 508).

  19. Quoted in Blass (2004, p. 68).

  20. Blass (2004, p. 75).

  21. Milgram (1965, p. 61).

  22. Russell (2017).

  23. Russell (2017).

  24. Milgram (1965, p. 61).

  25. Miller (1986, p. 9).

  26. Milgram (1974, p. 203).

  27. Milgram (1974, pp. 82–83).

  28. SMP, Box 62, Folder 126.

  29. Russell and Gregory (2015, p. 136).

  30. So although the efficacious effect of strain resolving mechanisms and binding factors appears to have been cumulative (the more Milgram added, the higher the completion rate), it is important to note that some of these individual forces were far more powerful than others. And although it was not a sufficient cause of the baseline result, as illustrated in Volume 1, none was singularly more powerful than the shock generator when used in conjunction with a wall.

  31. Barnard (1958, pp. 168–169).

  32. Perry (2012, p. 1).

  33. SMP, Box 43, Folder 127.

  34. See Kühl (2013, 2016).

  35. Stam et al. (1998, p. 173).

  36. Quoted in Parker (2000, p. 112).

  37. Milgram (1974, p. 7).

  38. Alvarez (1997, p. 149).

  39. Desbois (2008).

  40. Any reader unsettled by my comparisons should keep in mind that, as Volume 1 shows, although Milgram’s experiments were a ruse, for participants there was a real-time possibility that the shocks were real (the experimenter could have been, as some participants noted, a rogue mad scientist pursuing an actually harmful experiment). And had the experiments been real, the learner could have been critically injured. Also, although at least one participant informed Milgram early on that his experiments placed the lives of participants with heart problems in great danger (implying that medical screening should thereafter be introduced), Milgram chose to ignore this warning. This negligence really could have cost somebody their life.

  41. Abse (1973, p. 29).



  1. Abse, D. (1973). The dogs of Pavlov. London: Vallentine, Mitchell.
  2. Alvarez, A. (1997). Adjusting to genocide: The techniques of neutralization and the Holocaust. Social Science History, 21(2), 139–178.
  3. Anderson, T. B. (2007). Amazing alphabetical adventures. Auckland, NZ: Random House.
  4. Bankier, D. (1992). The Germans and the final solution: Public opinion under Nazism. Oxford, UK: Blackwell.
  5. Barnard, C. I. (1958). The functions of the executive. Cambridge, MA: Harvard University Press.
  6. Bauer, Y. (2001). Rethinking the Holocaust. New Haven, CT: Yale University Press.
  7. Bauman, Z. (1989). Modernity and the Holocaust. Ithaca, NY: Cornell University Press.
  8. Blass, T. (1993). Psychological perspectives on the perpetrators of the Holocaust: The role of situational pressures, personal dispositions, and their interactions. Holocaust and Genocide Studies, 7(1), 30–50.
  9. Blass, T. (1998). The roots of Stanley Milgram’s obedience experiments and their relevance to the Holocaust. Analyse & Kritik, 20(1), 46–53.
  10. Blass, T. (2004). The man who shocked the world: The life and legacy of Stanley Milgram. New York: Basic Books.
  11. Browning, C. R. (1992). Ordinary men: Reserve Police Battalion 101 and the final solution in Poland. New York: HarperCollins.
  12. Browning, C. R. (1998). Ordinary men: Reserve Police Battalion 101 and the final solution in Poland. New York: Harper Perennial.
  13. Desbois, F. P. (2008). The Holocaust by bullets: A priest’s journey to uncover the truth behind the murder of 1.5 million Jews. New York: Palgrave Macmillan.
  14. Gerth, H. H., & Mills, C. W. (1974). From Max Weber: Essays in sociology. New York: Oxford University Press.
  15. Heim, S. (2000). The German-Jewish relationship in the diaries of Victor Klemperer. In D. Bankier (Ed.), Probing the depths of German Antisemitism: German society and the persecution of the Jews, 1933–1941 (pp. 312–325). New York: Berghahn Books.
  16. Hilberg, R. (1980). The anatomy of the Holocaust. In H. Friedlander & S. Milton (Eds.), The Holocaust: Ideology, bureaucracy, and genocide (The San José papers) (pp. 85–94). Millwood, NY: Kraus International Publications.
  17. Johnson, E. A., & Reuband, K. H. (2005). What we knew: Terror, mass murder and everyday life in Nazi Germany, an oral history. London: John Murray.
  18. Kelman, H. C., & Hamilton, V. L. (1989). Crimes of obedience: Toward a social psychology of authority and responsibility. New Haven, CT: Yale University Press.
  19. Kershaw, I. (1983). Popular opinion and political dissent in the Third Reich: Bavaria 1933–1945. Oxford, UK: Oxford University Press.
  20. Kershaw, I. (2008). Hitler, the Germans, and the final solution. New Haven, CT: Yale University Press.
  21. Kühl, S. (2013). Organizations: A systems approach. New York: Routledge.
  22. Kühl, S. (2016). Ordinary organizations: Why normal men carried out the Holocaust. Cambridge, UK: Polity Press.
  23. Kulka, O. D. (2000). The German population and the Jews: State of research and new perspectives. In D. Bankier (Ed.), Probing the depths of German Antisemitism: German society and the persecution of the Jews, 1933–1941 (pp. 271–281). New York: Berghahn Books.
  24. Langerbein, H. (2004). Hitler’s death squads: The logic of mass murder. College Station: Texas A&M University Press.
  25. Merkl, P. H. (1975). Political violence under the Swastika. Princeton, NJ: Princeton University Press.
  26. Milgram, S. (1965). Some conditions of obedience and disobedience to authority. Human Relations, 18(1), 57–76.
  27. Milgram, S. (1974). Obedience to authority: An experimental view. New York: Harper and Row.
  28. Miller, A. G. (1986). The obedience experiments: A case study of controversy in social science. New York: Praeger.
  29. Miller, A. G. (2004). What can the Milgram obedience experiments tell us about the Holocaust? Generalizing from the social psychology laboratory. In A. G. Miller (Ed.), The social psychology of good and evil (pp. 193–237). New York: Guilford Press.
  30. Mommsen, H. (1986). The realization of the unthinkable: The ‘final solution of the Jewish question’ in the Third Reich. In G. Hirschfeld (Ed.), The policies of genocide: Jews and Soviet prisoners of war in Nazi Germany (pp. 97–144). London: Allan & Unwin.
  31. Parker, I. (2000). Obedience. Granta: The Magazine of New Writing, 71, 99–125.
  32. Perry, G. (2012). Beyond the shock machine: The untold story of the Milgram obedience experiments. Melbourne: Scribe.
  33. Ritzer, G. (1996). The McDonaldization of society: An investigation into the changing character of contemporary social life (Rev. ed.). Thousand Oaks, CA: Pine Forge Press.
  34. Ritzer, G. (2015). The McDonaldization of society (8th ed.). Los Angeles, CA: Sage.
  35. Russell, N. J. C. (2009). Stanley Milgram’s obedience to authority experiments: Towards an understanding of their relevance in explaining aspects of the Nazi Holocaust. Unpublished doctoral thesis, Victoria University of Wellington, New Zealand.
  36. Russell, N. J. C. (2017). An important Milgram-Holocaust linkage: Formal rationality. Canadian Journal of Sociology (Online), 42(3), 261–292.
  37. Russell, N. J. C., & Gregory, R. J. (2005). Making the undoable doable: Milgram, the Holocaust and modern government. American Review of Public Administration, 35(4), 327–349.
  38. Russell, N. J. C., & Gregory, R. J. (2011). Spinning an organizational “web of obligation”? Moral choice in Stanley Milgram’s “obedience” experiments. The American Review of Public Administration, 41(5), 495–518.
  39. Russell, N. J. C., & Gregory, R. J. (2015). The Milgram-Holocaust linkage: Challenging the present consensus. State Crime Journal, 4(2), 128–153.
  40. Sabini, J., & Silver, M. (1982). Moralities of everyday life. New York: Oxford University Press.
  41. Schleunes, K. A. (1970). The twisted road to Auschwitz: Nazi policy toward German Jews, 1933–1939. Chicago: University of Illinois Press.
  42. Stam, H. J., Lubek, I., & Radtke, H. L. (1998). Repopulating social psychology texts: Disembodied “subjects” and embodied subjectivity. In B. M. Bayer & J. Shotter (Eds.), Reconstructing the psychological subject: Bodies, practices, and technologies (pp. 153–186). London: Sage.

Copyright information

© The Author(s) 2019

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Authors and Affiliations

  • Nestar Russell, University of Calgary, Calgary, Canada