1 Introduction

Although there is a great deal of literature on the role of scientists, especially Fritz Haber, in the development of chemical warfare during the First World War, relatively little has appeared on the role of the chemical industries and their collaboration with the military authorities. Chemical warfare was admittedly only a part, in some ways a relatively insignificant part, of the wartime activities of academic and industrial chemists. Yet in the popular mind it rightly looms much larger. For Germany’s introduction of poisonous gas to the battlefield clearly violated the spirit of the 1899 and 1907 Hague Conventions, thereby introducing an era of literally unconventional warfare. Wilfred Owen’s well-known poem “Dulce et Decorum Est” created an unforgettable image of the horrifying effect of gas on an unprotected soldier, which was also used to great effect in the commemorative ceremonial concert concluding the symposium at which the present chapter was presented. Images such as these highlight the role of chemical warfare in making the Great War increasingly “total,” in the process blackening the image of chemistry, especially the German variety. This chapter examines this transformation after tracing the development of interactions between the chemical industry and the military (as well as the interactions of both of these with academically trained experts) in Germany, France, Britain, and later the United States, as a special case of the broader technological meta-system created by these opposing national systems on the Western Front from 1914 to 1918.

2 The Western Front as a Technological Meta-System

The Western Front can be viewed as a large technological system, or rather a meta-system composed of several interacting national systems, within which military, industrial, and academic subsystems interacted in various ways. The idea of studying the chemical industry as a large technological system in a wartime setting goes back to Thomas Parke Hughes. Hughes conceptualized the development of high-pressure hydrogenation processes during and after the First World War as a case of “technological momentum,” whereby a large-scale technological system tends to grow by maximizing existing productive capacities (when necessary, by adapting them to new uses) and by applying the experience of its scientists and engineers with previously successful approaches to the development of new products and productive capacities (Hughes 1969, 111–112). This pattern of growth is thus creative but also conservative, making fundamental changes in direction only when influenced by external forces. In a later full-length study of the electrical industry, 1880–1930, Hughes developed valuable conceptual insights and a comparative regional approach, but with little attention to the First World War, whose impact on the electrical industry was far less than on the chemical industry. Hughes identified three sorts of technological systems: a purely technical one, a technical-institutional one, and a more “loosely structured system” whose components interconnect and interact, but which is “neither centrally-controlled nor directed toward a clearly defined goal” (Hughes 1983, 6). The present chapter treats this latter type as a meta-system and examines the development of chemical warfare as part of the larger meta-system of the Western Front—in which three and ultimately four large-scale systems operated and interacted in response to each other’s initiatives: that of the Germans on the east side of No Man’s Land, and those of the Allies to the west—the French, British, and finally the Americans—which cooperated increasingly well yet never grew into a seamlessly operating single system. The growth and interactions of these systems thus shaped chemical warfare during the First World War.

From a military-economic perspective, the war was an extension of prewar competition among technological systems originating during the Second Industrial Revolution, in which since the 1860s large firms had utilized systematic innovation by teams of academically trained chemists, physicists, and engineers to carve out oligopolistic positions in the world market. With the advent of war, the opposing systems carried on—with apologies to Clausewitz—a kind of “economic competition by other means.” That is, the war refashioned the process of systematic technological innovation, shifting it to military settings, whereby the oligopolistic opponents now stood on opposite sides of No Man’s Land, and it was on the front rather than in the marketplace that product testing took place. Depending upon results in the “battlefield marketplace,” each side might expand its production, modify its product, or imitate or improve upon competing products of the opposition. Success could thus depend upon the ability of each system to function effectively as an innovative system, potentially on a very large scale, in a manner not very different from the process of peacetime competition, albeit without regard to questions of intellectual property, at least for the duration of the war.

From a systems perspective, the war on the Western Front was actually two successive wars, which one could call the “Great War” and the “Total War” (Chickering and Förster 2000). Each of these involved a type of mobilization; the first, beginning on both sides in August 1914, was limited and based mainly on prewar structures and capacities, in proportion to the degree of technological momentum in the peacetime systems. New problems and constraints posed in particular by the advent of static trench warfare at the end of 1914 created growing pressures that led to a “second mobilization” in each nation, representing a much more “total” and more innovative utilization of their resources and marking the advent of a wartime system with its own technological momentum. The timing of these second mobilizations depended in part upon a variety of unanticipated developments such as the continuing German occupation of a significant part of the industrial region in northwest France, as well as the inadequacy of prewar tactics and equipment for achieving breakthroughs. Thus by the spring of 1915, pressures toward a second mobilization already existed on the Allied side, marked in the British case by the creation of the Ministry of Munitions in May 1915, which took over from the War Office the coordination of national production, began systematically to mobilize scientific and technical expertise, and notably departed from the British tradition of free enterprise by supervising the construction of a series of National Factories for munitions production, initially intended only to cover wartime needs (Simmonds 2012, 67–96). The French, reeling from the loss of industrial capacity to the German invasion, had already begun the task of remobilizing their economy for war in the fall of 1914, but a major transition also came in May 1915 with the appointment of Albert Thomas as Under-Secretary of State for Artillery and Munitions (Hardach 1992). It can hardly be coincidental that these major innovations as well as several others occurred shortly after the first German attacks with chlorine gas at Ypres on April 22, which represented a move toward total war in two senses, not only toward unconventional warfare but also a greater mobilization of the chemical industry, which was now beginning to discover the possibility of “dual-use” chemicals. Because this initially entailed only the adaptation of existing capacities and products to wartime uses, it did not yet represent a full second mobilization. That finally came with the Hindenburg Program of September 1916, in response to the British-French offensive on the Somme in summer 1916, which had been made possible by the Allied innovations since spring 1915 (Herwig 1997, 259–266).

The advent and development of chemical warfare could be said to constitute a special case of this broader systemic interaction and the “totalizing” process it produced, which began to have a major impact on the conduct of the war in 1916 and was still gaining momentum on the Allied side—especially with the addition of the American system and its thorough-going mobilization beginning in 1917 (Steen 2014, 75–112)—when the German system collapsed in November 1918, in part because its limited resources could not sustain a total mobilization (Herwig 1997, 440–450).

3 Chemical Weapons as an Illustrative Case

Chemistry in 1914 was already a highly industrialized science, marked by well-developed, institutionalized academic-industrial relations. Primarily because of the development of high explosives based on organic compounds during the late nineteenth century, there also existed elements of a prewar military-industrial symbiosis, albeit on a relatively small scale. All of the major countries had relatively small testing facilities in their arsenals, and all had contracts with civilian companies to produce munitions and other items that could not be produced in the arsenals themselves. Far more significant would be the prewar academic-industrial relationships that had emerged outside the military system. These relationships could to some extent be carried over, or at least serve as a model for the developing military-industrial and academic system during the war. Thus a potentially decisive military advantage for the Germans, albeit unrecognized before the war, was the highly innovative academic-industrial symbiosis developed by their coal-tar dye industry (Johnson 2000, 15–23). A half-dozen large, research-intensive firms, organized in two oligopolistic alliances, had obtained a quasi-monopoly amounting to almost ninety percent of world synthetic dye production. Nearly all of the dye factories or sales outlets in Britain, France, Russia, and the United States were actually German, using German chemists and mostly German-made chemicals for key processes. With the outbreak of war, the Germans would find themselves with a “chemical weapon”—thousands of research-trained, technically-experienced industrial chemists—which the Allied system would find it very difficult to match until the Americans began systematically mobilizing chemists for war work in 1918 (MacLeod 1998; Steen 2014, 96–97). Following the logic of technological momentum, most wartime innovations redirected known technologies in novel ways. This however gave a decided advantage to systems including large, well-established firms with longstanding traditions of technical expertise and good connections to academic institutions—precisely the characteristics of the German dye industry, which would thus find itself especially suited for the chemical war.

Moreover, by August 1914 the concept of “dual use,” common today in discussions of chemical warfare, was already inherent in the nature of the chemical industry, especially in regard to synthetic organic chemicals. It was easy to modify chemical production processes so that with slight variations in raw materials, reagents, intermediates, and operating conditions, one could produce a wide variety of different final products for a wide variety of purposes, some of which could be military. At the outset of the war there were already three categories of products that included examples of what one might characterize, using a type of later American military jargon, as the “three D’s” of unplanned dual use: disinfectants (chlorine), dyes (phosgene), and drugs (arsenicals) (cf. Haber 1986, 15–16, 21, 159). Chlorine had long been used for disinfecting municipal water supplies, among other things; phosgene (a deadly compound of carbon monoxide and chlorine) was an intermediate in the coal-tar dye industry, used for producing several different dyes; and most recently the dye corporation Farbwerke vorm. Meister Lucius and Brüning—Höchst (henceforth, Höchst) had begun to market organic arsenical compounds (developed in collaboration with the 1908 Nobel laureate for medicine Paul Ehrlich) as the first effective drugs for treating syphilis. Dual use in these cases was unplanned, because none of these products had originally been intended for military purposes. But the experience and expertise gained from systematic innovation in these fields—especially dyes and drugs, for which the largest firms together had about a thousand chemists in 1914, synthesizing and testing thousands of potential products—could easily be redirected. The Farbenfabriken vorm. Bayer-Leverkusen (henceforth: Bayer) and Höchst in particular had been working with synthetic drugs for decades, and in the process they had developed medical testing facilities and collaborative relationships with physicians to test the physiological effects of their compounds. In the pharmaceutical industry as such, there were also several larger firms such as Merck-Darmstadt that had developed similar combinations of chemical and medical expertise (Baumann 2011, 36–194). Thus the basic structure of the system was already in place in 1914, especially in the German context. The Allies’ chemical industries, with less diverse product assortments (especially for organic chemicals) and less intensive processes of innovation, were less suitable for adaptation to chemical warfare; the Allies would thus require more fundamental changes in their prewar industrial systems. Nevertheless even on the Allied side there were possibilities for dual use; for example, both the British and French explosives industries produced picric acid as a high explosive; combined with chlorine, this would produce chloropicrin, which could be used as a chemical agent. Moreover, Allied producers of chlorine, for example for bleach, could also (in principle) fairly easily produce phosgene gas from chlorine and carbon monoxide, which required no organic-chemical expertise. In practice, however, inexperience and incompetence led to delays and inefficiencies, especially on the British side (Haber 1986, 83–86, 162–163).

4 Industrial Mobilization for Chemical Warfare: The Experimental Phase, 1914–15

At the outset of what was widely expected to be a short war, the German dye firms by and large did not expect to supply the military with much besides dyes for uniforms. They did produce some nitrates and nitrated products for dye manufacture, selling their surplus to the explosives industry (nitrotoluene and dinitrotoluene could be used to produce the high explosive TNT, or trinitrotoluene), but they lacked the safeguards in their plants that were required by insurance regulations for producing the actual explosives. For this reason the leading German dye companies in August 1914 rejected appeals by the Prussian War Ministry to produce explosives, though they did agree to produce nitrates as raw materials for explosives. As with the toluene products, these were not explosives as such but products for the explosives industry, and thus not fundamentally different from what the dye companies had already been doing before the war (Johnson 2006, 4–8).

Instead, chemical weapons became the bridge from peacetime production patterns to the “weaponizing” of the dye chemical industry. This came about because of excess capacity in the dyeworks, brought about first by the German embargo on the export of dyes imposed at the outbreak of the war and then by the tightening of the British blockade on the Central Powers, which cut the dye companies off from most of their global markets and forced them to consider other ways to use their idle facilities. Despite the induction of a large proportion of their staffs into the military, they wanted to use their remaining staff and facilities to produce something of value. Discovering the logic of dual use, as early as October 1914 both Carl Duisberg at Bayer (working with the physical chemist Walther Nernst as part of a secret military commission that followed up on unsuccessful secret prewar military experiments with ideas such as aerial phosgene bombs) and Albrecht Schmidt at Höchst (who had also tried to sell a chemical fog generator to the Imperial Navy before the war) began experimenting, initially with non-lethal irritants that would not violate the Hague conventions but that, when packed into artillery shrapnel shells, could serve to drive enemy troops out of protected shelters such as cellars in buildings where the shrapnel alone could not reach them—at this point trenches were not yet the issue (Johnson 2003, 92–99; Baumann 2011, 195–271). It was relatively easy for the Germans to test these agents, as the process of synthesizing and testing such mainly organic compounds required no significant modification of their existing system. The Allies had greater difficulties, despite some prewar experimentation with chemical weapons. The French had actually entered the war with limited quantities of non-lethal irritants packed in rifle grenades (Lepick 1998, 54–56). Although the early initiatives on each side had no significant impact on the early months of the war, the advent of trench warfare at the end of 1914 and the ensuing military stalemate fundamentally transformed the situation. The new German high commander Erich von Falkenhayn now demanded more lethal chemical agents, initiating the process of escalation that would lead to the emergence of the new system on both sides (Szöllösi-Janze 1998, 324; Johnson 2003, 94; Baumann 2011, 312–313, 738–739).

5 Scaling up, Innovation and Integration, 1915–17

The German introduction of chlorine cloud attacks at the Second Battle of Ypres in April 1915 was both the catalyst for the development of the new military-industrial-academic system on all sides and—given the war’s ultimate emphasis on delivery by artillery—a false start, albeit an inevitable one. After all, Fritz Haber (who had from the beginning of the war put his expertise and that of his Kaiser Wilhelm Institute (KWI) for Physical Chemistry and Electrochemistry at the service of the military) had originally proposed the chlorine cloud because of the shortage of shell casings and propellants—chlorine clouds would not require artillery shells—as well as the relative abundance of domestically produced chlorine and not least the German efforts to remain at least technically within the Hague conventions (Haber 1924, 76–77, 87; Haber 1986, 27, 41–42). A decisive shift did not come until 1916, when the Germans began using a toxic agent in artillery shells in response to a similar initiative by the French at Verdun. It is worth noting, however, that the Germans used diphosgene, somewhat less toxic than phosgene as such, apparently chosen because the chemical companies producing it found it easier and less dangerous to produce and load into shells (Haber 1986, 86). Thus although Haber’s KWI became militarized in 1916 and substantially expanded its staff, the German chemical-warfare system in this period still largely depended upon the expertise, capabilities, and initiatives of its private industrial component, which had redirected its dye and pharmaceutical laboratories to systematically synthesize and test hundreds of potentially lethal compounds. And it was private industry that in 1916 established a loose “community of interests” (Interessen-Gemeinschaft, IG) encompassing all eight principal dye manufacturers in order to minimize internal competition while fostering the exchange of technical expertise and experience for war work and an expected “war after the war” with the rapidly growing and diversifying Allied systems (Abelshauser 2004, 171–173).

The British, whose domestic production of chlorine was about a tenth that of the Germans, initially chose to respond in kind to the German initiative, albeit after a characteristic delay of several months needed to produce just three-quarters of the amount their generals ordered for the unsuccessful chlorine cloud attacks at the Battle of Loos in September 1915. The British military authorities had been so unfamiliar with their own nation’s chemical industry that it had taken them two months to find a suitable supplier (Haber 1986, 150, 162; Palazzo 2000, 62–63). Ultimately the British developed a considerable productive capacity for chemical agents, they established an effective testing range at Porton in 1916, and they produced an outstanding gas mask in the small box respirator of 1917; but they remained chiefly dependent on the French for phosgene, and their system too long remained decentralized and poorly integrated, with weak communication between its academic, business, national-factory, and military components (Haber 1986, 144–147, 162–170). These weaknesses resulted in such errors as developing cyanide compounds in 1916 to counter expected use by the Germans, who had already rejected them as ineffective (Girard 2008, 105–209). The British may here have been following the French, who relied heavily on cyanide products, but in general communication with the French was weak until the British sent a formal liaison officer to Paris in August 1916 (Lepick 1998, 118). Perhaps this was not surprising, as the British had consigned gas offensives to a peripheral, harassing role, mainly using clouds (and by 1917 drums of phosgene fired at close range from Livens projectors), whereas the French took a very different approach.

The French response had begun from an even weaker position than the British. Having lost a significant part of their chemical industry when the Germans occupied the northwestern part of the country, they were forced to commit massive resources toward reconstructing lost plants and establishing new ones, just to meet the requirements of the war economy in general. Moreover, for many critical substances (such as chlorine and bromine) they had been dependent on German imports. Responding to the German chemical warfare initiative would thus require fundamental changes to the French system, which they initiated immediately following the Ypres attacks. By July 1915 they had created a central Service du matériel chimique de guerre for the overall coordination of the key academic, industrial, and military functions, including a research and development section for gas offense (under the chemist Charles Moureu) and defense, a technical and industrial section to expand production and create new factories as needed, and a logistical section, all in Paris. The city’s many laboratories allowed the French to effectively utilize their limited stock of technical expertise in chemical warfare work. The chemical service became part of the Under-Secretariat for Artillery under Albert Thomas, and from December 1916 of the new Ministry of Armaments (Lepick 1998, 109–110; also Lepick chapter, this volume). The connection to artillery reflected a tactical choice against cloud gas attacks, a logical choice because French domestic production of chlorine was insignificant. But because most chemical alternatives that the chemists initially tested (culminating in phosgene) also contained chlorine, and because they could obtain only limited amounts from the British, it was still necessary to construct a series of new chlorine plants beginning in August 1915, reaching a total of ten by 1917. Moreover, although the French could use gas shells in their rapid-firing 75-mm field gun, probably the best weapon of its kind in the war, in 1915 they still lacked heavy artillery with its higher-capacity shells. Thus to use gas effectively they first had to accumulate large quantities of 75-mm phosgene shells, which they did not begin firing until a critical situation arose with the German offensive against Verdun beginning in February 1916. In doing so they took the Germans by surprise, however, achieving the first effective Allied initiative of the chemical war (Lepick 1998, 113–119; Lepick, in this volume).

Because the ensuing German production of chemical shells was still only a tiny percentage of overall shell production in 1916, its impact on the war was still insignificant, and as yet the German chemical industry could meet demand by modifying existing plant (Johnson 2006, 12; Haber 1986, 157–159). Nevertheless, broader developments changed the balance between conventional explosives and chemical shells on the German side; in response to the Allied Somme campaign, the Hindenburg munitions program of September 1916 called for a massive increase in production of propellants and shells, but limited resources for high explosives production meant that the Germans would have to increase their dependence on chemical shells (Herwig 1997, 259–266; Johnson 2006, 13–14; Haber 1986, 260–261). Moreover, the increasing effectiveness of Allied gas defense would force the Germans to introduce new offensive weapons in 1917–1918, which would bring about the culmination of the chemical war—and help precipitate the German collapse (Haber 1986, 226–229, 275).

6 Culmination of the Chemical War, 1917–1918

The chemical war began to reach its culmination in mid-1917, when innovations on all sides, along with the addition of the United States to the Allied side, further magnified the “totalizing” tendencies that had begun to take effect in the previous year. These also produced significant institutional changes, further integration of the chemical war into the broader war effort, and the introduction of several new types of chemical agents with novel, increasingly insidious properties. These were products of the increasingly sophisticated research facilities and increasingly close academic-industrial-military collaboration that had begun to develop in the previous period. It now appeared that chemical warfare would be fully institutionalized on all sides.

In Germany Haber’s KWI expanded into many of the other institutes in Dahlem, as well as several other institutions around Berlin, while mobilizing scientists from all over Germany to become a multifunctional center for all aspects of chemical warfare under Haber’s department A10 in the War Ministry (see the chapters by Bretislav Friedrich and Jeremiah James and by Margit Szöllösi-Janze in this volume). By mid-1917 Haber saw arsenicals and sulfur compounds as the key to a new German chemical offensive, but he cautioned the High Command that Germany must win the war within a year. Once the Allies could produce the same weapons, Germany’s situation would become “hopeless” (cited in Szöllösi-Janze 1998, 332). Elaborating on French approaches, the Germans had developed artillery tactics using a variety of chemical shell types, which they called Buntschiessen—varicolored shooting—after the spectrum of chemical shells designated by colored crosses. Blue Cross arsenicals, introduced in July 1917, were intended to penetrate the Allied mask filters and cause so much irritation that soldiers would be unable to keep on their masks, thus making them vulnerable to the toxic Green Cross (diphosgene) that the masks could otherwise block. The artillerists welcomed this theoretically effective weapon, and the IG produced 8,000 tons in 1917–1918, but the KWI’s scientists had not solved the practical problem of achieving a fine enough particle size, so that Blue Cross caused relatively few Allied casualties. The Allies responded with their own arsenicals in 1918, but their system produced scarcely a hundred tons by the Armistice (Haber 1986, 261–269).

In contrast to the arsenicals, the Germans’ near-simultaneous introduction of Yellow Cross shells had an immediate, dramatic effect. Yellow Cross contained the sulfur compound and strong vesicant (blister agent) bis(2-chloroethyl) sulfide, misleadingly known as mustard gas. In fact, it was not a gas but an aerosol, nor did it have any chemical relation to mustard. This oily liquid, with its delayed action and persistence (adhering to surfaces and remaining potent for days), temporarily burned and blinded thousands, producing the dramatic image captured in John Singer Sargent’s classic painting Gassed. The substance was difficult and dangerous to produce and fill into shells; thus the French could not counter with their own version, ypérite, until June 1918, and it took the British (and Americans) until September to begin mass production. As Haber had warned in 1917, Germany’s failure to win the war with Yellow Cross before the Allies produced it themselves now ensured German defeat, because in Allied hands the weapon destabilized the German system even more drastically. Although the IG had solved the production problems and the military had built new depots at Adlershof and Breloh for filling the shells, with their limited resources the Germans could not solve the decontamination problem posed by mustard gas (Haber 1986, 189–190, 265). The “only effective counter-measures” such as rubberized protective suits or even simply replacing contaminated uniforms were “practically infeasible” according to Haber (Haber 1924, 38). The Germans had no effective defense once the Allies achieved large-scale production of mustard gas shells. Moreover, even though in 1918 more than a quarter of German shells contained chemical agents, a significantly higher proportion than on the Allied side, this reflected German limitations in producing high explosives as much as their ability to produce chemical weapons. By late 1918, total Allied shell production (all types) was twice that of the Germans, and the American mobilization was about to eliminate even the German advantage in chemical warfare (Johnson 2006, 15–16).

British concerns about the German innovations of mid-1917 finally brought about the long-delayed centralization of their system with the founding, within the Ministry of Munitions, of the Chemical Warfare Department in October. Its able leader, General Henry F. Thuillier, finally began to coordinate the academic research, industrial production, and military testing efforts, yet even his influence could not prevent serious delays in mustard gas development, as previously noted. But by late 1918 the British had overcome these problems and had finally integrated chemical weapons into their artillery, finding (like the Germans and French) that mustard gas made an excellent counter-battery weapon against enemy artillery during the final offensives of the war. Nevertheless the British produced fewer than half as many gas shells as their French allies, and both together reached only two-thirds of the German total (Haber 1986, 147–149, 260–261; Palazzo 2000, 173–186).

There was now greater coherence within the Allied system as a whole, particularly after inter-Allied coordination of chemical supplies began in spring 1918, and its overall scale dramatically increased as the Americans began to mobilize. Whereas a “quasi-mobilization” in 1914 by private companies like DuPont to produce explosives for the Allies had developed that side of the system, an American military-industrial system for chemical warfare remained to be created in 1917. Here the government (the U.S. Bureau of Mines) and the military necessarily took the lead in gathering information from the British and French, coordinating academic research, and recruiting several thousand chemists for war work. Although direct experience was lacking, ambition was not; Americans planned for war on a large scale. The same was true on the production side, where after unsatisfactory dealings with private suppliers, the Army Ordnance Department constructed a large military chemical complex, Edgewood Arsenal in Maryland. Edgewood came under the newly organized Chemical Warfare Service in June 1918, and plants for the principal war gases and for filling shells came on line in summer 1918. Capacities continued to increase, supplemented by additional production contracts elsewhere. At the Armistice the American system could produce 900 tons of mustard gas monthly, but the planned capacity by January 1919 was 4,000 tons—vs. a German output of less than 8,000 tons for the entire war (Steen 2014, 98–110; Haber 1986, 149, 261).

By the time of the Armistice in November 1918, the German system was collapsing in mutiny and revolution, while the Allied system was still gaining momentum. Had the German military fought on into the spring, the Allies were prepared to launch massive attacks using chemical weapons, including aerial bombardments with tons of mustard gas. The American military believed that they had achieved the ultimate weapon in Lewisite, an arsenical with the vesicant properties of mustard gas, but this did not reach France before the Armistice (Steen 2014, 109–110). The Germans were thus spared the horrors that “total” chemical warfare might have produced in 1919 (Table 1).

Table 1 Innovation in the meta-system of the Western Front: introduction of chemical agents by Germans and Allies, 1914–1918 (based in part on the figure in Martinetz 1996, 98). The most significant innovations are indicated in bold type. I have not listed each of several different types of irritants introduced on both sides in the first two years of the war, nor have I listed all of the less significant war chemicals developed or used in the last two years (for a comprehensive list see Martinetz 1996, 69–71)

7 Concluding Reflections

The chemical war presents an almost ideal case of a technological meta-system, whose dynamics are perhaps best illustrated in Table 1, which highlights the timing and diversity of offensive innovations on both sides. Here one can see the way in which each system responded (or not) to its opponent’s innovations, usually after a considerable delay required to adjust to the production of a new chemical agent or to develop a new form of delivery. One can also see that the German system kept the initiative (with some exceptions) until mid-1918, whereas the Allied system was just reaching its full potential as the war ended. Henceforth chemists confronted a very different world than they had known in 1914, a change reflected in Fig. 1.

Fig. 1

Power of a symbol: “The Spirit of German Science” (Raemaekers 1918, 195). The German scientist shown in this cartoon is clearly based on the organic chemist Adolf von Baeyer (Nobel Prize for Chemistry, 1905), and most of the objects of war shown in his laboratory are products of chemistry, including poison gas, tear shells, a flame thrower, incendiaries, bombs, and poison (cf. Johnson 2011, 99–100)

The war transformed modern chemistry as an international industrial system and as an industrialized scientific discipline. Both became increasingly multi-centered, following the wartime recognition that dual-use chemicals were vital to national security, as indicated by the physical chemist and British general Harold Hartley in his report on the first Allied inspection tour of German chemical factories in January-February 1919: “In the future … every chemical factory must be regarded as a potential arsenal, and other nations cannot … submit to the domination of certain sections of chemical industry which Germany exercised before the war” (Great Britain. Ministry of Munitions 1919, 12). Chemists in the Allied nations thus sought to carry over their wartime gains into peacetime, if possible with government support, which also meant promoting academic research and education as well as academic-industrial collaboration, both consciously emulating and seeking to weaken their German rivals (cf. Steen 2014). Thus aside from the chemical disarmament and reparations provisions of the Treaty of Versailles, leading chemists among the victorious Allies created a postwar global organization that excluded the Germans throughout the 1920s, thus greatly delaying the process of international scientific reconciliation and probably hindering diplomatic efforts to end chemical warfare research and development (MacLeod 2003; Szöllösi-Janze 1998, 590–597; cf. other chapters in this volume).

The paradoxical result was that greater peacetime support for and general interest in chemistry came at the cost of global industrial overcapacities and oversupplies of trained chemists, which subsequent economic crises only exacerbated. Moreover, the war had tarnished the prewar image of chemistry. Thus “The Spirit of German Science,” a cartoon by the Dutch artist Louis Raemaekers (Fig. 1), appeared with the following commentary by the Princeton psychologist J. Mark Baldwin:

The moral revulsion of the world against the Germans is justified by their use of science … They have abolished the distinction between the knight and the brute, between the man and the snake, between pure science and foul practice … To future generations this will damn the German race (Raemaekers 1918, 194).

The primary target here is clearly German chemistry, but the demonization cut both ways, as the Allies had of course replied with many of the same agents. If anything, the war had effectively demonized all chemists, a curse that arguably still haunts the public image of the discipline (cf. Johnson 2011, 99–100).