At the end of Part I, I discussed another aspect of advancement: the willingness to take a risk in order to increase the likelihood of making considerable progress, but also of coming up empty-handed. Through the Silicon Valley Model, for example, industrial players have been able to incorporate risk taking and the acceptance of failure into their business models. It is not only in the tech industry that we see the repercussions. In the art world, too, risk is a sticky thing and requires systemic support to bolster change (Becker 1984, chapter 4). Consider the emergence of Cubism in Paris around 1910, for instance: “As the costs of experimentation were suddenly reduced and dealers began to assume the risk of failure, the preconditions were created for the pursuit of art that was not simply different, but radically so” (Sgourev 2013).

In fact, there is a broad consensus in the literature that risk plays a key role in change and innovation processes across many disciplines (Rosenberg 1983, p. 289; Csikszentmihályi 1996, pp. 257–258; Bonvillian 2014; BUND 2012; Burt 2004; Graßhoff 2008; Obstfeld et al. 2014; Sgourev 2013, 2015).

Science, however, has remained more conservative. One reason is the enigmatic nature of scientific value; the arrival of a new idea or product does not have the same power to drive science as it does innovation in the marketplace. Pressure to perform normal science therefore comes from personal initiative or from the structure of the system. Only those with a secure yet peripheral position (with respect to the paradigm) can be creative and afford to take risks. In contrast to industry, the number of these individuals is limited in scientific research; young scientists (and academics in general) are actually advised to limit risk wherever possible. Yet science, too, despite its propensity for conservatism, has been shaped by the positive and negative consequences of risk taking. As Max Planck stated in 1913 in reference to the work of Albert Einstein, “it is…not possible to implement novelty in the exact natural sciences without daring to take a single risk” (Planck et al. 1913). Haber took a risk when he began research on ammonia synthesis in 1903, and again when he continued to work toward industrial upscaling in the face of contemporaneous opposition. It seems we would be well advised to cultivate (and possibly increase) the acceptance of high-risk endeavors in science, because the Haze is not an enduring entity. Vannevar Bush noticed this aspect in 1945: “…there is a perverse law governing research: under the pressure for immediate results, and unless deliberate policies are set up to guard against this, applied research invariably drives out pure” (Bush 1945b, p. xxvi). And what is basic research if not the acceptance of the risk of finding something of little or no scientific value?

A new structure of our research environments—here I broaden the term research to include not just the natural sciences but all of academic research—is needed if risk taking is to be accepted and promoted. The problem is that rewarding risk in research is not an easy task, for what is the real value of an undertaking that has “failed”?

Today, there is a particular problem-solving approach that has received much attention without corresponding opportunity (Bromham et al. 2016). It is the interdisciplinary approach (with transdisciplinarity afforded no better standing) (Scholz 2001, chapter 15). The lack of acceptance of projects filed under this rubric has been attributed to, among other reasons, the inability of a single review panel to properly assess their significance, or to the perception that such proposals are too high risk. Currently, there certainly are risks of failure associated with an interdisciplinary research approach. However, I argue this concern is less an inherent weakness of interdisciplinary work itself and more a consequence of lacking a proper strategy and an understanding of how interdisciplinary collaboration differs from “traditional” research. For perspective, we can tie together several aspects of the Haze. In the introduction, I touched on the use of terminology and how important it is to use language and concepts in a way that remains true to their definition. Communication is straightforward if there is a common lexicon, which, almost by definition, is not the case in interdisciplinary work. What is required, then, to surmount this boundary? An initial strategy seems to require nothing more than the obvious, yet often neglected, factors stated at the end of Chap. 18: motivation to work together, access to one another, and a working environment that promotes collaboration. To these, one more attribute must be added: patience. It takes time to develop a shared lexicon, but it is entirely possible. If communication difficulties arise between experienced professionals, it is not because one side is too dumb, but rather that they do not understand each other. Almost everyone is a layman outside their own area of expertise, making collaboration a two-way, yet uneven, street.
Everyone exposed to an interdisciplinary working environment will eventually find themselves in the position of the outsider, or of the failure. This is the risk, and it should be viewed as something of vast potential instead of an irreparable defect.

Case in point: this book would not exist if not for a shared mindset toward risk taking and patient, supportive interdisciplinary collaboration from my colleagues.