Much of disaster response management research has focused on describing and explaining various phenomena, such as emergence (Quarantelli et al. 1966; Dynes 1970; David 2006), improvisation (Wachtendorf 2004; Frykmer et al. 2018), and sensemaking (Weick 1988; Combe and Carrington 2015). The research has thus been concerned with how the world works (with respect to disaster management). At the same time, there are numerous books, reports, and guidelines describing how disaster management should be conducted in practice (IASC 2010; Coppola 2011; UNHCR 2015). The knowledge contained in the first type of publication helps us understand why things happen, and it might even allow us to make predictions. The second type of publication contains knowledge about what we should do in certain circumstances.
There are several ways to categorize science. Here we make use of Aken’s (2004, p. 224) distinction between: “…three categories of scientific disciplines: (1) The formal sciences, such as philosophy and mathematics. (2) The explanatory sciences, such as the natural sciences and major sections of the social sciences. (3) The design sciences, such as the engineering sciences, medical science and modern psychotherapy.” The mission of the explanatory sciences corresponds to our description of the first research output described above, that is, it seeks to describe, explain, and possibly predict some type of phenomenon. The mission of a design science, on the other hand, corresponds to the knowledge contained in, for example, guidelines or handbooks, that is, it develops knowledge of how to best achieve goals in a specific professional context.
We acknowledge that this categorization of sciences is coarse and that, in practice, any science most likely contains aspects of both explanatory and design character. Nevertheless, we use this distinction when describing a central claim that we wish to make: that research on emergency and disaster response management is much more developed in terms of its explanatory ambitions than when it comes to design. By this claim, we mean that arguments supporting conclusions of an explanatory nature are generally stronger and more salient than arguments of a design nature. The arguments supporting explanatory claims are most likely found in scientific papers where, for example, theories, models, or constructs are proposed. Partly, this is what good scientific conduct is all about, that is, transparency of method and data, logical consistency, and so on. On the other hand, if the focus is on response research that claims how something should be (done) in order to achieve some kind of goal, it is often hard to clearly understand the arguments supporting such conclusions. There can be several reasons for this. Here we wish to highlight two types of studies in which this might be the case.
The first type involves explanatory studies where the authors overstate the normative importance of the findings. Overstatement is a problem that has been observed in fields such as medicine, where the clinical importance of the results is sometimes exaggerated (Shinohara et al. 2017). Indicative of this problem are studies that investigate some phenomenon of interest based on data from one or a few disasters. The main focus here is on providing typical explanatory claims like “A led to B in a specific disaster.” Alternatively, if the study involves several disasters, one might see claims like “A leads to B, in disasters in general.” However, even if B is something desirable, it is questionable whether one can infer from such results that “you should do A in disaster response (since it leads to B).” For example, there might be better ways of achieving B than A, or A might be so costly that the costs outweigh the benefits of achieving B.
The second type involves studies of a theoretical nature, in which some method or model for how to solve a type of problem in a disaster response setting is suggested. In this case, the arguments supporting the implementation of the method/model in question rely on some basic assumptions from which the method/model is “derived.” An example is the principle of maximization of expected utility in decision situations involving uncertainty. The principle is supported by very strong conceptual arguments (Neumann and Morgenstern 1944), but, as Kahneman and Tversky (1979) demonstrate, it is a poor explanatory model of human decision making. Moreover, although the principle is theoretically appealing, there are often simpler strategies that outperform it (Gigerenzer and Goldstein 1996).
Both types of studies are relevant in disaster response research, but neither of them leads to strong normative arguments, that is, evidence, with respect to what works in practice. Therefore, our ambition is to contribute to the development of response management research by describing how “design knowledge” (knowledge intended to improve professional practice) can be generated in a transparent and logically consistent way. To that end, we draw upon the design science literature in fields where such research is more developed, notably organizational research (Romme 2003), information systems research (Hevner et al. 2004), and management research (Aken 2004).
Developing Design Knowledge
Our focus is on propositional (“know that”), rather than procedural (“know how”), knowledge. Procedural knowledge is essential when implementing various measures to improve disaster response management, but to implement such measures, we first need to conclude that they are likely to work. For that, propositional knowledge is essential. Here, we use the term (propositional) knowledge to mean “justified beliefs.” This is in line with the Society for Risk Analysis Glossary (SRA 2018) and with Aven (2018), Aven and Renn (2019), and Hansson and Aven (2014). The key idea is that knowledge is the same as the most epistemically warranted statements, in other words, “justified beliefs” about nature, humans, physical constructions, and so on. However, the justified beliefs we seek when pursuing design knowledge are not related to how things are but rather to what should be. Put differently, design knowledge is not about understanding disaster response management per se, but about what we should do in order to achieve something that is desirable in this context. Design knowledge is thus normative or prescriptive, rather than descriptive. More precisely, it has the logical form of a so-called “design proposition” (Aken 2004, p. 227): “If you want to achieve O in context C, do something like I.”
The design proposition includes an objective (O), which is something one wants to achieve; a context (C) in which the knowledge is claimed to be applicable; and an intervention (I), which describes what should be done in order to achieve the objective (O) in the specific context (C).
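To make this three-part structure concrete, the following is a minimal illustrative sketch (ours, not part of Aken’s formulation) that represents a design proposition as a simple record with its three components; the class, field names, and example content are hypothetical and chosen only for illustration:

```python
from dataclasses import dataclass


@dataclass
class DesignProposition:
    """Aken-style design proposition:
    'If you want to achieve O in context C, do something like I.'"""
    objective: str     # O: what one wants to achieve
    context: str       # C: where the proposition is claimed to be applicable
    intervention: str  # I: what should be done (often a heuristic rather than an algorithm)


# Hypothetical example, for illustration only
example = DesignProposition(
    objective="faster goal alignment among responding organizations",
    context="multi-organizational emergency response",
    intervention="hold short, structured joint briefings at fixed intervals",
)

print(f"If you want to achieve {example.objective} in {example.context}, "
      f"do something like: {example.intervention}.")
```

The point of the sketch is simply that O, C, and I are distinct components, each of which needs to be made explicit when a proposition is formulated, tested, and refined.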
Design propositions vary greatly in terms of how concrete they are. A proposition may, for example, take the form of an algorithm specifying a very precise method for doing something or, more likely in the context of response management, a heuristic similar to the general proposition given above (that is, one that includes the notion of “…do something like…”). Importantly, design knowledge does not need to be condensed into an algorithm or resemble the statement above. The design proposition reflects the intervention-outcome logic, but the actual description might be contained in, for example, a guideline, book, or instruction video (Aken 2005b). The ongoing construction of a body of design knowledge is a key activity in any professional context. It requires constantly asking questions about how to best achieve purposes relevant to the profession in question, whether curing people of a disease or managing the consequences of an emergency or a disaster. There must be a continuous evaluation of which statements (design propositions) are most justified (epistemically warranted); the process necessarily includes producing new statements and refining old ones. Changes can be justified by either empirical testing or reasoning.
Using Experiments to Support the Development of Design Knowledge
We suggest that controlled experiments become an integral part of the emergency and disaster response management research agenda in order to develop design knowledge in the area of response management. Although various scholars have used experimental settings to examine different aspects of response management (Brehmer 1992; Pramanik et al. 2015; Danielsson 2016; Kalkman et al. 2018), examples are few. Moreover, as far as we know, there have been no efforts to use experiments explicitly to support the development of design knowledge. We argue that controlled experiments have several benefits in the context of emergency and disaster response management research.
We draw upon an analogy with the development of modern medicine to underline our point. Up until the twentieth century, “it was not unusual for a sick person to be better off if there was no physician available because letting an illness take its natural course was less dangerous than what a physician would inflict. And treatments seldom got better, no matter how much time passed” (Tetlock and Gardner 2015, p. 28). Many treatments were available, and occasionally they were changed, giving the impression that they had improved; however, in most cases, they did not have any effect at all. The breakthrough that transformed medicine from a practice-based craft into a research-based discipline was the scientization of the field (Aken 2005a). The key was an increased use of controlled experiments to test and evaluate treatments (design propositions), and the accumulation of general design knowledge that could be taught to new students and practitioners. Although medicine and disaster response management are different, the use of controlled experiments to develop design knowledge should be equally important in the two fields.
Although experiments have certainly been paramount to the development of modern medicine, we must acknowledge the limitations of the experimental approach to generating design knowledge in a context such as emergency and disaster response management. One obvious problem is that it is impossible to control these adverse events and, therefore, it is difficult to study the effects of an intervention, even if data are collected from several events. There are so-called field experiments, in which the effects of different policies are investigated (Falk and Heckman 2009). However, as with antiterrorism interventions (Arce et al. 2011), it is hard to imagine actors conducting experiments during actual events. Therefore, in the remainder of this article, “experiment” refers to an experiment run in the laboratory. One major concern that has been raised regarding such experiments in the present context concerns their inability “to incorporate factors that are crucial to much real-life decision-making” (Eiser et al. 2012, p. 14). Similar concerns have also been raised more generally in other areas of research (Leonard and Donnerstein 1982).
These concerns are part of a wider discussion related to the external validity of experiments, that is, the extent to which results can be generalized to other contexts and, specifically, the extent to which they can be generalized to a real-world context (sometimes referred to as ecological validity). Such concerns are, of course, important in explanatory research, where the aim is to explain and/or predict phenomena relevant to response management. But if the purpose is to support the development of an artefact, the situation is different. In the latter case, the key question is whether the experimental context is a valid model of the practical context that it seeks to represent. As Mäki (2005, p. 306) notes, “[experiments are] mini-worlds that are directly examined in order to indirectly generate information about the uncontrolled maxi-world outside the laboratory.” Thus, whether an experiment is valid should be judged by the extent to which we have reason to believe that the effect of the intervention in the experimental context is correlated with its effect in practice. Even if an experiment has little external (ecological) validity (that is, the experimental context is unlike the practical context and results cannot be generalized), this does not discount the experimental method. From a design perspective, a single experiment could support the development of the artefact by helping to answer questions like, “based on the results of the experiment, do we have reason to believe that intervention I will lead to outcome O in context C?” The answer should then be used as a basis to determine whether, and if so how, the development process should continue.
This highlights the fact that the purpose of an experiment is essential in determining whether it is a valid model. If the purpose is to support a decision early in the development process, the model might be very simple compared to the practical context. On the other hand, if the purpose is to support a decision later in the development process, a more complex model might be warranted. To exemplify and make use of the analogy with medicine: a decision on whether to continue the early development of a new drug (an intervention) might be based on experiments involving mice. Even though we know that it is difficult to generalize from mice to humans (Leenaars et al. 2019), such experiments are still extremely valuable because they support development decisions, that is, whether to continue developing the drug or not. Similarly, if we are developing design propositions in the field of emergency and disaster management, we could use experiments as a basis for design choices, for example, whether it is worth pursuing the development of the intervention of interest or better to focus on something else.
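As a purely illustrative sketch of how such a design choice could be informed by a simple laboratory comparison, consider the following; the outcome measure, group sizes, and data are hypothetical and simulated, and are not taken from any actual study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Hypothetical mapping of a design proposition onto an experiment:
#   independent variable = intervention I (present vs. absent)
#   dependent variable   = a measured proxy for objective O (here, a fictitious score)
control = rng.normal(loc=0.55, scale=0.15, size=30)    # simulated teams without the intervention
treatment = rng.normal(loc=0.65, scale=0.15, size=30)  # simulated teams with the intervention

# Compare the dependent variable across conditions (Welch's t-test)
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
mean_diff = treatment.mean() - control.mean()

print(f"Mean difference (treatment - control): {mean_diff:.3f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# Such a result would only inform a development decision (is I worth pursuing further?),
# not establish that I achieves O in the practical context C.
```

The point is not the particular statistical test but the decision the result feeds into: continue developing the intervention, revise it, or abandon it.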
These ideas can be combined into a model that shows how to relate and integrate explanatory and design research in the field of response management. The model, which builds upon Kuechler and Vaishnavi (2008), is illustrated in Fig. 1.
Both explanatory and design research address what we call the practical context. This is the context in which we would like our design propositions to be valid. Explanatory research can produce statements that suggest a cause and effect relationship in a specific context. For example, the phenomenon of “drift into failure” (Dekker and Pruchnicki 2014) explains why disasters happen in high-risk industries: pressures of scarcity and competition lead to the normalization of signals of danger, thereby eroding safety margins and eventually leading to a disaster. Such explanatory statements can then be transformed into a prescriptive statement, or a design proposition, linking the desired effect to an objective, and the cause to some kind of intervention. An example of a prescriptive statement involving an intervention supposedly leading to fewer failures is taken from studies of high reliability organizations: “Continuously communicate rich, real-time information about the health of the system and any anomalies or incidents; this should be accurate, sufficient, unambiguous and properly understood; be aware that juniors are unlikely to speak up” (Denyer et al. 2008, p. 406). However, even if the supporting evidence for an explanatory statement is strong, it might not be sufficient to support the corresponding prescriptive statement. This is because justifying a prescriptive statement requires two elements. First, to support the cause/effect relationship between the intervention and the objective, we need to show that the intervention actually leads to the objective. Second, we require evidence to support the claim that the design proposition in question is the best one available, given our current level of knowledge and the practical context in which it is valid. There may, for example, be other interventions that achieve the same objective, but cost less or have fewer side effects.
This brings us to the experimental context, shown in Fig. 1. In this setting, the independent variable represents the proposed intervention, and the dependent variable represents the objective (what is measured and evaluated). Obviously, the experimental context differs from the practical context from which the proposed cause and effect relationship was derived. The idea is, first, to identify the factors that are needed to test the intervention and evaluate it against the desired objective, and then to replicate these factors in an experimental context that resembles the relevant parts of the practical context. Below, we apply the ideas outlined in this section to a study of goal alignment in emergency and disaster response management.