Letter to the Editor: Burning Down the Silos in a Multidisciplinary Field. Towards Unified Quality Criteria in Human Behaviour in Fire
Research on human behaviour in fire (HBiF) is rooted in fire protection engineering but is multidisciplinary in nature. Conference and journal articles are often authored by experts from various fields such as fire protection engineering, architecture, evacuation modelling, human factors, psychology, traffic engineering, neuroscience, applied mathematics, computer science, sociology, and probably many more. This makes intuitive sense, given that HBiF research is situated at the intersection of the (built) environment, fire, and people. While the promises of such a diverse field are manifold, Kuligowski has warned about the inherent challenge of “silo-ing”: while researchers may be productive within their own disciplines, they risk ignoring work from other disciplines as well as failing to communicate their work beyond their peers. A way forward could lie in the development of a common glossary, similar to recent attempts in related fields.
The lack of a common research vocabulary in HBiF poses a particular challenge for researchers, who often need to assess research quality outside of their own field. Empirical sciences, such as psychology, typically refer to the “objectivity, reliability, and validity” of a method, whereas engineering-oriented disciplines like evacuation modelling require “verification and validation”. The former stems from the dilemma that researchers in HBiF often need to measure variables that are not directly observable (e.g., perceived risk, cognitive biases, or some other psychological construct). These latent variables have to be inferred from manifest observations (e.g., responses in a questionnaire, measurements taken in a lab, video recordings, etc.). The latter stems from the need to adequately define and execute testing procedures. Here, we discuss and compare these approaches, illustrate challenges, and suggest a way forward to burn down some of the silos in HBiF.
Quality Criteria of Research Methods in Empirical and Engineering Sciences
Objectivity refers to the independence of observations from outside influences. For example, an objective intelligence test must be independent of the individual administering the test. Further, an objective test will return similar results regardless of how it is administered. For instance, the results of self-report questionnaires should not differ between administration methods (e.g., online vs. in-person interviews).
Verification determines whether the implementation of a (calculation) method accurately represents the corresponding conceptual description of the method and its solution. For instance, when modelling occupant movement, this refers to the ability of a model to qualitatively produce results which reflect the current knowledge and understanding in HBiF.
Reliability refers to the precision and consistency of a method over time (test–retest reliability) and across observers (inter-rater reliability). For example, a questionnaire measuring a stable personality trait should return comparable results when administered repeatedly. Objectivity is a necessary but not sufficient criterion for reliability.
Validity refers to the degree to which a method measures what it is designed to measure and can be further differentiated into predictive, convergent, construct, internal, external, and ecological validity. Construct validity refers to how well observed data (e.g., scores in an intelligence test) describe a latent variable (e.g., intelligence). Predictive validity describes how well a test (e.g., a questionnaire on behavioural intentions in a hypothetical fire emergency) can predict future behaviour (e.g., actual observed behaviour in a fire emergency). Ecological validity evaluates how well research methods represent the real-world scenario being examined (e.g., how well do effects observed in the lab hold in the real world). This concept is similar to external validity, which describes the degree to which findings from one setting can be generalized to other situations or populations (e.g., can estimates of egress behaviour in ships be applied to buildings?). Validity requires both objectivity and reliability.
Validation determines how well a model represents real-world phenomena and refers to the truthfulness of a model. A valid model will be able to accurately predict the outcome of an evacuation.
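As a minimal sketch of what a quantitative validation check might look like, a model's predicted evacuation times can be compared against measured times from drills. The numbers and the error metric below are illustrative assumptions, not a prescribed procedure:

```python
# Hypothetical validation check: compare a model's predicted total
# evacuation times against measured drill times (all values invented).
predicted = [185.0, 240.0, 310.0]  # seconds, model output
observed = [200.0, 228.0, 305.0]   # seconds, drill measurements

# Mean absolute relative error as one simple validation metric;
# acceptance thresholds would come from the relevant V&V guidance.
errors = [abs(p - o) / o for p, o in zip(predicted, observed)]
mare = sum(errors) / len(errors)
print(f"mean absolute relative error: {mare:.1%}")
```

In practice, validation of evacuation models involves many more quantities (e.g., flow rates, congestion patterns) and uncertainty in the reference data itself; a single aggregate error is only a starting point.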
It is not surprising that the general workflows for generating evidence are comparable across disciplines: typically, a researcher would start out with some data or observations (e.g., “during egress, building occupants tend to be biased towards exits they have previously used to enter a building”), then test falsifiable hypotheses (e.g., “participants in an experiment will be more likely to use a known than an unknown exit”) and formulate theories (e.g., “people are biased towards familiar exits”). Predictions derived from theories can then be formalized in a model (e.g., a statistical test or a computer simulation of evacuation behaviour), and this model can then be used to predict observations in new data sets. Alternatively, hypotheses could be postulated a priori and then tested with dedicated data collection efforts.
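The hypothesis-testing step of this workflow can be sketched with a simple statistical test. The counts below are invented for illustration, assuming a hypothetical experiment in which each participant chooses between a familiar and an unfamiliar exit:

```python
# Hypothetical test of "participants are more likely to use a familiar exit":
# an exact one-sided binomial test against chance (p = 0.5); counts invented.
from math import comb

n = 40          # participants
k = 31          # number who chose the familiar exit
p_chance = 0.5  # null hypothesis: no preference between exits

# P(X >= k) under the null, i.e. the one-sided p-value.
p_value = sum(comb(n, i) * p_chance**n for i in range(k, n + 1))
print(f"p = {p_value:.4f}")  # a small p-value supports the familiarity hypothesis
```

A computer simulation of evacuation behaviour would play the same logical role as this test: it turns a theory into predictions that can be confronted with new observations.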
Established behavioural facts are those thought to be explained by non-observable processes (e.g., “movement towards the familiar” implies that knowledge of space somehow affects route choice [18, 19]). Here, the researcher has to use indirect methods (e.g., questionnaires) to establish links between the observed behaviour (e.g., a person going to an exit) and the postulated psychological processes (e.g., the exit was chosen because it was familiar). The degree of trust in such explanations is linked to the validity of the methods.
Common misconceptions regarding HBiF (e.g., “panic”, “stampede”) are cases where observations may be correlated with behaviour but provide neither plausible explanations nor predictions of behaviour.
One key question is whether a proposed mental process falls into category two or three. Replicability of effects can be seen as an indicator of whether or not a postulated effect is real and is widely (and wildly) discussed in other fields. For instance, recent work found conflicting support for the “faster-is-slower” hypothesis [22, 23] and will certainly re-ignite the debate. Similarly, replication of well-established effects can support the validity of emerging research methods or question the established engineering use of HBiF principles.
HBiF is multidisciplinary and benefits massively from the various approaches each field brings to the table. Here, we explored existing mismatches in research vocabulary and in the classification of physical and behavioural phenomena across disciplines within HBiF. Future endeavours are needed to burn down even more silos and to develop standardized methodologies and quality criteria for unified research on HBiF.
- 5. Kinateder MT, Kuligowski ED, Reneke PA, Peacock RD (2015) Risk perception in fire evacuation behavior revisited: definitions, related concepts, and empirical evidence. Fire Sci Rev 4:1
- 8. McDermott RJ, Rein G (2016) Special issue on fire model validation. Fire Technol 52:1–4
- 10. Ronchi E, Kuligowski ED, Reneke PA, Peacock RD, Nilsson D (2013) The process of verification and validation of building fire evacuation models. US Department of Commerce, National Institute of Standards and Technology
- 12. Lin W-L, Yao G (2014) Predictive validity. In: Michalos AC (ed) Encyclopedia of quality of life and well-being research. Springer Netherlands, Dordrecht, pp 5020–5021
- 14. Shadish WR, Cook TD, Campbell DT (2002) Experimental and quasi-experimental designs for generalized causal inference
- 19. Song D, Park H, Bang C, Agnew R, Charter V (2019) Spatial familiarity and exit route selection in emergency egress. Fire Technol
- 25. Averill JD (2011) Five grand challenges in pedestrian and evacuation dynamics. In: Pedestrian and evacuation dynamics. Springer, pp 1–11