Letter to the Editor: Burning Down the Silos in a Multidisciplinary Field. Towards Unified Quality Criteria in Human Behaviour in Fire

  • M. Kinateder
  • E. Ronchi

Research on human behaviour in fire (HBiF) is rooted in fire protection engineering but is multidisciplinary in nature [1]. Conference and journal articles are often authored by experts from various fields such as fire protection engineering, architecture, evacuation modelling, human factors, psychology, traffic engineering, neuroscience, applied mathematics, computer science, sociology, and probably many more. This makes intuitive sense, given that HBiF research is situated at the intersection of the (built) environment, fire, and people. While the promises of such a diverse field are manifold, Kuligowski has warned about the inherent challenge of “silo-ing” [1]: while researchers may be productive within their own disciplines, they risk ignoring work from other disciplines as well as failing to communicate their work beyond their peers. A way forward could lie in the development of a common glossary, similar to recent attempts in related fields [2].

The lack of a common research vocabulary in HBiF poses a particular challenge for researchers, who often need to assess research quality outside of their own field. Empirical sciences, such as psychology, typically refer to the “objectivity, reliability, and validity” of a method [3], whereas engineering-oriented disciplines like evacuation modelling require “verification and validation” [4]. The former stems from the challenge that researchers in HBiF often need to measure variables that are not directly observable (e.g., perceived risk [5], cognitive biases [6], or some other psychological construct). These latent variables must be inferred from manifest observations (e.g., responses in a questionnaire, measurements taken in a lab, video recordings, etc.) [7]. The latter stems from the need to adequately define and execute testing procedures. Here, we discuss and compare these approaches, illustrate challenges, and suggest a way forward to burn down some of the silos in HBiF.

Table 1 provides a comparison of concepts in empirical and engineering sciences. In engineering sciences, verification refers to a correct implementation of data/models obtained through empirical sciences or theory, and validity is linked to how well a model matches the real world (also see the special issue on Fire Model Validation in Fire Technology [8]). In the context of equation-based models of human behaviour in fire, this is often colloquially referred to in engineering as “doing the maths right” (verification) versus “doing the right maths” (validation); concepts that can easily be translated into behavioural modelling.
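The “doing the maths right” half of this distinction can be illustrated with a toy verification check. The sketch below tests a deliberately simple flow-time calculation (a hypothetical helper, not the API of any particular evacuation model) against a hand computation; the specific flow value is illustrative. Passing this check verifies the implementation only; it says nothing about whether the chosen flow is right for a given doorway, which is the validation question.

```python
# Illustrative verification check: "doing the maths right".
# A toy flow-time calculation is compared against a hand-computed value.

def flow_time(n_people, specific_flow, effective_width):
    """Time (s) for n_people to pass an opening of effective_width (m)
    at specific_flow (persons/m/s). A deliberately simple toy model."""
    return n_people / (specific_flow * effective_width)

# Hand calculation: 100 people at 1.3 persons/m/s through a 1.0 m
# effective width -> 100 / 1.3 = ~76.9 s.
result = flow_time(100, 1.3, 1.0)
assert abs(result - 100 / 1.3) < 1e-9  # implementation matches hand maths
print(f"{result:.1f} s")
```

Whether 1.3 persons/m/s is the right value for a particular door, population, and scenario cannot be settled by this check; that requires comparison against real-world observations.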
Table 1

Quality Criteria of Research Methods in Empirical and Engineering Sciences

Empirical sciences:

Objectivity refers to the independence of observations from outside influences [3]. For example, an objective intelligence test must be independent from the individual administering the test. Further, such a test will return similar results regardless of how it is administered. For instance, the results of self-report questionnaires should not differ between administration methods (e.g., online vs. in-person interview).

Reliability refers to the precision and consistency of a method over time (test–retest reliability) and across observers (inter-rater reliability) [3]. For example, a questionnaire measuring a stable personality trait should return comparable results if administered several times in a row. Objectivity is a necessary but not sufficient criterion for reliability.

Validity refers to the degree to which a method measures what it is designed to measure and can be further differentiated into predictive, convergent, construct, internal, external, and ecological validity. Construct validity refers to how well observed data (e.g., scores in an intelligence test) describe a latent variable (e.g., intelligence) [11]. Predictive validity describes how well a test (e.g., a questionnaire on behavioural intentions in a hypothetical fire emergency) can predict future behaviour (e.g., actual observed behaviour in a fire emergency) [12]. Ecological validity evaluates how well research methods represent the real-world scenario being examined (e.g., how well do effects observed in the lab hold in the real world) [13]. This concept is similar to external validity, which describes the degree to which findings from one setting can be generalized to other situations or populations (e.g., can estimates of egress behaviour on ships be applied to buildings?) [14]. Validity requires both objectivity and reliability.

Evacuation modelling:

Verification determines whether the implementation of a (calculation) method accurately represents the corresponding conceptual description of the method and its solution [10]. For instance, when modelling occupant movement, this refers to the ability of a model to qualitatively produce results which reflect the current knowledge and understanding in HBiF.

Validation determines how well a model represents real-world phenomena [10] and refers to the truthfulness of a model. A valid model will be able to accurately predict the outcome of an evacuation.
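The empirical and engineering criteria in Table 1 can both be made concrete with a small numerical sketch. The example below, using entirely made-up data, computes test–retest reliability as a Pearson correlation (the empirical-sciences sense) and a simple relative error between predicted and observed evacuation times (one elementary ingredient of engineering validation); all numbers are illustrative assumptions, not established benchmarks.

```python
# Hypothetical illustration: reliability (empirical sciences) vs.
# validation (evacuation modelling). All data below are made up.
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Test-retest reliability: the same questionnaire administered twice
# to the same (hypothetical) participants.
scores_t1 = [12, 18, 9, 15, 20, 11, 14]
scores_t2 = [13, 17, 10, 14, 21, 12, 15]
reliability = pearson_r(scores_t1, scores_t2)

# Validation ingredient: predicted vs. observed total evacuation
# times (seconds) for three hypothetical trials.
predicted = [95.0, 120.0, 150.0]
observed = [100.0, 115.0, 160.0]
rel_errors = [abs(p - o) / o for p, o in zip(predicted, observed)]

print(f"test-retest reliability r = {reliability:.2f}")
print(f"mean relative error = {mean(rel_errors):.2%}")
```

Note that a high correlation between two administrations says nothing about whether the questionnaire measures the intended construct (validity), just as a small relative error on one data set does not by itself establish that a model generalizes to other scenarios.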

It is not surprising that the general workflows for generating evidence are comparable across disciplines: typically, a researcher would start out with some data or observations (e.g., “during egress, building occupants tend to be biased towards exits they previously used to enter the building”), then test falsifiable hypotheses (e.g., “participants in an experiment will be more likely to use a known than an unknown exit”) and formulate theories (e.g., “people are biased towards familiar exits”). Predictions derived from theories can then be fit into a model (e.g., a statistical test or a computer simulation of evacuation behaviour), and this model can then be used to predict observations in new data sets [9]. Alternatively, hypotheses could be postulated a priori and then tested with dedicated data collection efforts.
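The hypothesis-testing step of this workflow can be sketched with hypothetical data: the falsifiable hypothesis “participants favour the familiar exit more often than chance” tested with an exact one-sided binomial test. The participant counts and the 0.05 threshold are illustrative assumptions only, not results from any actual study.

```python
# Hypothetical sketch of the hypothesis-testing step: do participants
# choose the familiar exit more often than chance (50/50)?
from math import comb

def binomial_p_one_sided(k, n, p=0.5):
    """Exact one-sided p-value P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Made-up observation: 32 of 40 participants used the familiar exit.
n_participants, n_familiar = 40, 32
p_value = binomial_p_one_sided(n_familiar, n_participants)

if p_value < 0.05:  # conventional threshold, illustrative only
    print(f"p = {p_value:.5f}: reject chance; data support a familiarity bias")
else:
    print(f"p = {p_value:.5f}: no evidence against chance-level exit choice")
```

Rejecting the chance-level null supports the behavioural observation, but, as discussed below for emergent behaviours, it does not by itself establish the latent psychological process (familiarity) behind the choice.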

The challenge of emergent behaviours illustrates the overlap between the verification and validation concepts. Emergent behaviours describe phenomena at an aggregated level (e.g., a group or crowd of people) that arise from local interactions between individuals with each other and the environment [15]. Emergent behaviours develop over time and are thought to be, at least in part, driven by psychological and perceptual processes. Unfortunately, a leap of faith is often taken from the observed behaviours to the underlying latent processes. Roughly, emergent behaviours can be grouped into three categories:
  1. Observable phenomena that can be described in physical terms (e.g., lane formation, observed flow rates, crowd turbulence [16, 17]). Here, no leap of faith is required.

  2. Established behavioural facts which are thought to be explained by non-observable processes (e.g., “movement towards the familiar” implies that knowledge of space is somehow affecting route choice [18, 19]). Here, the researcher has to use indirect methods (e.g., questionnaires) to establish links between the observed behaviour (e.g., a person going to an exit) and the postulated psychological processes (e.g., the exit was chosen because it was familiar). The degree of trust is linked to the validity of the methods.

  3. Common misconceptions regarding HBiF (e.g., “panic”, “stampede”), where observations may be correlated to behaviour but neither provide plausible explanations nor predictions of behaviour [20].


One key question is whether a proposed mental process falls into category two or three. Replicability of effects can be seen as an indicator of whether or not a postulated effect is real and is widely (and wildly) discussed in other fields (e.g., [21]). For instance, recent work found conflicting support for the “faster-is-slower” hypothesis [22, 23] and will certainly re-ignite the debate. Similarly, replication of well-established effects can support the validity of emerging research methods [24] or question the established engineering use of human behaviour in fire principles [25].

HBiF is multidisciplinary and benefits massively from the various approaches each field brings to the table. Here, we explored the currently existing mismatches in research vocabulary and classification of physical and behavioural phenomena across disciplines within HBiF. Future endeavours are needed to burn down even more silos and develop standardized methodologies and quality criteria for a unified research on HBiF.



References

  1. Kuligowski E (2017) Burning down the silos: integrating new perspectives from the social sciences into human behavior in fire research. Fire Mater 41:389–411
  2. Adrian J, Bode N, Amos M, Baratchi M, Beermann M, Boltes M et al (2019) A glossary for research on human crowd dynamics. Collect Dyn 4:1–13
  3. Adams HF (1936) Validity, reliability, and objectivity. Psychol Monogr 47:329–350
  4. Ronchi E, Kuligowski ED, Nilsson D, Peacock RD, Reneke PA (2016) Assessing the verification and validation of building fire evacuation models. Fire Technol 52:197–219
  5. Kinateder MT, Kuligowski ED, Reneke PA, Peacock RD (2015) Risk perception in fire evacuation behavior revisited: definitions, related concepts, and empirical evidence. Fire Sci Rev 4:1
  6. Kinsey MJ, Gwynne SMV, Kuligowski ED, Kinateder M (2019) Cognitive biases within decision making during fire evacuations. Fire Technol 55:465–485
  7. Bollen KA (2002) Latent variables in psychology and the social sciences. Annu Rev Psychol 53:605–634
  8. McDermott RJ, Rein G (2016) Special issue on fire model validation. Fire Technol 52:1–4
  9. Bode N, Ronchi E (2019) Statistical model fitting and model selection in pedestrian dynamics research. Collect Dyn 4:1–32
  10. Ronchi E, Kuligowski ED, Reneke PA, Peacock RD, Nilsson D (2013) The process of verification and validation of building fire evacuation models. US Department of Commerce, National Institute of Standards and Technology
  11. Cronbach LJ, Meehl PE (1955) Construct validity in psychological tests. Psychol Bull 52:281
  12. Lin W-L, Yao G (2014) Predictive validity. In: Michalos AC (ed) Encyclopedia of quality of life and well-being research. Springer, Dordrecht, pp 5020–5021
  13. Anderson CA, Bushman BJ (1997) External validity of “trivial” experiments: the case of laboratory aggression. Rev Gen Psychol 1:19–41
  14. Shadish WR, Cook TD, Campbell DT (2002) Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin, Boston
  15. Warren WH (2018) Collective motion in human crowds. Curr Dir Psychol Sci 27:232–240
  16. Sieben A, Schumann J, Seyfried A (2017) Collective phenomena in crowds—where pedestrian dynamics need social psychology. PLoS One 12:e0177328
  17. Hagwood C, Reneke PA, Peacock RD, Kuligowski ED (2019) Incorporating human interaction into stair egress with an application to minimum stair width. Fire Technol 55:547–567
  18. Sime JD (1985) Movement toward the familiar: person and place affiliation in a fire entrapment setting. Environ Behav 17:697–724
  19. Song D, Park H, Bang C, Agnew R, Charter V (2019) Spatial familiarity and exit route selection in emergency egress. Fire Technol
  20. Fahy RF, Proulx G, Aiman L (2012) Panic or not in fire: clarifying the misconception. Fire Mater 36:328–338
  21. Schimmack U (2018) The replicability revolution. Behav Brain Sci 41:e147
  22. Garcimartín A, Zuriguel I, Pastor J, Martín-Gómez C, Parisi D (2014) Experimental evidence of the “Faster Is Slower” effect. Transp Res Proc 2:760–767
  23. Shahhoseini Z, Sarvi M, Saberi M (2018) Pedestrian crowd dynamics in merging sections: revisiting the “faster-is-slower” phenomenon. Phys A Stat Mech Appl 491:101–111
  24. Kinateder M, Warren WH (2016) Social influence on evacuation behavior in real and virtual environments. Front Robot AI 3:43
  25. Averill JD (2011) Five grand challenges in pedestrian and evacuation dynamics. In: Pedestrian and evacuation dynamics. Springer, pp 1–11

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. National Research Council Canada, Ottawa, Canada
  2. Lund University, Lund, Sweden
