As the saying goes, all models are wrong; some models are useful. In fire research, we speak of three types of models (in order of complexity): empirical correlations; low- to medium-fidelity models (e.g., zone models for compartment fires); and, more recently, high-fidelity models based on approximate solutions to partial differential equations (e.g., computational fluid dynamics) that require substantial computing resources. This classification of complexity applies equally well to modeling compartment fires, pyrolysis, forest fires, fire-structure interaction, and evacuation.

For these approaches to be useful, we must have confidence in their results. We acquire this confidence through the processes of code verification and model validation. Verification is the process of confirming that the mathematics has been coded properly [1]; it is a prerequisite for validation. Validation is the process of determining the accuracy and uncertainty of the physical model. The purpose of this Special Issue is to highlight recent exemplary work in the area of model validation across a broad range of topics of interest to the fire community.

In the past, fire model validation was fairly straightforward. Correlations based on many datasets had a reasonably well-defined level of uncertainty. Furthermore, the valid parameter range was automatically built into the model: it is not advisable to extrapolate beyond the limits of the data used to build the correlation. The advent of modern high-fidelity computer models changes the exercise of model validation substantially. These models blur the line between what is useful and what is simply colorful because they have the potential to generate vast amounts of numerical data. The process of reducing these data and interpreting the model results is not always simple.

Given the enormous challenge fire model validation presents, we should expect a wealth of otherwise unobtainable information in return. But in the end, can we expect more of the models than the data we used to validate them? This is a nagging question—one without a simple answer.

One argument for pushing forward with model development is to codify our understanding of the basic subprocesses (e.g., pyrolysis, buoyancy, radiation) in a given fire application. Starting from fundamental laws (e.g., conservation of mass, momentum, and energy for a thermofluids model), we can construct a mathematically consistent relationship between the physical mechanisms that govern the evolution of the fire. For the model to have any chance of extrapolating beyond the range of its validation data set, we must take care to validate each component of the model independently [1]. This is the base level of validation, and its main purpose is to prevent tuning several parameters at once to calibrate the model for a complex application. The next level of validation attempts to combine a minimal number of physical phenomena. Here we assess the model’s multi-physics capability. Submodel development may be required at this stage. This development cycle continues up to the pinnacle of the validation pyramid: the end use of the model. If we follow this hierarchical validation process, and if we achieve an acceptable level of agreement between model and experiment, then we gain confidence in our understanding of the components of the problem.

At this point, some statement must be made about the level of confidence in the model to answer a specific question. There are two lines of thought here. One is to define and calculate a validation metric based on the uncertainty in the experimental data combined with the propagated uncertainty from the model’s input parameters [2]. The other relies on a large number of model runs compared against a large sample of experimental data; the internal machinery of the model is ignored, and the errors are presumed to be normally distributed [3]. The mean bias between the model-predicted and experimentally measured results and the scatter in the predicted values provide, respectively, the accuracy and uncertainty of the model for a given application. As an example, if the goal is to determine the mean upper-layer temperature in a compartment fire, the model bias quantifies the expected over- or under-prediction of the temperature, and the uncertainty quantifies whether that bias is significant.
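To make the second approach concrete, the following sketch (a minimal illustration with made-up numbers, not the exact metric of Ref. [3]) estimates a bias factor and relative scatter from paired model predictions and measurements:

```python
import numpy as np

# Hypothetical paired results for one output quantity, e.g., the
# upper-layer temperature rise (K) in a series of compartment fire tests.
measured  = np.array([300.0, 260.0, 450.0, 340.0, 280.0])   # experiment
predicted = np.array([310.0, 275.0, 420.0, 365.0, 290.0])   # model

# Treat the prediction/measurement ratio as approximately normally
# distributed about a constant bias factor.
ratio   = predicted / measured
bias    = ratio.mean()                  # expected over-/under-prediction
scatter = ratio.std(ddof=1) / bias      # relative standard deviation

print(f"bias factor = {bias:.2f}, relative scatter = {scatter:.2f}")
```

A bias factor near 1, with a scatter comparable to the experimental uncertainty, would indicate that any over- or under-prediction is not significant for that quantity.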

Another argument for the development of high-fidelity models is to fill the voids in application space. Application space is the set of potential applications we might encounter in fire safety engineering, fire research, or forensic analysis. This space is infinite, and we cannot come remotely close to covering it with experimental results. What we can do is validate our models for a certain class of problems in a local region of application space. Extrapolation beyond this region comes with risks.

Mitigating risk is often the goal of a regulatory authority. Currently, these authorities are uncomfortable if the model application falls outside the parameter space spanned by the validation experiments. The ultimate goal, of course, is to develop trust in the model beyond its validation space. This may only happen when a substantial number of specified calculations [4] have been successfully validated without the need for model tuning. In their article, “Validation of Fire Models Applied to Nuclear Power Plant Safety,” McGrattan et al. [5] discuss how model uncertainty can be quantified to a level where its informed use can improve public safety.

The inherent predictability of complex models is an interesting question in its own right. In reality, the time evolution of a typical fire scenario is sensitive to initial and boundary conditions and material properties. Computer models may also exhibit this sensitive behavior. As we move toward validation of predictive models, the question of how “open” the test is becomes a concern [4]. For example, what information, if any, do the modelers get ahead of time? A framework for thinking about these questions is proposed by Spearpoint and Baker in their article, “Ranking the level of openness in blind compartment fire modeling studies” [6].

This leads directly to a discussion of the importance of initial and boundary conditions in these complex models. Even when the fire source is specified, as is the case in many design fire scenarios, the compartment ventilation controls the heat release rate and thus has a zeroth-order effect on the model results. An example of model validation with careful consideration of compartment ventilation is given by Ayala et al. in their article, “Fire experiments and simulations in a full-scale atrium under transient and asymmetric venting conditions” [7].

Stepping beyond the specified fire source to prediction of the fuel mass loss rate is a quantum leap in fire modeling. The practical submodels employed for solid-phase thermal degradation all use “effective” material properties and reduced kinetic rate parameters that are specific to the chosen model. This is a critical area of fire research, and we are fortunate to have several papers in our Special Issue on the topic of pyrolysis. These papers include an article by Stoliarov entitled, “Parameterization and Validation of Pyrolysis Models for Polymeric Materials” [8], an article by Bruns entitled, “Inferring and Propagating Kinetic Parameter Uncertainty for Condensed Phase Burning Models” [9], and finally an article by Scott et al. entitled, “Validation of Heat Transfer, Thermal Decomposition, and Container Pressurization of Polyurethane Foam using Mean Value and Latin Hypercube Sampling Approaches” [10].

Within the category of subgrid-scale model validation we have two excellent papers. The first is by Overholt et al. entitled, “Computational Modeling and Validation of Aerosol Deposition in Ventilation Ducts” [11]. The second is an interesting example of using a high-fidelity model called ‘One-Dimensional Turbulence’, or ODT, to understand detailed turbulence-chemistry interactions. The techniques described by Monson et al. in their paper, “Simulation of ethylene wall fires using the spatially-evolving One-Dimensional Turbulence model” [12], show promise for guiding future development of practical models for complex problems such as soot formation and radiation, and flame-wall interactions.

Finally, we are pleased to have several special topics represented. Ronchi et al. [13] discuss the critical subject of building egress in their paper, “Assessing the Verification and Validation of Building Fire Evacuation Models.” Hoffman et al. [14] give an assessment of the state of the art for modeling the spread of wildfires in “Evaluating crown fire rate of spread predictions from physics-based models.” And Zhang et al. [15] present a “Simulation Methodology for Coupled Fire-Structure Analysis: Modeling localized fire tests on a steel column.” What these advanced special topics have in common is that they are complex issues with few well-characterized experiments. Further, they tend to involve coupling between codes with different specialties. For instance, egress usually involves coupling a human behavior model with a fire model to predict smoke density and thus visibility. Wildfires depend on both fire and atmospheric physics. And the latest developments in fire-structure interaction consider two-way coupling between the fire and structural codes with time-dependent boundary conditions.

This collection of papers demonstrates the wide range of topics important to fire science and fire safety engineering. We hope this Special Issue is but a starting point for making Fire Technology an authoritative peer-reviewed repository for fire model validation.

Thanks to our contributors for their hard work in making complex models useful.