1 Introduction

Atmospheric dispersion models generally prescribe the mean concentration field, since the mean concentration is the key parameter for evaluating air quality for regulatory purposes. However, in a wide range of cases, such as the dispersion of toxic, flammable, or chemically reacting gases, evaluating the mean concentration field may not be sufficient, and knowledge of the concentration variance is needed. Many Lagrangian models have been developed over the last 10 years for the second- and higher-order moments of the concentration PDF.

The study of the concentration probability density function (PDF) and its moments is central to Lagrangian dispersion modelling. Since a PDF can be evaluated through its moments, it is important to verify the relationship between the higher-order moments and their implication for the shape of the PDF. For practical purposes, the concentration PDF is often assumed to be Gaussian and thus completely defined by its mean and variance. In state-of-the-art Eulerian models, the choice of a Gaussian PDF corresponds to the simple down-gradient approximation for the evaluation of turbulent fluxes (Pope 2000). Thus, turbulence is assumed to be local, and turbulent fluxes at a given height are evaluated through the local gradient of the mean field at the same height, neglecting the contribution of third-order moments. This assumption is justified only for very simple flows, when the turbulent mixing length is much smaller than the length scale of the mean-flow heterogeneity, and is not appropriate for convective flows, as it does not properly account for more complex mixing processes (Moeng and Wyngaard 1984; Holtslag and Boville 1993; Canuto 1992; Colonna et al. 2009). Under convective conditions, turbulence is non-local, and its structure in the vertical direction is far from Gaussian, leading to a more complex concentration PDF. The non-local properties of the flow are related to the asymmetry of the velocity field, and therefore Mole and Clarke (1995) suggested that, to adequately simulate the concentration PDF of a dispersing plume, the third-order moment of the concentration field is required. However, estimating higher-order moments is still a challenging task, and a literature review (Yee and Chan 1997) reveals that there is no agreement on their scaling properties. According to Mole and Clarke (1995), the higher-order moments collapse on a single curve when plotted against the asymmetry, but show no dependence on the normalized second-order moment. Schopflocher et al. (2005) showed that concentration values measured at isolated points depend only on the asymmetry. In contrast, more recent atmospheric analyses performed on a large concentration-fluctuation dataset (Yee and Chan 1997; Yee 2008, 2009) showed that the higher-order moments of the concentration field depend on the normalized second-order moment of concentration. These discrepancies are important, since the different scalings of the higher-order moments have consequences for the parameterization of the concentration PDF in advanced dispersion models. In previous works (Oettl and Ferrero 2017; Ferrero and Oettl 2019), we used only the first two moments of the concentration PDF as input to a Weibull PDF to calculate the 90th percentile.

Despite its central role in olfactory research, in the chemistry of naturally emitted volatile organic compounds, and in toxic, flammable, or explosive emissions, the prediction of higher-order moments of concentration in stochastic Lagrangian modelling is still an open problem. Thomson (1990) proposed the two-particle model, which, however, is able to simulate concentration fields only in idealized atmospheric conditions and has some limitations in the real atmosphere (see also Mortarini and Ferrero 2005). The fluctuating plume model (Luhar et al. 2000; Cassiani and Giostra 2002; Franzese 2003; Gailis et al. 2007; Mortarini et al. 2009) was originally introduced by Gifford (1959), while Yee et al. (1994) introduced the concept of internal-fluctuation parametrization using a pre-defined PDF formulation. The further extension to a Lagrangian meandering plume model is due to Luhar et al. (2000) and Franzese (2003). Mortarini et al. (2009) extended the approach suggested by Franzese (2003) by applying the fluctuating plume model to the turbulent flow generated in a simulated canopy, while Cassiani and Giostra (2002) proposed a similar model based on a linear compression of the mean concentration PDF, which enables rapid evaluation of higher-order concentration moments (Bisignano et al. 2014). Recent developments can be found in Marro et al. (2015, 2018). Fluctuating plume models provide a good approximation close to the source, but they fail at larger distances. Cassiani (2013) and Kaplan (2014) developed single-particle Lagrangian models that can simulate the concentration variance by assigning a time-dependent quantity to each particle. Cassiani (2013) extended the model to reactive chemical species by using the concept of “concentration deficit” proposed by Alessandrini and Ferrero (2009). The PDF approach to turbulent flows was originally introduced by Lundgren (1967). A formal formulation for the concentration field is due to Dopazo and O'Brien (1974), and further developments were proposed by Cassiani et al. (2005), Cassiani et al. (2007), and Yee (2009).

Manor (2014) used a Lagrangian particle model to simulate the concentration variance dispersion and Ferrero et al. (2017) suggested a formulation for the variance dissipation time-scale to be used in the same model. This parameterization has been used by Oettl and Ferrero (2017) and Ferrero and Oettl (2019) to simulate odour dispersion in field campaigns (for an extensive review, see for instance Ferrero et al. 2020 and Cassiani et al. 2020).

The motivations for developing the present model are the following. First, we aim to provide a model that is able to prescribe the 90th percentile of the odour concentration distribution. To do so, we need to provide the concentration fluctuation intensity on which the distribution depends. An extensive review of the distributions used in the literature to fit concentration distributions from point sources in field and laboratory experiments can be found in Cassiani et al. (2020). In the case of odour annoyance estimation, models use a post-processing module to calculate the 90th percentile through the peak-to-mean method. Modelling odour hours requires the determination of the 90th percentile of the corresponding cumulative frequency distribution. It can be determined by assuming a probability density function (PDF) that is generally completely defined by the concentration fluctuation intensity \(\left(i=\frac{{\sigma }_{c}}{\overline{c}}\right)\) and the mean concentration. Obtaining an estimation of i requires a more sophisticated approach able to account for the second-order moment of the concentration PDF. A second motivation is to provide a tool which allows a fast response in the case, for example, of an emergency due to toxic or flammable accidental releases, even though it must be remembered that the lead time also depends on the computational time of the model. To this aim, the model we consider does not need any post-processing, since the calculations of the mean and concentration variance fields are performed at the same time, differently from the previous models based on the Manor (2014) approach (Oettl and Ferrero 2017; Ferrero and Oettl 2019).
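As an illustration of this post-processing step, the sketch below recovers the 90th percentile from the mean concentration and the fluctuation intensity alone, under the assumption of a Weibull concentration PDF as adopted in Oettl and Ferrero (2017) and Ferrero and Oettl (2019); the function name and numerical values are ours and purely illustrative.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

def weibull_p90(c_mean, i):
    """90th percentile of a Weibull concentration PDF defined only by the mean
    concentration c_mean and the fluctuation intensity i = sigma_c / c_mean."""
    # Shape parameter k such that the Weibull relative variance matches i**2:
    # i**2 = Gamma(1 + 2/k) / Gamma(1 + 1/k)**2 - 1
    k = brentq(lambda k: gamma(1 + 2 / k) / gamma(1 + 1 / k) ** 2 - 1 - i ** 2,
               0.1, 50.0)
    lam = c_mean / gamma(1 + 1 / k)           # scale parameter from the mean
    return lam * (-np.log(0.1)) ** (1 / k)    # inverse of F(x) = 1 - exp(-(x/lam)**k) at 0.9

# Example: mean concentration 1.0 (arbitrary units), fluctuation intensity 1.5
print(weibull_p90(1.0, 1.5))
```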

The model proposed here is able to give an evaluation of the 90th percentile, thanks to the estimation of the fluctuation intensity. However, it is well known that the first two moments are not equivalent to the whole distribution. The knowledge of the first two moments is sufficient to estimate the 90th percentile only under the assumption of an underlying PDF, as done in Oettl and Ferrero (2017) and Ferrero and Oettl (2019); otherwise, higher moments are required. Without additional hypotheses on the distribution shape, SPRAYWEB does not allow the estimation of the 90th percentile. In the present paper, we show only the results obtained for the first two moments, without any hypothesis on the PDF, so that the evaluation of the moments does not depend on this choice.

Furthermore, this is obtained at run-time, and not using a post-processing code as generally done in most models. The novelty of this work is that we use the 3D operational model for real-time forecasting SPRAYWEB (Tinarelli et al. 1994; Alessandrini and Ferrero 2009; Bisignano et al. 2017; Tomasi et al. 2019) instead of simplified models that can be used only to assess the methodology, but not in real applications and in particular in complex situations (see for instance Ferrero and Maccarini 2021). Furthermore, SPRAYWEB allows introducing the scheme for the variance dispersion calculation that we propose. It is worth mentioning that, as a first step, the main objective of the paper is to present the model and to give a preliminary evaluation. Even though the results are in many cases not satisfactory, we suggest that they could be improved. As a matter of fact, the model and the conditions of its evaluation are designed for operational use. For this reason, we use the meteorological model with kilometre-scale resolution coupled with SPRAYWEB, without trying to improve its outputs using observations or performing sensitivity tests. In Sect. “The SPRAYWEB model”, the dispersion model and the new scheme for the concentration variance calculation are described. In Sect. “The field experiment FFT07”, the field experiment is introduced, while in Sect. “Meteorological and dispersion simulations and results”, the dispersion simulations and the analysis of the results are presented and discussed. Section “Conclusions” is dedicated to the conclusions.

2 The SPRAYWEB model

The SPRAYWEB model is a 3D purely Lagrangian stochastic particle model (Tinarelli et al. 1994; Alessandrini and Ferrero 2009; Bisignano et al. 2017; Tomasi et al. 2019) designed to take into account the spatial and temporal variability of both the meteorological mean flow and turbulence. The model can simulate time-varying emissions from point, area, and line sources. SPRAYWEB is particularly suitable for applications over complex terrain, where the meteorological fields are characterized by local phenomena, which introduce great spatial (and temporal) inhomogeneity. Indeed, the model simulates the emitted plume with a great number of virtual particles, each characterized by a (small) pollutant mass, which passively follow the turbulent motion of the input meteorological field. The mean trajectory of each particle is driven by the local mean wind field (given as input to the model), while its dispersion is determined by turbulent velocities obtained by solving the Langevin stochastic differential equations (Thomson 1987), using the statistical characteristics of the atmospheric turbulence.

In one dimension, the Langevin equation for turbulent velocities can be written as follows:

$$du\left(t\right)=a\left(u,x\right)dt+b\left(u,x\right)dW$$
(1)

where \(dW\) is an incremental Wiener process with:

$$\langle dW\left(t\right)\rangle =0$$
(2)
$$\langle dW\left(t\right)dW\left({t}^{\prime}\right)\rangle =\delta \left(t-{t}^{\prime}\right)\,dt\,d{t}^{\prime}$$
(3)

(\(\delta\) being the Dirac delta function), and it is coupled to the equation for the displacement:

$$dx(t)=u(t)dt$$
(4)

The coefficients \(a(u,x)\) and \(b\left(u,x\right)\) can be calculated by solving the Fokker–Planck equation for the PDF of the atmospheric wind velocity (Gardiner 1990) and from the Kolmogorov similarity theory, respectively.
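For illustration, the following is a minimal sketch of a single Euler step of Eqs. (1)–(4), assuming the textbook special case of stationary, homogeneous Gaussian turbulence (not the general inhomogeneous form solved in SPRAYWEB); all names and parameter values are ours.

```python
import numpy as np

def langevin_step(u, x, dt, sigma_u, T_L, rng):
    """One Euler step of Eqs. (1) and (4) for stationary, homogeneous Gaussian
    turbulence, where solving the Fokker-Planck equation gives a = -u / T_L and
    Kolmogorov similarity gives b = sqrt(2 * sigma_u**2 / T_L)."""
    a = -u / T_L
    b = np.sqrt(2.0 * sigma_u**2 / T_L)
    dW = rng.normal(0.0, np.sqrt(dt), size=np.shape(u))  # Wiener increment, Eqs. (2)-(3)
    u_new = u + a * dt + b * dW                          # Eq. (1)
    x_new = x + u_new * dt                               # Eq. (4)
    return u_new, x_new

# Example: track 10,000 particles released at x = 0 for 600 one-second steps
rng = np.random.default_rng(0)
sigma_u, T_L, dt = 0.5, 100.0, 1.0            # illustrative turbulence parameters
u = rng.normal(0.0, sigma_u, 10_000)          # initial velocities drawn from the velocity PDF
x = np.zeros(10_000)
for _ in range(600):
    u, x = langevin_step(u, x, dt, sigma_u, T_L, rng)
```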

With this approach, the trajectory of each particle is independent of the others, different parts of the same plume can experience different atmospheric conditions, and the effect of inhomogeneous meteorological fields on the dispersion is accounted for. The SPRAYWEB model interpolates the input wind flow at the position of each particle to calculate its mean velocity and direction: only concentration results are returned on a user-specified grid, as they are calculated by averaging over time the mass of all the particles in a given volume (the volume of the fixed cell). The SPRAYWEB model must be fed with a meteorological mean flow from external sources, while it internally calculates the turbulent velocities. A WRF-SPRAYWEB Interface (Bisignano et al. 2017; WSI, hereafter) interpolates the mean fields provided by WRF on a numerical grid congruent with the one expected as input by SPRAYWEB, and calculates the standard deviations of the wind flow, \(\sigma_i\), and the Lagrangian time scales \(T_{L_i}\) \((i = u, v, w)\), which are needed to define the Langevin equation for the dispersion in SPRAYWEB. A module for simulating the dispersion of the concentration variance has been added to the model which simulates the average concentrations. The scheme is similar to that proposed by Manor (2014) and subsequently by Ferrero et al. (2017), Oettl and Ferrero (2017), and Ferrero and Oettl (2019). In the Manor (2014) model, the average field, from which the sources for the variance are derived, and the concentration variance are simulated in sequence with the same stochastic Lagrangian model. On the contrary, in the model proposed here, the two simulations are performed simultaneously. At each time step, the particles carrying the mass of pollutant are moved by the stochastic equation and, at the same time, the sources of variance are calculated from the average field at that moment and assigned to the particles in the cell. At the same time, the variance is also dissipated according to an exponential decay. The decay time scale \(\tau\) is calculated following the scheme proposed by Manor (2014) and modified by Ferrero et al. (2017):

$$\tau =A\left(r\right)\,{T}_{{L}_{w}}\left(z\right)$$
(5)

where the coefficient A depends on the travel distance from the source r and on the characteristics (height \({H}_{s}\) and size \({d}_{s}\)) of the source:

$$A(r)={\alpha }_{1}\frac{r}{{H}_{mix}}+{\alpha }_{2}{\left(\frac{{d}_{s}}{{H}_{s}}\right)}^\frac{1}{3}$$
(6)

where \({\alpha }_{1}\) and \({\alpha }_{2}\) are empirical constants, and \({H}_{mix}\) is the boundary layer height provided by the WRF simulation. At the end of the simulation, the calculated concentration variance is divided by the number of time steps in which the numerical scheme was activated during the simulation, in this case every model time step. We divide the calculated concentration variance because the source computation is repeated, in this case, at each time step and not only once, at the end of the simulation, as in Manor (2014). The application of this scheme is likely to introduce a statistical error, which however can be reduced using a large number of particles.
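A minimal sketch of this variance bookkeeping is given below (the names are ours; \(T_{L_w}\) would in practice come from the WSI turbulence parameterization, and the calculation of the variance sources from the mean field is omitted here).

```python
import numpy as np

def decay_time_scale(r, T_Lw, H_mix, d_s, H_s, alpha1, alpha2):
    """Concentration-variance decay time scale, Eqs. (5)-(6):
    tau = A(r) * T_Lw, with A(r) = alpha1 * r / H_mix + alpha2 * (d_s / H_s)**(1/3)."""
    A = alpha1 * r / H_mix + alpha2 * (d_s / H_s) ** (1.0 / 3.0)   # Eq. (6)
    return A * T_Lw                                                # Eq. (5)

def dissipate_variance(var_particle, dt, tau):
    """Exponential decay of the variance carried by a particle over one time step."""
    return var_particle * np.exp(-dt / tau)

# At the end of the run, the accumulated variance field is divided by the number
# of time steps in which the source computation was activated (here, all of them):
# var_field /= n_steps
```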

The aim of this work is to show the ability of the model, even if with large uncertainty, to simulate at the same time the mean concentration and concentration variance dispersion. The first coefficient used to parametrize the decay time scale was estimated in a previous work (Ferrero and Oettl 2019) for neutral conditions and is used in this work for both neutral and stable conditions, while in unstable conditions we reduced it by an order of magnitude. It is likely that the stronger the turbulence intensity, the faster the variance is dissipated, and hence the decay time scale is expected to be larger in stable and neutral conditions than in unstable conditions. That is the reason why the first coefficient is larger in the former cases and lower in the latter. The second coefficient is the same as in the previous works. For an in-depth analysis of the dissipation time scale of concentration fluctuations, see also Cassiani et al. (2020), while Ferrero and Oettl (2019) provide a useful empirical formulation.

3 The field experiment FFT07

The experiment used for comparison, conducted in September 2007 at the Dugway Proving Ground in Utah (USA) (Storwald 2007; Platt et al. 2008), consists of a series of experimental emission tests and is called the “FUsing Sensor Information from Observing Networks (FUSION) Field Trial-2007” (in short, FFT-07). This short-range experiment (around 500 m) was designed with the aim of comparing prototypes and algorithms for estimating source terms (Source Term Estimation, STE). It also aims to use the information collected to highlight strengths and weaknesses regarding the choice of parameterizations (Singh and Sharan 2013). In the present paper, we consider in particular seven tests, all with continuous emission and a single source.

Table 1 shows the start time, the Monin–Obukhov length L, the wind speed and direction at 2 m, and the boundary layer height (HMIX) for the seven trials. The duration of the measurements is about 10 min in all the trials. As can be seen, there is one unstable case (Trial45), two neutral cases with L values of opposite sign, and four cases with increasing stability (Trial07, Trial30, Trial15, and Trial22). The experiment therefore allows the model to be tested under a variety of stability conditions.

Table 1 Monin–Obukhov length L, wind speed and direction at 2 m, and boundary layer height HMIX for the seven trials

The source is located 2 m above the ground, and the emission diameter is 30 mm. The concentration measurements last about 10 min, and the comparisons are made over about 10 min of release; in this way, the atmospheric stability can be considered constant for the duration of the simulation. In Fig. 1, the experimental set-up is reproduced. There are 100 probes distributed on a regular grid, with a distance of 50 m between the probes. The experiments were carried out in flat, uniform, and homogeneous terrain conditions, so as to minimize the effects of mechanical turbulence near the surface, and under different stability conditions, from stable to unstable. In addition to the mean field, the concentration variance field is available. Furthermore, meteorological measurements were available for comparison with the WRF results. The experiment was chosen since it provides a very detailed description of both the meteorology and the dispersion. The large number of probes placed in a square matrix (Fig. 1) should allow a proper comparison of model results and observations in different meteorological conditions, whereas in the dataset used in our previous paper (Ferrero and Oettl 2019), the atmospheric stability was neutral in all the cases.

Fig. 1
figure 1

Top view of the probe locations. There are 100 probes, spaced 50 m apart from each other

4 Meteorological and dispersion simulations and results

4.1 Meteorological simulation

Meteorological simulations, which provide the input for SPRAYWEB, are carried out using WRF (Skamarock et al. 2008). WRF was run using a downscaling technique similar to that described in Alessandrini et al. (2017). The Climate Forecast System Reanalysis (CFSR) is used for the initial and boundary conditions of the WRF simulations. The CFSR global atmosphere resolution is ~ 38 km. WRF is used to downscale from there, with 27, 9, 3.3, and 1.1-km grid spacing in the nests and 67 × 67 grid points for each nest, which follows the standard 3:1 ratio for nesting reductions in grid spacing. There are 38 vertical levels in the atmosphere with a model top at 50 hPa. The resolution of WRF (1.1 km) is not comparable to the size of the experimental domain, and this might cause some uncertainty in the results. It is worth noticing that this is the resolution usually adopted for this kind of model, even if a higher resolution could certainly improve the results. Finer resolutions cannot be adopted for theoretical considerations and do not improve the model performance, particularly in flat terrain (Ferrero et al. 2018). Also, the model cannot be driven by single measurements, since it is a 3D model which needs 3D fields of the meteorological variables as input. The use of the nesting technique, starting from a large domain, is usual also when the interest lies in the meteorological situation over smaller domains, since the largest grids need to reproduce the large-scale circulation on which the small scale depends. NCEP Automatic Data Processing historical data are used with four-dimensional data assimilation (FDDA) to nudge model analyses toward observations (which include radiosondes, surface stations, aircraft, and satellite wind estimates). WRF has been run successfully for simulating smaller-scale phenomena with multiple nests, including with 5 or more nests. The WRF runs last from 13/09/2007 at 00:00:00 to 17/09/2007 at 00:00:00. As far as the configuration of the physical parameterizations is concerned, we use: for the microphysics, the WSM 6-class graupel scheme, a scheme with ice, snow, and graupel processes suitable for high-resolution simulations; for the long-wave radiation, the RRTM (rapid radiative transfer model) scheme, an accurate scheme using look-up tables for efficiency and accounting for multiple bands, trace gases, and microphysics species; for the short-wave radiation, the Dudhia scheme, a simple downward integration allowing for efficient cloud and clear-sky absorption and scattering; for the surface layer, the Monin–Obukhov similarity scheme, based on Monin–Obukhov theory with the Carlson–Boland viscous sub-layer and standard similarity functions from look-up tables; for the land surface, the Noah Land-Surface Model, a unified NCEP/NCAR/AFWA scheme with soil temperature and moisture in four layers, fractional snow cover, and frozen soil physics; for the boundary layer, the YSU scheme; for the cumulus parameterization, the Kain–Fritsch (new Eta) scheme, a deep and shallow sub-grid scheme using a mass-flux approach with downdrafts and CAPE removal time scale.

In Fig. 2, the comparison between WRF simulation results and observations at 2-m height is presented. In the scatter plots, the performance of the model for temperature, wind direction, and wind speed is shown. It can be noted that the agreement for the temperature is generally good, even if in some cases there is a discrepancy of a few degrees. Also, the results for wind direction can be considered satisfactory for most of the experiments, but totally unsatisfactory in the case of Trial15. Finally, the results of the comparison for wind speed clearly show the inability of WRF, in many cases, to reproduce the low-wind speed conditions. In fact, the observed values are always less than 3 ms−1, while the simulated values attain up to about 5 ms−1.

Fig. 2
figure 2

Scatter plots between WRF simulation results and observed data for temperature, wind direction, and wind speed

4.2 Dispersion simulations

Once the meteorological fields (mean wind velocity components and temperature) have been obtained, the turbulent parameterizations for the wind-component standard deviations and Lagrangian time scales, needed for the dispersion simulations, are provided through the WSI (Bisignano et al. 2017), as described in the previous section. For the simulations presented in this paper, we selected the Hanna (1982) parameterization. The computational domain used for the SPRAYWEB simulations is 72 × 72 km2, corresponding to the inner grid of the WRF simulation. Once the grid is defined, we can limit the calculation of the concentration to a smaller domain; however, the initial grid dimension should be the same as the inner grid of the meteorological model. The cells used to calculate the mean concentration and concentration variance, into which the domain is divided, have size ∆x = ∆y = 100 m in the horizontal direction, while ∆z = 15 m is the first layer depth. This resolution is the same for all the stability conditions. The grid spacings were chosen on the basis of the distance between the concentration probes (50 m) and the source height (2 m). The resolution of the dispersion model (100 m) might be considered sufficient for a probe distance of 50 m. However, in the model it is possible to define a different grid to calculate the mean concentration and its gradient at run-time. A tradeoff is, however, necessary in order to limit the computational time. We also tried a finer resolution (20 m), but the results did not improve. It is worth noticing that a cell 100-m wide centred at the probe location is representative of the space included between neighbouring detectors 50 m away from it. The horizontal grid size for the run-time computation was 20 m. The time interval that regulates the evolution of the trajectories of the particles is ∆t = 1 s. For the calculation of the concentration variance, we adopted different values for the coefficient \({\alpha }_{1}\) of Eq. (6) in the cases with different stability conditions, while the coefficient \({\alpha }_{2}\) was kept equal to 1.25 in all the simulations. After a series of tests, we found that the best value for the unstable case was \({\alpha }_{1}\) = 7.33, while for the neutral and stable cases it was \({\alpha }_{1}\) = 73.3. In these tests, we varied this coefficient, starting from the values used in our previous work (Ferrero and Oettl 2019), up to two orders of magnitude more. In that paper, in order to determine the values of the two constants, and in particular of α1, we considered the value proposed by Manor (2014) as an asymptotic value for the ratio \(t_d/T_{L_s}\) in the limit of long times (i.e., as the maximum value) and obtained α1 = 73.3. It can be noted that this value is different from that (α1 = 1.3) used in Ferrero et al. (2017), even though the Fackrell and Robins (1982) as well as the Uttenweiler experiments took place in neutral boundary layer conditions. However, the Fackrell and Robins (1982) dataset was collected in a wind tunnel, while the Uttenweiler experiments took place in real-world conditions. This difference affects α1 more than α2, whose value is important only for small times, when the turbulence scale is small. On the contrary, α1 influences the dissipation time scale for longer times, when larger eddies are responsible for variance dissipation. As for α2, the value proposed by Ferrero et al. (2017) was used, which implies \(\frac{\tau }{{T}_{Lw}}=0.7\) at t = 0, therefore of the order of the Lagrangian time scale close to the source, or even smaller.
The largest eddies are responsible for the concentration variance dissipation, while the concentration variance itself is mainly generated by smaller eddies. Sykes et al. (1984) observed that the variance dissipation is controlled by eddies with length scales of the order of the plume size and that the production of the concentration variance is important only for times less than the source-scale to turbulence-scale ratio. We then performed the statistical analysis and evaluated the best performance. It is likely that the stronger the turbulence intensity, the faster the variance is dissipated, and hence the decay time scale is expected to be larger in stable and neutral conditions than in unstable conditions. That is the reason why the first coefficient is larger in the former cases and lower in the latter. It can also be noted that the parameterization proposed by Ferrero and Oettl (2019) uses the lower value. It is worth noticing that the performance of the model varies slowly with the value of \({\alpha }_{1}\) since, in Eq. (6), it is multiplied by the ratio between the distance from the source and \({H}_{mix}\), and, in this experiment, the maximum distance from the source is about 500 m, which is much lower than the boundary layer height.

The statistical analysis is performed using standard metrics: the mean and maximum value, fractional bias (FB), normalized mean square error (NMSE), correlation coefficient (R), factor of two (F2), and factor of five (F5) (Chang and Hanna 2004). If \({x}_{m}\) indicates the measured and \({x}_{c}\) the calculated values, they are defined as follows (the overline indicates the mean):

$$FB=2\frac{{\overline{x}}_{m}-{\overline{x}}_{c}}{{\overline{x}}_{m}+{\overline{x}}_{c}}$$
(7)

which indicates overestimation if FB < 0 and underestimation if FB > 0;

$$NMSE=\frac{\overline{{\left({x}_{m}-{x}_{c}\right)}^{2}}}{{\overline{x}}_{m}{\overline{x}}_{c}}$$
(8)

F2 is the fraction of data that satisfy

$$0.5\le \frac{{x}_{c}}{{x}_{m}}\le 2$$
(9)

while F5 is the fraction of data that satisfy

$$0.2\le \frac{{x}_{c}}{{x}_{m}} \le 5$$
(10)
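A minimal implementation of Eqs. (7)–(10), written as our own sketch with illustrative names, is:

```python
import numpy as np

def evaluation_metrics(x_m, x_c):
    """Standard evaluation metrics (Chang and Hanna 2004), Eqs. (7)-(10).
    x_m: measured values, x_c: calculated (modelled) values."""
    x_m, x_c = np.asarray(x_m, float), np.asarray(x_c, float)
    fb = 2.0 * (x_m.mean() - x_c.mean()) / (x_m.mean() + x_c.mean())   # Eq. (7)
    nmse = np.mean((x_m - x_c) ** 2) / (x_m.mean() * x_c.mean())       # Eq. (8)
    r = np.corrcoef(x_m, x_c)[0, 1]                                    # correlation coefficient
    ratio = x_c / x_m
    f2 = np.mean((ratio >= 0.5) & (ratio <= 2.0))                      # Eq. (9)
    f5 = np.mean((ratio >= 0.2) & (ratio <= 5.0))                      # Eq. (10)
    return {"FB": fb, "NMSE": nmse, "R": r, "F2": f2, "F5": f5}
```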

The metrics are chosen in order to account for the typical indices used in the evaluation of dispersion models, suggested in the fundamental paper by Chang and Hanna (2004) and used in the scientific community. In particular, F2, the fraction of predictions within a factor of two of the observations, is recommended by Chang and Hanna (2004) for comparing model predictions with observations. Similarly, we define F5 to better describe the degree of agreement between model results and measurements. The number of values available for each trial was 100, corresponding to the number of probes. In the following tables, we show the results of the different simulations for the mean concentration (MEAN), the concentration standard deviation (STD), and the concentration fluctuation intensity (INT). In the captions, the value of the Obukhov length is indicated. Recall that the Obukhov length indicates the stability condition: if it is positive, the atmosphere is stable; if negative, it is unstable; the greater its absolute value, the more the atmosphere tends towards neutrality.

4.2.1 Unstable case

Looking at the results of the statistical analysis for the unstable case (Trial45; Table 2), we can observe that FB is negative for the average concentration, indicating that the model overestimates. Moreover, the concentration standard deviation is overestimated, although to a lesser extent than the mean value. On the contrary, the intensity is underestimated (FB is positive). NMSE shows a high value for the mean concentration, while for the concentration fluctuations it is reduced by a factor of two, and it is much lower for the concentration fluctuation intensity, because the errors of the first two quantities compensate each other. The correlation coefficient R is good for the mean concentration (p < 0.05), low for the concentration fluctuations (even if p > 0.05), while for the concentration fluctuation intensity it exhibits an anti-correlation (p < 0.05). For F2 and F5, the model shows a good performance, especially as regards the concentration intensities.

Table 2 Statistical analysis for the unstable case Trial45 (L =  − 2.94 m), with \({\alpha }_{1}\)= 7.33 and \({\alpha }_{2}\)=1.25, for the mean concentration (MEAN), the concentration standard deviation (STD), and the concentration fluctuation intensity (INT)

As for Fig. 3, where the scatter-plots for Trial45 are presented, it appears that there is an overestimation of the mean concentration, particularly for the lower values. The standard deviations are overestimated at lower values and underestimated at higher values. As a matter of fact, the predicted standard deviations show an almost constant trend, while the observations vary by more than two orders of magnitude, indicating that the model is not able to resolve the small spatial scales of the field experiment. The concentration fluctuation intensity is slightly underestimated but totally uncorrelated. The low correlations are probably due to the difficulty of SPRAYWEB, designed to work on larger scales, in matching the spatial distribution of the observations. The turbulence parameterizations traditionally used as input to dispersion models fail at these scales, as also pointed out by Luhar (2010) and Ferrero and Maccarini (2021). However, the chosen experiment reproduces the scales of typical phenomena related to concentration fluctuations, such as the spillage of toxic or explosive substances and typical odour sources. In this work, we wanted to use a dispersion model in its standard form in order to test its limitations and see what improvements need to be implemented. In addition, the model chain is used instead of on-site measurements because, in real cases, these are not always available, and the use of models is the only possible solution for emergency cases.

Fig. 3
figure 3

Scatter-plots of the mean concentration (μg m−3), concentration standard deviation (μg m−3) and concentration fluctuation intensity for the unstable case, Trial45

4.2.2 Neutral cases

Since the value of the Obukhov length is negative for Trial14 (− 126 m) and positive for Trial46 (149 m), the two neutral trials are studied separately (Tables 3 and 4, respectively). The mean concentration of Trial14 has a negative FB index and is therefore overestimated, while the concentration standard deviation is slightly underestimated, and the concentration fluctuation intensity, which is also underestimated, shows a higher value of FB. Trial46 behaves in the same way, except for the underestimation of the mean concentration. The NMSE of Trial14 is of the order of 8 for STD and INT, and higher (15) for the mean concentration. Trial46 shows a similar behaviour for NMSE, except for the concentration standard deviation, whose NMSE is particularly high. The correlation coefficients are unsatisfactory for the three quantities and the two trials, even if in the case of Trial14 the p-value is greater than 0.05, while for Trial46 it is lower than 0.05 for the mean concentration and the concentration standard deviation. F2 and F5 for the neutral trials are satisfactory for the mean concentration and the concentration fluctuation intensity, while attaining low values for the concentration standard deviation.

Table 3 Statistical analysis for the unstable/neutral Trial14 (L =  − 126 m), with \({\alpha }_{1}\)=73.3 and \({\alpha }_{2}\)=1.25, for the mean concentration (MEAN), the concentration standard deviation (STD) and the concentration fluctuation intensity (INT)
Table 4 Statistical analysis for the stable/neutral Trial46 (L = 149 m), with \({\alpha }_{1}\)= 73.3 and \({\alpha }_{2}\)= 1.25, for the mean concentration (MEAN), the concentration standard deviation (STD), and the concentration fluctuation intensity (INT)

The scatter plots for Trial14 in Fig. 4 confirm the low correlation for the mean concentration, but less so the overestimation indicated by the FB value, since the higher values are well reproduced. The simulated concentration variance agrees with the observations, except for some cases, which are underestimated. The fluctuation intensity shows satisfactory results for at least four cases out of ten.

Fig. 4
figure 4

Scatter-plots of the mean concentration (μg m−3), concentration standard deviation (μg m−3), and concentration fluctuation intensity for the neutral case, Trial14

Figure 5 reports the scatter-plots for Trial46. In general, the model gives good performance for the three quantities, except for the correlation coefficient, as already noted. Both an underestimation of the observations and a low correlation appear. It can be observed that the highest mean values are well reproduced, while the lower values of the concentration standard deviation are better captured. The concentration fluctuation intensity is correctly reproduced only for the highest values. As for the standard deviation, it can be observed that, while the observed values vary by about three orders of magnitude, the predicted values remain within a small range. This is probably due to the underestimation of the mean concentration field (see Table 4) and the consequent low concentration gradients, which cause an underestimation of the sources of the concentration variance.

Fig. 5
figure 5

Scatter-plots of the mean concentration (μg m−3), concentration standard deviation (μg m−3), and concentration fluctuation intensity for the neutral case, Trial46

A comparison may be performed with the results obtained by Ferrero and Oettl (2019), since the experiments simulated in that paper were carried out in neutral conditions. It can be observed that the results of the statistical analysis are less satisfactory in the present case. This can be due to the lower wind speed and to the corresponding difficulty of WRF in simulating this condition, as shown in Sect. “Meteorological simulation” and in Ferrero et al. (2018).

4.2.3 Stable cases

The four stable trials (Trial07, Trial15, Trial22, Trial30) were analyzed together. The results are shown in Table 5. We find that FB is positive for the mean concentration, the concentration fluctuations, and the concentration fluctuation intensity, which means that the model tends to underestimate the observations for the three variables. NMSE has a large value for the mean concentration and a very large value for the fluctuation standard deviation. On the contrary, the concentration fluctuation intensity shows a much better performance for this index. The correlation coefficient is close to zero for the three quantities, even if the p-value is always much greater than 0.05. This suggests that the model, in the stable case and at small scale, is unable to precisely reproduce the spatial and temporal distributions of the predicted variables. F2 and F5 are relatively low for all the concentration statistics.

Table 5 Statistical analysis for the stable cases Trial07, Trial15, Trial22, and Trial30 (L = 40.1; 12.4; 8.4; 22.3 m), with \({\alpha }_{1}\)=73.3 and \({\alpha }_{2}\)=1.25, for the mean concentration (MEAN), the concentration standard deviation (STD) and the concentration fluctuation intensity (INT)

In Fig. 6, the results for the stable cases are depicted in terms of scatter-plots. The figures highlight the low correlation between predicted and observed values and suggest a very approximate agreement of the mean values for the mean concentration and concentration standard deviation, while the concentration fluctuation intensity shows a systematic underestimation. They also indicate an underestimation of the higher values and an overestimation of the lower values, both for the mean concentration and for the concentration standard deviation. It can be observed that, as in the neutral case, the concentration standard deviations show only a small variation in the general trend, but in this case a similar behaviour is shown by the mean concentrations. In the case of the concentration fluctuation intensity, some cases show a good performance, while others are largely underestimated.

Fig. 6
figure 6

Scatter-plots of the mean concentration (μg m−3), concentration standard deviation (μg m−3), and concentration fluctuation intensity for the stable cases, Trial07, Trial15, Trial22, and Trial30

5 Conclusions

In this paper, a new model for the concentration variance is proposed. The model is based on the SPRAYWEB Lagrangian dispersion model, which is modified in order to calculate, besides the mean concentration, the concentration variance. From these two quantities, we are able to calculate the concentration fluctuation intensity. The results of the model simulations are compared with field measurements, and a statistical analysis is carried out. The results can be considered encouraging, even if the agreement between measured and simulated values is not satisfactory. The model should be improved, particularly by increasing its resolution. The model seems to be promising in calculating the second-order statistics of concentration. Some discrepancies and, in particular, low values of the correlation coefficients are found. This poor result can be attributed to the difficulty of the model in correctly simulating the concentration and the concentration fluctuations at each specific probe, due to the very small scale of the experiments and to the reciprocal proximity of the samplers. Another reason is the large wind direction variability and the multiple concentration peaks that occur during the experiments (Singh and Sharan 2013), despite their very short duration (about 10 min). Assessing the results according to the range from the source, as done in Ferrero and Oettl (2019), can be useful for a detailed analysis of the effect of the dissipation time scale and of its parameterization, while the aim of the present work is to evaluate the performance of an operational model for practical applications. Concerning the dependency on the mean wind, the experiments are characterized by values between about 1 and 2 ms−1, and one case with a wind speed equal to 2.7 ms−1, which are unlikely to lead to different results.

The model proposed here does not improve the results obtained in our previous works, and sometimes gives worse ones, but it could be an important tool to forecast the concentration fluctuation dispersion in real, complex conditions. These results must be improved before the model can be applied for regulatory purposes.

The results obtained in our previous works are better than those of the present paper. In the case of Ferrero and Oettl (2019), both FB and NMSE for the mean concentration, concentration standard deviation, and concentration intensity are lower than in the present case, while the correlation coefficient is better for the mean concentration in the previous work, but similar for the concentration standard deviation and concentration intensity. It is worth mentioning that the field experiment taken into account for the comparison is characterized by low-wind speed conditions and by both stability and instability, while the simpler models were applied to neutral conditions only (Ferrero et al. 2017; Ferrero and Oettl 2019). One way to improve the model performance could be to prescribe turbulence parameterizations adapted to such small temporal and spatial scales. Also, the parameterization of the dissipation time scale for the concentration variance could be changed. However, it is worth noticing that the new model is drastically different from the previous ones (Ferrero and Oettl 2019), since the dispersion of the concentration variance is performed at the same time as that of the mean concentration, while in the previous models this was done offline, once the mean concentration had reached stationary conditions. Comparing our results with the ones obtained by Singh and Sharan (2013) for the mean concentrations, we can observe that the performance of SPRAYWEB is slightly worse in stable conditions and worse in unstable conditions. It is worth mentioning that the aim of this work is to test the 3D SPRAYWEB model driven by a meteorological model (WRF in this case). For this reason, we do not use the measured wind as we did in previous works, but we provide the input parameters for the dispersion model from the output of the WRF model. Furthermore, the performance of WRF is tested against measurements, giving satisfactory results except for the overestimation of the wind speed in some cases.

It is worth mentioning that SPRAYWEB can be used successfully in complex terrain and inhomogeneous flows and that WRF can give good results, even with several nestings between successive grids. Nevertheless, in the present case, WRF is unable to correctly reproduce the observed low wind speeds, as already shown in previous work (Ferrero et al. 2018). As a consequence, the SPRAYWEB simulation results show a poor agreement between the modelled and observed concentrations.

The model proposed here is promising in giving a realistic evaluation of the 90th percentile, thanks to the estimation of the concentration fluctuation intensity, without a priori hypotheses such as the peak-to-mean approach.