Handbook of Geomathematics, pp 1–21

# Model-Based Visualization of Instationary Geo-Data with Application to Volcano Ash Data

## Abstract

Driven by today’s supercomputers, larger and larger sets of data are created during numerical simulations of geoscientific applications. Such data often describes instationary processes in three-dimensional domains in terms of multi-dimensional data. Due to limited computer resources, it might be impossible or unpractical to store all data created during one simulation, which is why several data reduction techniques are often applied (e.g., only every *n*th time-step is stored). Intuitive scientific visualization techniques can help to better understand the structures described by transient data. Adequate reconstruction techniques for the time-dimension are needed since standard techniques (e.g., linear interpolation) are insufficient for many applications. We describe a general formalism for a wide class of reconstruction techniques and address aspects of quality characteristics. We propose an approach that is able to take arbitrary physical processes into account to enhance the quality of the reconstruction. For the eruption of the volcano Eyjafjallajökull in Iceland in the spring of 2010, we describe a suitable reduced model and use it for model-based visualization. The original data was created during a COSMO-ART simulation. We discuss the reconstruction errors, related computational costs, and possible extensions. A comparison with linear interpolation clearly motivates the proposed model-based reconstruction approach.

## Keywords

Linear interpolation · Vertical wind · Time step size · Reconstruction approach · Operational weather forecast

## 1 Introduction

Nowadays, geoscientific phenomena such as global warming and the greenhouse effect are of public interest, and much research is done in these fields. The related questions are interdisciplinary and involve several applied sciences, among them meteorology, geochemistry, and geomorphology. Consequently, a great diversity of models and corresponding solution procedures is used, and large data sets are typically created, often several gigabytes of data or considerably more. The ever-growing computing power available to researchers allows for more complex models, higher accuracy in the computations, and also the creation of more data.

Most often, the analysis of such data requires a complex working sequence, and recovering an intuitive and thorough comprehension of the implicitly described features of interest is non-trivial. Tailored imaging tools can facilitate this step of cognition by filtering the existing data and displaying only the important subsets with adequate techniques of scientific visualization (Bonneau et al. 2006). Typically, the investigated data is multi-dimensional (i.e., multiple properties such as temperature and wind are given) and defined on a three-dimensional domain. Transient physical processes are represented by a description of the spatial state at a selection of points in time within the time horizon. A common approach to keep the total amount of data economically justifiable is to store the state of the system only at a few points in time, e.g., one description per hour. Through this data reduction, a large portion of the data is eliminated irrecoverably, unless the same calculations are repeated.

In this article, we describe a data reconstruction approach that incorporates a physical model in addition to the available data. By considering such a model, a priori knowledge can be exploited, in contrast to pure interpolation techniques. In the right panel of Fig. 1, the reconstructed trajectory of the particle under a reduced model is shown. The trajectory is almost circular, but the end positions of the reconstructions, indicated by green circles, are not located at the given particle positions (red circle). This gap between the reconstructed final state and the given data arises from the fact that the deployed reduced model is only an approximation of the original model. Certainly, this discrepancy can be minimized by different techniques, which will be addressed later. While standard interpolation techniques can be applied universally, the model-based reconstruction approach requires an adequate model. Obviously, the quality and also the related computational costs of the reconstruction can be controlled by the specific choice of this model. We describe the general reconstruction approach based on models that are given by means of partial differential equations. Many phenomena in the geosciences can be modeled in terms of instationary partial differential equations, which makes this approach quite universal. High quality can be achieved only if the applied reduced physical model is adequate for the phenomenon contained in the data.

As a proof of concept, we investigate the scenario of the eruption of the volcano Eyjafjallajökull in Iceland in April 2010. A large amount of volcanic ash was injected into the atmosphere and transported rapidly towards Europe. High-fidelity simulations of this scenario were calculated with the on-line coupled model system COSMO-ART (Vogel et al. 2009), which is described in Sect. 3. During the numerical simulation run, data files containing the wind and the distribution of six different ash species were stored in a 1-h stepping. We propose a simplified physical model for the reconstruction of the evolution of the ash distributions between these existing data sets. In Sect. 4 we describe the details of the applied reduced model and the simplifications made. With this system, the reconstruction can be computed very efficiently on a desktop computer. The results discussed in Sect. 5 clearly motivate the proposed approach of a problem-dependent reconstruction model.

## 2 Concept of Reduced Model for Visualization

In this section, we give an abstract description of the data reconstruction task for visualization of instationary data and discuss related quality characteristics. We demonstrate that the standard approach of linear interpolation fits into this formalism and motivate the use of a reconstruction approach that makes use of a physical model in addition to the given data.

An instationary process *u*: [0, *T*] → *X* should be visualized for the interval [0, *T*]. Here, *X* denotes some arbitrary space in which the state of the system can be described (e.g., \(X = \mathbb{R}\) in case of a scalar-valued solution such as temperature). The physical model that exactly describes the aforementioned process is denoted by *F*. In this case, the exact model *F*, the initial state *u*(0), and the solution *u* fulfill the relation

$$F[u(0)](t) = u(t)\quad \text{for all } t \in [0,T],$$

i.e., *F* prolongates the state of the system from *u*(0) at time *t* = 0 through time such that at time *t* the determined state equals *u*(*t*). A very general description of an approximative model can be given in a similar fashion. The approximative model *Φ* fulfills the relation

$$\Phi [\,\cdot\,](t) \approx u(t),$$

where the argument list indicates that *Φ* can include several parameters (e.g., states *u*(*t*_{ i }) at some points in time *t*_{ i }, physical parameters such as a viscosity parameter, and so forth).

The approximative model *Φ* will be exploited as an interpolation between known states of the exact model *F*. The usage of a physical model equation to determine the interpolation motivates the term model-based reconstruction. Important aspects of an approximation are its accuracy, robustness and stability, parameter dependence, coupling, and computational effort. We denote the interpolation operator for the solution *u* between time steps *t*_{ i − 1} and *t*_{ i }, depending on the states *u*(*t*_{ i − 1}) and *u*(*t*_{ i }), by \(\Phi [t_{i-1},t_{i},u(t_{i-1}),u(t_{i})](t)\). Then, the accuracy of the approximative model can be analyzed using a suitable norm of the deviation, \(\vert \vert \Phi [t_{i-1},t_{i},u(t_{i-1}),u(t_{i})] - u\vert \vert \). If the solution is only available at specific states, a variant is to analyze interpolated states such as \(\vert \vert \Phi [t_{i-1},t_{i+1},u(t_{i-1}),u(t_{i+1})](t_{i}) - u(t_{i})\vert \vert \). Since the approximative models are iterated for each interpolation interval, for example in [*t*_{ i − 1}, *t*_{ i }), additional continuity conditions such as \(\Phi [t_{i-1},t_{i},u(t_{i-1}),u(t_{i})](t_{i-1}) = u(t_{i-1})\) and \(\Phi [t_{i-1},t_{i},u(t_{i-1}),u(t_{i})](t_{i}) = u(t_{i})\) seem desirable. They should not be overrated, however, since continuity is always achievable using simple interpolation schemes, possibly at the cost of overall accuracy. This motivates approximation schemes based on a simplified physical model, which are expected to outperform general interpolation concepts: they provide more information to the visualization than traditional approaches that exploit data states only, namely the simplified physical model itself, which links the visualization to the numerical simulation.

The aspect of robustness is mostly determined by the numerical method and the numerical parameters chosen for the approximation. We expect the approximative model to be solvable and stable for any valid or slightly disturbed state data, yielding valid results. The treatment of instationary boundary conditions or other external influences might require special care to improve the robustness of the computed approximative model, as discussed in the following text.

The approximative models need state information for the computation. Since they are simplified physical models, we generally expect that only partial information of the full state is needed. This data austerity decreases the amount of data to be managed for visualization, but in general we cannot expect continuity if the final state information remains unused, as in fast-forward schemes. This is no great loss, since the results can be amended using simple linear interpolation late in the interval. The amount of coupling within the interpolation scheme is an important issue for the computational effort, and especially for a potential speed-up through parallelization. While linear interpolation does provide a coupling from the starting state to the end state, it is a trivial approach applicable only to slowly changing phenomena, as we will see in the following, where we give examples of models that are covered by this abstract formulation.

### 2.1 Linear Interpolation Reconstruction

We denote by \(\Phi [t_{i-1},t_{i},u(t_{i-1}),u(t_{i})](t)\) for *t* ∈ [*t*_{ i − 1}, *t*_{ i }] the linear interpolation between the two states *u*(*t*_{ i − 1}) and *u*(*t*_{ i }),

$$\Phi [t_{i-1},t_{i},u(t_{i-1}),u(t_{i})](t) = \frac{t_{i} - t}{t_{i} - t_{i-1}}\,u(t_{i-1}) + \frac{t - t_{i-1}}{t_{i} - t_{i-1}}\,u(t_{i}).$$

By definition, it is guaranteed that the reconstructed data matches the two states at the corresponding points in time. In the following, we will sometimes omit the arguments in square brackets for better readability. As illustrated by Fig. 2, the linear interpolation yields acceptable results for slowly changing phenomena, but does not perform well in general. This limits the use of this approach to either slowly changing data or the amendment of near-accurate results, such as those we expect to gain from approximative models.
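The linear reconstruction above can be sketched in a few lines (a minimal NumPy sketch for illustration, not the authors' implementation; the function name is ours):

```python
import numpy as np

def linear_reconstruction(t, t_prev, t_next, u_prev, u_next):
    """Linearly interpolate the state u at time t in [t_prev, t_next].

    Realizes Phi[t_prev, t_next, u_prev, u_next](t); by construction the
    reconstruction matches the given states at both interval end points.
    """
    theta = (t - t_prev) / (t_next - t_prev)  # normalized time in [0, 1]
    return (1.0 - theta) * u_prev + theta * u_next

# Example: two scalar-field snapshots one hour apart.
u0 = np.array([0.0, 2.0, 4.0])
u1 = np.array([1.0, 0.0, 4.0])
mid = linear_reconstruction(0.5, 0.0, 1.0, u0, u1)  # -> [0.5, 1.0, 4.0]
```

Note that the interpolant is a convex combination of the two snapshots, which is exactly why features present in only one snapshot fade instead of moving.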

### 2.2 Model-Based Reconstruction

An instationary partial differential equation on a domain *Ω* and time interval [*t*^{start}, *t*^{end}] with *t*^{start} < *t*^{end} has the form

$$\partial _{t}u + F(u) = f\quad \text{in } \Omega \times (t^{\text{start}},t^{\text{end}}],\qquad u(t^{\text{start}}) = U,$$

where the differential operator *F*, the external force term *f*, and the initial state *U* are defined according to the considered scenario. We assume the problem to be well-posed and apply a discretization method by means of a time-stepping scheme, given on a partitioning \(t^{\text{start}} = t^{(0)} < t^{(1)} < \cdots < t^{(N)} = t^{\text{end}}\) of the time interval. In that case, an approximation *u*^{(i)} of the solution at time *t*^{(i)} can be calculated successively by means of a corresponding solution operator *A*^{(i)} for any \(i = 1,\ldots,N\):

$$u^{(i)} = A^{(i)}\big(u^{(i-1)}\big),\qquad u^{(0)} = U.$$

For any time *t*^{(i)} ∈ [*t*^{start}, *t*^{end}], the reconstructed state of the process is given by the approximation *u*^{(i)}. This forward scheme uses only the initial state *U* and the solution operator, which does not take any future states into account. Therefore, this reconstruction approach will not ensure that a given final state of the reconstruction interval is achieved.

For any two states corresponding to the points in time *t*_{ i − 1} and *t*_{ i } of some given data, a partitioning can be inscribed and approximate solutions can be calculated as previously described. Since the underlying PDE and its discretization can be chosen arbitrarily, this reconstruction approach is very generic and can be applied to many problems. In the next section, we give an example based on the convection-diffusion problem.
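The successive application of the solution operators on an inscribed partitioning can be sketched generically (Python sketch; `step_operator` and the toy decay model are illustrative assumptions, not part of the chapter):

```python
import numpy as np

def model_based_reconstruction(u_start, t_start, t_end, n_steps, step_operator):
    """Reconstruct intermediate states by iterating a solution operator.

    step_operator(u, t, dt) plays the role of one application of A^(i):
    it advances the state u from time t to t + dt under the reduced model.
    Only the initial state is used; a given final snapshot is not enforced.
    """
    dt = (t_end - t_start) / n_steps
    states = [u_start]
    u, t = u_start, t_start
    for _ in range(n_steps):
        u = step_operator(u, t, dt)
        t += dt
        states.append(u)
    return states

# Toy reduced model: exponential decay du/dt = -u, advanced by explicit Euler.
decay = lambda u, t, dt: u + dt * (-u)
traj = model_based_reconstruction(np.array([1.0]), 0.0, 1.0, 100, decay)
# traj[-1] approximates exp(-1)
```

The same skeleton applies unchanged to the convection-diffusion step operator used later; only `step_operator` is swapped.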

## 3 Scenario of Volcano Ash Data

In this section, we present the scenario of the eruption of the volcano Eyjafjallajökull in Iceland in April 2010. The details related to the model system that was used to calculate the high-fidelity simulation are given. Subsequently, the resulting output data of the volcanic ash that is the starting point of the model-based reconstruction is described.

### 3.1 The Simulation in the Model System COSMO-ART

The COSMO model is the operational weather forecast model of the German Weather Service DWD (Deutscher Wetterdienst). It is a non-hydrostatic regional model based on the thermo-hydrodynamical equations describing compressible flow in a moist atmosphere. Details about the dynamical core and the numerical scheme can be found in Steppeler et al. (2003) and Baldauf et al. (2011).

COSMO-ART (Vogel et al. 2009; Bangert et al. 2012) is an extension of COSMO, where ART stands for Aerosols and Reactive Trace gases. It is a comprehensive model system to simulate the spatial and temporal distributions of reactive gaseous and particulate matter. The model system is mainly used to quantify the feedback processes between aerosols and the state of the atmosphere on the continental to the regional scale with two-way interactions between different atmospheric processes.

The model system treats secondary aerosols as well as directly emitted components like soot, mineral dust, sea salt, volcanic ash, and biological material. Secondary aerosol particles are formed from the gas phase; therefore, a complete gas phase mechanism is included. Modules for the emissions of biogenic precursors of aerosols, mineral dust, sea salt, biomass burning aerosol, and pollen grains are included. For the treatment of secondary organic aerosol (SOA) chemistry, the volatility basis set (VBS) was included (Athanasopoulou et al. 2013). Wet scavenging and in-cloud chemistry are taken into account (Knote and Brunner 2013). Processes such as emissions, coagulation, condensation (including the explicit treatment of soot aging), deposition, washout, and sedimentation are taken into account. In order to simulate the interaction of the aerosol particles with radiation and the feedback of this process on the atmospheric variables, the optical properties of the simulated particles are parameterized based on detailed Mie calculations. New methods to efficiently calculate the photolysis frequencies and the radiative fluxes based on the actual aerosol load were developed based on the GRAALS radiation scheme (Ritter and Geleyn 1992) and implemented in COSMO-ART. To simulate the impact of the various aerosol particles on cloud microphysics and precipitation, COSMO-ART was coupled with the two-moment cloud microphysics scheme of Seifert and Beheng (2006), using comprehensive parameterizations for aerosol activation and ice nucleation.

The advantage of COSMO-ART with respect to other models is that identical numerical schemes and parameterizations are used for identical physical processes such as advection and turbulent diffusion. This avoids truncation errors and model inconsistencies. COSMO is verified operationally by DWD. The model system can be embedded by one-way nesting into individual global-scale models such as the GME model or the IFS model. All components of the model system are coupled on-line with time steps on the order of tens of seconds. Nesting of COSMO-ART within COSMO-ART is possible. Typical horizontal grid sizes vary between 2.8 and 28 km.

For the simulation of the volcanic ash, the model domain is consistent with the domain covered by the operational weather forecast of Deutscher Wetterdienst for Europe. This means 665 × 657 × 40 grid points. The horizontal resolution is 0.0625^{∘}; in the vertical, the resolution ranges from 20 m close to the surface up to several hundred meters at the top of the domain at 20 km height. The time step is 40 s. The reference simulation was performed for 120 h. The volcano emissions were represented by 6 classes of particles with diameters between 1 and 30 μm. Details about the parameterization of the source height and the source strength can be found in Vogel et al. (2013). Sinks for the ash particles are wet and dry deposition as well as sedimentation. The initial and boundary conditions for the meteorological variables were taken from the operational runs of the GME. Since the output of one time step is on the order of 1 GB, the output is restricted to a 1-h stepping.
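The quoted output size per time step can be checked with a back-of-the-envelope estimate (a sketch assuming, hypothetically, double-precision storage of the six ash species plus the three wind components; the actual GRIB encoding differs):

```python
# Grid of the operational forecast domain used for the simulation.
nx, ny, nz = 665, 657, 40
points = nx * ny * nz

# Assumed per-point payload: 6 ash species + 3 wind components, 8 bytes each.
fields = 6 + 3
bytes_per_value = 8
size_gb = points * fields * bytes_per_value / 1e9
print(f"{size_gb:.2f} GB per output time step")  # on the order of 1 GB
```

With these assumptions the estimate lands at roughly 1.3 GB, consistent with the "order of 1 GB" quoted above.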

The numerical simulation of this scenario was calculated on the HP XC3000 computer system hosted at the Steinbuch Centre for Computing (SCC) at the Karlsruhe Institute of Technology (KIT). On this machine, the calculation using 64 CPUs (Intel® Xeon® Processor E5540, 2.53 GHz, quad-core) takes about 16 h.

### 3.2 Description of the Model Output

The COSMO-ART output data describes the state of the atmosphere in a three-dimensional domain *Ω* over a time period of 5 days, *I* = [0, 120] in units of hours. There is one snapshot given at *t*_{ n } = *n*, \((n = 0,\ldots,120)\), i.e., with a 1-h stepping.

The horizontal domain extends from 20^{∘}00′0″S, 18^{∘}00′0″W to 21^{∘}00′0″N, 23^{∘}30′0″E. The height above sea level is given in pressure levels. For values near the earth's surface, the pressure levels are aligned to the orography. The ash and wind fields are given on this grid in the GRIB data format (Wor 2003). Figure 3 shows a visualization of the ash plume developing over Europe.

The evaluation and interpretation of the model output is usually done using two-dimensional horizontal or vertical cross sections. However, due to the huge amount of data and the time dependency of the atmospheric processes, only a small fraction of the data can actually be inspected. This limits the understanding of the interaction of the atmospheric processes with the ash plume. With three-dimensional visualizations as shown in Fig. 3, complex spatial structures can be experienced intuitively. For instationary processes, new methods for displaying the data within a reasonable time-frame are urgently needed. Such a method for the reconstruction of the time evolution is described in the following.

## 4 Low-Fidelity Model for the Dispersion of a Volcano Plume

In the previous section, a model for the evolution and dispersion of the Eyjafjallajökull plume was presented. The density distributions of the different particle species and the wind fields calculated with this model were exported into files, one file per hour. In the following, we present a low-fidelity model for the reconstruction of the ash plume dispersion from this hourly data. While in the introductory example (see Fig. 1) the trajectory of one single particle was reconstructed, we are now interested in the distribution of the particle densities and apply a partial differential equation to describe its development. First, we describe the structure of the COSMO-ART output data, which is the starting point of the model-based reconstruction. Subsequently, we give details of the method for this scenario, including the physical model for the dominating processes.

### 4.1 Conversion of the Mesh Structure

Each horizontal layer *k* is assigned its average height *z*(*k*); see Fig. 4. This results in a discrete representation \(\hat{\Omega }\) of the domain *Ω* = [0, 4610] × [0, 4550] × [0, 22.2] in units of kilometers.

### 4.2 The Continuous Model

In this section, we describe a simplified continuous model that we use in the following to reconstruct the motion of the volcano ash distribution. This model is represented by a parabolic partial differential equation capturing the effects of advection and diffusion. One major simplification is that the three-dimensional domain *Ω* is replaced by a set of horizontal slices *Ω*_{ k } which are treated independently. This makes it possible to calculate the additional snapshots very efficiently on a workstation computer instead of a high-performance parallel computer.

The reduced model contains artificial diffusion, which is included not only to represent molecular diffusion, but also to represent mixing effects due to physical processes that are not resolved, such as turbulence and the omitted vertical advection. From a numerical point of view, the problem is more stable due to the higher diffusion. It must be noted that the correct level of diffusion is not known a priori. Instead, it is a model parameter on which the approximation quality and also the related computational costs depend. For the numerical tests described later, we determined a good choice for this parameter by solving an optimization problem.

The volcano, as the only source of particles, deserves particular attention. It spreads ash particles into the atmosphere continuously in time, which is described indirectly by the given COSMO-ART output data. For the reduced model, the effect of the volcano eruption has to be considered: on the one hand, the reconstructed ash distributions should correspond to the given data as closely as possible; on the other hand, the particle distribution should be governed by the model equation. One way to include this effect in the reduced model would be a source term for the ash. Since no additional information related to the volcano should be used for the reconstruction, we instead include a localized interpolation and smoothing step in the discrete model.

The dispersion of the particle density *ρ* in each horizontal layer is described by a two-dimensional convection-diffusion problem. For the reconstruction in the time interval *I*_{ n }, the particle density *ρ*_{ n } is initialized by the respective snapshot at time *t*_{ n }. The partial differential equation for each horizontal level *k* and time interval *I*_{ n } has the form

$$\partial _{t}\rho +\hat{\mathbf{v}} \cdot \nabla \rho -\nu \Delta \rho = f\quad \text{in } \Omega _{k} \times I_{n},\qquad \rho = 0\ \text{on } \partial \Omega _{k},\qquad \rho (t_{n}) =\rho _{n},\qquad (8)$$

with the viscosity *ν*. The zero boundary conditions can be justified by the vanishing ash densities at the domain boundary at all times in the COSMO-ART data. The wind field \(\hat{\mathbf{v}}\) is calculated from the snapshots by linear interpolation in time,

$$\hat{\mathbf{v}}(t) = \frac{t_{n+1} - t}{t_{n+1} - t_{n}}\,\mathbf{v}(t_{n}) + \frac{t - t_{n}}{t_{n+1} - t_{n}}\,\mathbf{v}(t_{n+1}).$$

### 4.3 Discretization

In this section, a standard finite difference discretization of problem (8), based on the explicit Euler scheme, is described. The resulting algorithm allows for efficient calculation of the particle concentrations at intermediate points in time between *t*_{ n } and *t*_{ n + 1}. The numerical scheme is easy to implement and leads to a fast algorithm. Details can be found in Hindmarsh et al. (1984).

For a given time step size *δ*_{ t } > 0, the following scheme has to be solved for each point in time \(\tau _{m}^{n} = t_{n} + m\delta _{t}\) with \(m = 1,2,\ldots,M\), where \(M := (t_{n+1} - t_{n})/\delta _{t}\):

$$\rho (\tau _{m}^{n}) =\rho (\tau _{m-1}^{n}) +\delta _{t}\left(\nu \Delta _{h}\rho (\tau _{m-1}^{n}) -\hat{\mathbf{v}}(\tau _{m-1}^{n}) \cdot \nabla _{h}\rho (\tau _{m-1}^{n}) + f\right),\qquad (10)$$

where \(\Delta _{h}\) and \(\nabla _{h}\) denote standard finite difference approximations of the Laplacian and the gradient on the grid with spacing *h*. We choose *f* ≡ 0, since the effect of the volcano eruption is considered in a subsequent interpolation and smoothing step described later. The densities at all boundary nodes are fixed to zero and are not changed at any stage of the numerical procedure.
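One explicit Euler step of this kind can be sketched as follows (a minimal NumPy sketch with central differences and zero Dirichlet boundaries; illustrative only, not the authors' C++ implementation):

```python
import numpy as np

def euler_step(rho, vx, vy, nu, h, dt):
    """One explicit Euler step for d_t rho + v.grad(rho) - nu*lap(rho) = 0
    on a uniform grid with spacing h and zero Dirichlet boundary values.
    Central differences are used for both transport and diffusion terms.
    """
    new = rho.copy()
    lap = (rho[:-2, 1:-1] + rho[2:, 1:-1] + rho[1:-1, :-2] + rho[1:-1, 2:]
           - 4.0 * rho[1:-1, 1:-1]) / h**2
    dx = (rho[2:, 1:-1] - rho[:-2, 1:-1]) / (2.0 * h)
    dy = (rho[1:-1, 2:] - rho[1:-1, :-2]) / (2.0 * h)
    new[1:-1, 1:-1] = rho[1:-1, 1:-1] + dt * (nu * lap
                       - vx[1:-1, 1:-1] * dx - vy[1:-1, 1:-1] * dy)
    new[0, :] = new[-1, :] = new[:, 0] = new[:, -1] = 0.0  # Dirichlet
    return new

# Pure-diffusion smoke test: a point blob spreads but conserves mass
# while it stays away from the boundary (dt below the diffusion limit
# h^2/(4*nu) = 0.25 for this zero-velocity test).
n, h, nu, dt = 33, 1.0, 1.0, 0.2
rho = np.zeros((n, n)); rho[16, 16] = 1.0
v = np.zeros((n, n))
for _ in range(10):
    rho = euler_step(rho, v, v, nu, h, dt)
```

The snapshot fields and the time-interpolated wind of the chapter would simply be passed in as `rho`, `vx`, and `vy`.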

### 4.4 Stability

The explicit scheme (10) is stable only if the time step size fulfills the condition

$$\delta _{t} \leq \frac{4\nu }{\|\mathbf{v}\|^{2}}.\qquad (11)$$

In addition, spurious oscillations leading to negative particle concentrations can occur if the grid Péclet number \(P = \|\mathbf{v}\|h/(2\nu )\) is larger than one. *P* does not depend on the time step size *δ*_{ t }, but on the velocity field \(\mathbf{v}\) as well as on the grid spacing *h*, which in our case should not be changed (e.g., by grid refinement). Therefore, the Péclet number can only be controlled by means of the viscosity *ν*. A viscosity high enough to guarantee a Péclet number smaller than one would lead to a very strong mixing effect. This mixing would be much stronger than needed and would lead to a non-physical, overemphasized smoothing of the particle densities. Therefore, we instead apply a post-processing procedure in which negative particle concentration values are raised to zero.
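The stable step-size choice and the post-processing of negative concentrations can be sketched together (Python sketch; the bound δ_t ≤ 4ν/‖v‖² matches the iteration count N = T‖v‖²/(4ν) quoted in Sect. 5.2, and the safety factor is our illustrative addition):

```python
import numpy as np

def stable_time_step(v_max, nu, safety=0.9):
    """Largest stable explicit time step, dt <= 4*nu / |v|^2,
    scaled by a safety factor below one."""
    return safety * 4.0 * nu / v_max**2

def clip_negative_concentrations(rho):
    """Post-processing: raise unphysical negative concentrations to zero."""
    return np.maximum(rho, 0.0)

dt = stable_time_step(v_max=0.05, nu=0.005)   # km/s and km^2/s, illustrative
rho = clip_negative_concentrations(np.array([-0.02, 0.3, 0.0]))
```

Note that the clipping adds a small amount of mass where oscillations occurred; in this scenario that is accepted in exchange for a physically reasonable viscosity.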

### 4.5 Discrete Volcanic Particle Injection Model

For each horizontal layer *k*, the linear interpolation in time of the particle concentration at the volcano's position \(\mathbf{x}_{V }^{k} \in \Omega _{k}\) is given by

$$\rho _{V }^{k}(t) = \frac{t_{n+1} - t}{t_{n+1} - t_{n}}\,\rho (\mathbf{x}_{V }^{k},t_{n}) + \frac{t - t_{n}}{t_{n+1} - t_{n}}\,\rho (\mathbf{x}_{V }^{k},t_{n+1}).$$

The weights *w*_{ ij } represent a discretized Gaussian bell function on an 11 × 11 stencil. This stencil is located at the volcano position and covers a sub-domain denoted by \(\tilde{\Omega }_{k}\). At the grid points \(\mathbf{x}_{ij} \in \tilde{ \Omega }_{k}\), the weights are defined by \(w_{ij} :=\exp (-\frac{8} {47}\vert \mathbf{x}_{ij} -\mathbf{ x}_{V }\vert ^{2}) \in [0,1]\) and tend to zero at the boundary of the stencil; see Fig. 6. In each time step, after the solution has been updated according to Eq. (10), it is modified by

$$\rho (\mathbf{x}_{ij},\tau _{m}^{n}) \leftarrow (1 - w_{ij})\,\rho (\mathbf{x}_{ij},\tau _{m}^{n}) + w_{ij}\,\rho _{V }^{k}(\tau _{m}^{n})\quad \text{for all } \mathbf{x}_{ij} \in \tilde{\Omega }_{k}.$$
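The stencil construction and the localized blending can be sketched as follows (NumPy sketch; the blending line is our reading of the described interpolation-and-smoothing step, not necessarily the authors' exact update, and the grid sizes are illustrative):

```python
import numpy as np

def gaussian_stencil(h, size=11, coeff=8.0 / 47.0):
    """Discretized Gaussian bell w_ij = exp(-coeff * |x_ij - x_V|^2) on a
    size x size stencil centred at the volcano position, grid spacing h."""
    r = (size - 1) // 2
    offsets = h * np.arange(-r, r + 1)
    dist2 = offsets[:, None] ** 2 + offsets[None, :] ** 2
    return np.exp(-coeff * dist2)

def inject_volcano(rho, i, j, w, rho_volcano):
    """Blend the time-interpolated volcano concentration rho_volcano into
    the field around grid index (i, j), weighted by the stencil w."""
    r = (w.shape[0] - 1) // 2
    patch = rho[i - r:i + r + 1, j - r:j + r + 1]
    rho[i - r:i + r + 1, j - r:j + r + 1] = (1.0 - w) * patch + w * rho_volcano
    return rho

w = gaussian_stencil(h=1.0)          # centre weight is exp(0) = 1
rho = inject_volcano(np.zeros((40, 40)), 20, 20, w, rho_volcano=5.0)
```

At the stencil centre the field is replaced by the interpolated value; towards the stencil boundary the weights, and with them the modification, fade to (almost) zero.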

### 4.6 Implementational Aspects

We implemented a discrete model of the previously described scenario in C++. The conversion of the original COSMO-ART data from the GRIB format to a structured VTK format was done in a preprocessing step. The work-flow for the reconstruction of the particle concentration between any two successive time steps *t* _{ n } and *t* _{ n + 1} is listed in Algorithm 1.

### Algorithm 1 Work-flow of the implementation

Read initial snapshot *ρ*(*t* _{ n }) using VTK library

Determine the highest stable time step size according to Eq. (11)

Setup stencil for discrete scheme

**for** *m* = 1, …, *N* **do**

Update ash particle concentration \(\rho _{\text{lfm}}(\tau _{m}^{n})\) in any horizontal layer

**if** \(m\,\mathrm{mod}\,N_{vis} == 0\) **then**

Visualize \(\rho _{\text{lfm}}(\tau _{m}^{n})\)

**end** **if**

**end** **for**

Calculate error \(\left \|\rho _{\text{lfm}}(\tau _{N}^{n}) -\rho (t_{n+1})\right \|\) if required

For this scenario, in each snapshot of the original data at least one half of the domain has vanishing particle concentrations. Taking this fact into account, the computational costs for the reconstruction of the densities can be reduced significantly, since the cost scales linearly with the number of nodes that have to be updated in each time step. In our implementation, we determined the smallest rectangle in each horizontal layer that contains all non-zero particle concentrations of both the initial and the target snapshot (1 h later). We applied the numerical scheme only to the nodes in these sub-domains, which led to a fraction of the original computational costs. In particular, for the first intervals *I*_{ n } with *n* < 50, the ash particles are very strongly localized.
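Determining the active sub-domain can be sketched in a few lines (NumPy sketch; function name and half-open slice convention are ours):

```python
import numpy as np

def active_bounding_box(rho_start, rho_end):
    """Smallest index rectangle containing all non-zero concentrations of
    the initial and the target snapshot; the update loop is then
    restricted to this sub-domain."""
    mask = (rho_start != 0) | (rho_end != 0)
    if not mask.any():
        return None
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    return rows[0], rows[-1] + 1, cols[0], cols[-1] + 1  # half-open slices

a = np.zeros((100, 100)); a[10, 20] = 1.0
b = np.zeros((100, 100)); b[12, 25] = 1.0
box = active_bounding_box(a, b)  # -> (10, 13, 20, 26)
```

In practice the box would be padded by the distance the wind can transport ash within one reconstruction interval.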

The presented low-fidelity model is an approximation of the original COSMO-ART model, and therefore even its exact solution carries some error, as discussed in the next section. Hence, it is justifiable to calculate only approximate solutions with moderate accuracy in order to increase the performance further. In numerical tests, we verified that the reconstructed data based on single-precision computations deviates only on the order of 0.01 % from the double-precision results. This motivates the use of highly performant hardware for the data reconstruction (e.g., GPUs) that reaches its peak performance in single precision.

## 5 Numerical Results

In this section, we investigate the results of numerical test series with respect to the quality and also the related computational costs. We compare data reconstructions calculated by linear interpolation and by the low-fidelity model approach both qualitatively and quantitatively.

### 5.1 Qualitative Comparison

Figure 7a shows the reconstruction by linear interpolation at the intermediate time step *n* = 97. At that state, the linear interpolation corresponds to the arithmetic mean of the two snapshots and therefore contains features of both snapshots at *n* = 96 and *n* = 98. The physical evolution process is not captured correctly: high particle concentrations are reconstructed only at places where both the initial and the final snapshot have such high concentrations. In contrast, the reconstruction by means of the low-fidelity model can structurally reproduce the evolution process, see Fig. 7b. A simulation started at *n* = 96 indicates good agreement with the original data at *n* = 97 and even at *n* = 98.

In the snapshots of the original data at *n* = 80 (left) and at *n* = 81 (right) in Fig. 8, two small separated ash clouds can clearly be seen. In the model-based visualization, the isosurfaces move continuously from the start position to the target position, as indicated by the visualization after half of the interval time, *n* = 80.5, in the lower panel. In contrast, the concentration values computed by linear interpolation at that time fall below the iso-value used for the visualization. This physically incorrect artifact leads to a vanishing ash cloud in the upper panel of Fig. 8.

As additional material, a 3D animation of this scene (“Comparison Volcano Ash Distribution”) can be found on the Springer website http://www.springerimages.com. It shows a comparison between the original non-interpolated data, the data reconstructed by linear interpolation, and the data reconstructed using the low-fidelity model. The ash particle concentrations in the animation are represented by isosurfaces, similar to Fig. 3. The wind is indicated by colored arrows at an average height of 4.8 km above sea level. For clarity, the vertical axis is scaled by a factor of 75 relative to the original height above sea level. The linear interpolation in time gives the impression of a pulsating ash dispersion, arising from the artifact described in the previous paragraph. The model-based visualization shows a flowing transition from one snapshot to the next, with the exception of a small correction at the end of each reconstruction interval. For the animation and Fig. 8, a viscosity of \(\nu = 0.005\,\mathrm{km}^{2}/\mathrm{s}\) was applied, which seems to be a good choice as described in the following section.

### 5.2 Quantitative Comparison

The quality of a reconstruction at time *t*_{ n } is measured by the relative *L*^{2}-error \(E_{n}(\rho _{\mathrm{mode}}^{s}) :=\|\rho _{ \mathrm{mode}}^{s}(t_{n}) -\rho _{\mathrm{data}}^{s}(t_{n})\|/\|\rho _{\mathrm{data}}^{s}(t_{n})\|\), where the subscript “mode” refers to the reconstruction method and the superscript *s* to the ash species.

For a validation set-up, we reconstruct over the doubled interval [*t*_{ n }, *t*_{ n + 2}] so that the original snapshot at the intermediate time-stamp *t*_{ n + 1} is available as a reference. In this way, the error of the linear interpolation, *E*_{interpolation}, evaluated at the intermediate time-stamp, can be computed. The low-fidelity simulation is initialized at the time step *t*_{ n }, and the convection field as well as the particle concentrations for the volcano model are determined by linear interpolation between *t*_{ n } and *t*_{ n + 2}. Thus, we compute the error with respect to the original data at *t*_{ n + 1} (denoted by *E*_{lfm1}) and at *t*_{ n + 2} (denoted by *E*_{lfm2}). It is reasonable that the linear interpolation has its greatest error at the intermediate time-stamp *t*_{ n + 1}. For the low-fidelity reconstruction, one can assume that the error grows in time and is therefore maximal at the end of the considered interval at *t*_{ n + 2}. Figure 9 shows the results for this artificial set-up. The error of the interpolation amounts to *E*_{interpolation} ≈ 0.43. The errors obtained for the low-fidelity model are smaller, for both the intermediate error *E*_{lfm1} and the final error *E*_{lfm2}, as long as the viscosity *ν* is sufficiently small. For high viscosities, the resulting mixing effects are too strong for the model to correctly describe the evolution of the ash plume. With decreasing viscosity the results gain the required sharpness: the error at the intermediate time drops to *E*_{lfm1} ≈ 0.27, and at the final time step it reaches a slightly higher value of *E*_{lfm2} ≈ 0.30. The error curves indicate the existence of a minimum, where the mixing effect captures the physics best. To conclude, for this scenario the low-fidelity model with a suitable choice of the viscosity is superior to the linear interpolation.
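The relative error measure used in this comparison is straightforward to compute (NumPy sketch; the norm is taken over the flattened discrete field):

```python
import numpy as np

def relative_l2_error(rho_model, rho_data):
    """Relative error E_n = ||rho_model - rho_data|| / ||rho_data|| in the
    discrete L2 norm, as used for the quantitative comparison."""
    return np.linalg.norm(rho_model - rho_data) / np.linalg.norm(rho_data)

data = np.array([1.0, 2.0, 2.0])
model = np.array([1.0, 2.0, 1.0])
err = relative_l2_error(model, data)  # 1/3
```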

For the final reconstruction, the low-fidelity model is applied from one snapshot \(t_{n}\) to the next, \(t_{n+1}\). In contrast to the validation set-up described above, the data of the convection field and the volcano model is now interpolated between one snapshot and the following one. Figure 9 indicates that the best reconstruction, with an error of \(E_{\mathrm{lfm}} \approx 0.19\), can be expected for a viscosity of \(\nu = 0.005\,\mathrm{km}^{2}/\mathrm{s}\). This was the parameter of our choice for the final reconstruction of the evolution. Within the range \(0.003\,\mathrm{km}^{2}/\mathrm{s} \leq \nu \leq 0.01\,\mathrm{km}^{2}/\mathrm{s}\), the low-fidelity simulations show errors of similar size. However, the computational costs increase with decreasing viscosity. This effect is explained by the direct connection between the viscosity and the stability condition on the time step size, see Eq. (11). Choosing the largest expectedly stable time step size, we obtain an expression for the number of needed iterations, \(N = T\|\mathbf{v}\|^{2}/(4\nu )\); i.e., the viscosity reciprocally governs the computational costs. The dashed lines in Figs. 9 and 10 show the computational costs by means of the average computing time in seconds and exhibit an approximately linear relation on the logarithmic scales, in correspondence with this formal relation. These results were obtained in sequentially run simulations on a desktop workstation with an Intel® Core™ i7-3770K processor (3.50 GHz, quad-core). Hence, depending on the hardware available for the data reconstruction, a compromise has to be found between accuracy and costs.
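The cost relation \(N = T\|\mathbf{v}\|^{2}/(4\nu)\) can be made concrete in a few lines. The sketch below assumes the stability bound \(\Delta t \leq 4\nu/\|\mathbf{v}\|^{2}\) implied by that formula (Eq. (11) itself is not reproduced in this section); the function name is illustrative:

```python
import math

def reconstruction_iterations(T, v_norm, nu):
    """Number of explicit time steps N = T * ||v||^2 / (4 * nu),
    assuming the largest stable step dt = 4 * nu / ||v||^2."""
    dt_max = 4.0 * nu / v_norm**2   # stability bound on the time step size
    return math.ceil(T / dt_max)    # iterations needed to cover [0, T]
```

Halving the viscosity doubles the iteration count, which matches the roughly linear relation between cost and \(1/\nu\) seen on the logarithmic scales of Figs. 9 and 10.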

### 5.3 Sources of Errors and Extensions

As described previously, the orography-following mesh structure of the original data, given in a geographical coordinate system, is converted to a regular Cartesian mesh in a preprocessing step. This conversion involves an error due to the different mesh structures, which leads to a smoothing of the data. This interpolation error was accepted since it allows for simple mesh data structures that facilitate the implementation of the numerical scheme.

A further neglected effect is the gravitational settling of the ash particles: the results would change noticeably, particularly for the largest particles, if the gravity force were accounted for.

Although the vertical wind is small compared to the horizontal wind, it is not zero, as shown in Fig. 5, and would transport ash particles in the vertical direction. The mixing effects in the atmosphere also contribute in the vertical direction. Accounting for the three-dimensional effects of advection and diffusion would require a fully coupled three-dimensional discrete model, which would lead to much higher computational costs than the presented layer approach.

A fundamental component of the proposed reconstruction approach is the model equation describing the physical processes considered during the calculation. We used a 2D version of a convection-diffusion model to account for the transport due to wind and for diffusive mixing. In Sect. 3.1 we described the COSMO-ART model used to simulate the evolution of the volcano plume. Besides the ash particle densities and the wind field, several additional quantities (e.g., aerosols, soot, mineral dust, sea salt) were considered to account for atmospheric processes that might be important for this scenario. These quantities and the corresponding equations represent possible extensions of the low-fidelity model. The important components of such a full description need to be identified, which typically requires deep knowledge of the models. The quality and efficiency of the model-based reconstruction method essentially depend on the considered model variant, the simplifications made, and the discretization.

Reconstructions based on the fully coupled three-dimensional problem are computationally expensive in general but might be applied efficiently by means of model reduction approaches. One simple data reduction strategy is to neglect a portion of the available grid points, e.g., to use only every *n*th grid point in each space direction, which reduces the data by a factor of *n* ^{3}. However, reduced spatial resolution is in general related to a strong loss of details and quality (e.g., the original COSMO-ART data was reduced by an analogous technique with respect to the time structure). For many applications, very efficient reduced models can be defined by means of an orthogonal basis of the space spanned by the available snapshots. Using the proper orthogonal decomposition (POD) method (Kunisch and Volkwein 1999, 2002, 2008), many problems with a very high number of unknowns can be approximated very accurately with fewer than 100 unknowns. An important aspect is that the only costly calculation, the determination of the reduced model, needs to be done only once and can be performed on a powerful computer system. For this type of model reduction, a model equation describing the physical behavior is required as well. For the reconstruction of the volcano ash plume based on POD techniques, the proposed convection-diffusion model, even with 3D effects of advection and diffusion, could be deployed.
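As a minimal illustration of the POD idea, a reduced basis can be computed from a snapshot matrix via the singular value decomposition. The sketch below is generic (the function name, column-wise snapshot storage, and truncation rank are assumptions for illustration, not details from the cited works):

```python
import numpy as np

def pod_basis(snapshots, r):
    """Compute the r leading POD modes of a snapshot matrix.

    snapshots: array of shape (n_dof, n_snapshots), one state per column.
    Returns (modes, energy): modes has shape (n_dof, r); energy[k] is the
    fraction of total snapshot "energy" captured by the first k+1 modes.
    """
    # Thin SVD of the snapshot matrix; left singular vectors are the POD modes.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    return U[:, :r], energy[:r]
```

A reduced state is then described by \(r\) coefficients instead of the full number of unknowns: `coeffs = modes.T @ state` projects onto the basis, and `modes @ coeffs` reconstructs the approximation.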

The starting point of the low-fidelity model described in Sect. 4 is a parabolic partial differential equation which accounts only for the initial state of each interval. However, it is not guaranteed that the calculated data at the end of the reconstruction interval corresponds to the given original data at that time. In general there will be some error due to the use of the low-fidelity model, as discussed in Sect. 5. This error can be minimized by fitting the existing model parameters (e.g., the viscosity \(\nu\)) such that the jump-like change at the end of the reconstruction interval is as small as possible. Of course, this non-smooth behavior can be eliminated by some post-processing operation such as smoothing or interpolation, but these approaches again introduce additional errors. Such problems can be avoided by an alternative problem formulation that takes the states at both the beginning and the end of the reconstruction interval into account. This can be achieved by adding mixing effects related to the time structure, modeled by a temporal viscosity (e.g., a term of the form \(-\nu_{t} \cdot \partial_{t}^{2}\rho\) with viscosity \(\nu_{t} > 0\)), leading to an elliptic problem formulated in the space-time domain. The solution of such a fully coupled system can no longer be obtained by means of time-stepping schemes as presented before, since all unknowns within the reconstruction interval are fully coupled. Such problems are typically tackled by means of sparse linear equation systems and related solution approaches (e.g., direct or iterative linear system solvers). Although the computational effort to solve such a problem might be higher than for the described time-stepping scheme, this approach can be desirable in many applications since the reconstructed data will match all states given in the original data. To reduce the costs, adequate model reductions (e.g., POD techniques) might provide a remedy.
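The space-time formulation can be sketched in one spatial dimension. The following is a minimal illustration under simplifying assumptions (zero spatial Dirichlet boundaries, a fixed number of interior time levels, illustrative function name), not the implementation discussed in the article:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def spacetime_reconstruct(rho0, rho1, v, nu, nu_t, T, dx):
    """Reconstruct intermediate states between two snapshots rho0, rho1
    by solving the elliptic space-time problem (1D in space)
        -nu_t * d2rho/dt2 + v * drho/dx - nu * d2rho/dx2 = 0,
    with rho0 at t=0 and rho1 at t=T as Dirichlet data in time and
    zero Dirichlet conditions at the spatial boundaries.
    """
    nx = len(rho0)
    nt = 8                          # interior time levels to reconstruct
    dt = T / (nt + 1)

    # Centered second- and first-difference operators on the interior.
    ex, et = np.ones(nx), np.ones(nt)
    Dxx = sp.diags([ex[:-1], -2 * ex, ex[:-1]], [-1, 0, 1]) / dx**2
    Dx = sp.diags([-ex[:-1], ex[:-1]], [-1, 1]) / (2 * dx)
    Dtt = sp.diags([et[:-1], -2 * et, et[:-1]], [-1, 0, 1]) / dt**2

    # Fully coupled space-time operator via Kronecker products
    # (unknowns ordered time-major: index = k * nx + i).
    Ix, It = sp.identity(nx), sp.identity(nt)
    A = (-nu_t * sp.kron(Dtt, Ix) + sp.kron(It, v * Dx - nu * Dxx)).tocsc()

    # Known snapshots enter the right-hand side through the -nu_t * Dtt
    # coupling of the first and last interior time levels.
    b = np.zeros(nt * nx)
    b[:nx] += nu_t * rho0 / dt**2
    b[-nx:] += nu_t * rho1 / dt**2

    # One sparse direct solve yields all interior levels simultaneously.
    return spla.spsolve(A, b).reshape(nt, nx)
```

In contrast to time stepping, all interior states are obtained from a single sparse solve, and the result matches the given data at both interval ends by construction.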

## 6 Conclusion

In this article, we investigated methods for the reconstruction of time-dependent processes which are described at a finite number of points in time only. We gave a general description of such reconstruction methods. We discussed linear interpolation as one of the standard approaches and proposed a model-based reconstruction method. In addition to the given data, the latter takes into account a physical model that approximately describes the underlying processes. For the eruption of the volcano Eyjafjallajökull in Iceland in the spring of 2010, we proposed such a low-fidelity model based on the convection-diffusion equation. We described the discrete model in detail and presented reconstructions of the evolution of the volcano plume by means of six species of ash particles. The reconstructions based on linear interpolation and on the model-based approach were compared to the original data, calculated by a COSMO-ART model. Although the low-fidelity model was only a rough approximation of the original one, it led to much smaller reconstruction errors than linear interpolation. Such results motivate the use of corresponding low-fidelity models in various fields of application for purposes of scientific visualization or post-processing in general. Finally, some possible extensions of the model for the volcano scenario were discussed.

The definition of an adequate low-fidelity model may be non-trivial since detailed knowledge about the physics, the discretization, and also software engineering is needed. The better the reconstruction model suits the original problem, the less data is needed to achieve a high reconstruction quality. In that sense, powerful reconstruction approaches allow for substantial reductions of the amount of data needed to represent simulation results. While the original numerical simulations are usually conducted on large parallel computer systems, the data reconstruction might be calculated even on a standard desktop computer. Such efficient reconstruction techniques are essential for interactive visualization purposes.

The specific selection of a model equation is certainly a key point since it determines the physical processes that will be considered during the reconstruction. On the one hand, it must be able to describe the relevant features given in the data at appropriate spatial and temporal scales; on the other hand, the model should not be too complex with respect to both software engineering and computational costs. In the various fields of science and engineering, a great diversity of physical features is considered, for which many different reconstruction strategies are required. Reduced models for standard processes (such as the convection-diffusion equation) could be defined universally, at least for certain problem classes. For highest efficiency, the reduced model might require adaptation to the particular application, or at least existing model parameters might need to be adjusted.

The freedom to choose an arbitrary low-fidelity model lets the user control the quality of the reconstruction and also the related computational costs. This makes it possible to define highly performant methods for applications in which real-time availability or interactivity of the reconstructed result is relevant. Such methods could be integrated into visualization systems such as Amira (Kon 2009), EnSight (Com 2006), or ParaView (Henderson 2007). In the case of ParaView, an extensive variety of filters exists that can be included in the visualization pipeline to manipulate the data; in particular, an often-used filter for linear time-interpolation exists. Model-based reconstruction methods could be integrated by simply adding a filter, which can easily be done since ParaView is an open-source development. If high quality rather than high performance of the reconstruction is needed for some point in time, a more complex model – in the extreme case the original model – could be applied for the reconstruction.

In recent years, numerical methods have been developed which make use of different physical model equations at the same time, see e.g., Oden and Prudhomme (2002), Braack and Ern (2003), Bales et al. (2009). This is often realized by a posteriori estimation of the errors related to the applied models and their control by switching between a selection of available physical models of different complexities, known under the term *model adaptivity*. Such developments motivate thinking of the numerical simulation and the reconstruction no longer as two separate steps. Instead, a model system can be seen as an abstract mechanism that describes the physics in a given scenario up to some (needed) accuracy. From that perspective, a desired visualization of some process serves as the initiating impulse triggering the calculation of a numerical simulation of the corresponding scene. Visualization is then no longer a post-processing step of the simulation, but part of the model system. Such a combined simulation and visualization system would allow for physics-aware visualizations of zoomed views based on more complex physical models instead of purely interpolated images. The discretization (spatial mesh and time partitioning) as well as the complexity of the physical model could be adjusted automatically by means of mathematical error estimators to guarantee high accuracy of the result.

## References

- Amira 5 User’s Guide. Konrad-Zuse-Zentrum für Informationstechnik Berlin (ZIB) and Visage Imaging (2009). http://www.amira.com
- Athanasopoulou E, Vogel H, Vogel B, Tsimpidi AP, Pandis SN, Knote C, Fountoukis C (2013) Modeling the meteorological and chemical effects of secondary organic aerosols during an EUCAARI campaign. Atmos Chem Phys 13(2):625–645. doi:10.5194/acp-13-625-2013
- Avila LS (2004) The VTK user’s guide. Kitware. ISBN:1-930934-13-0
- Baldauf M, Seifert A, Förstner J, Majewski D, Raschendorfer M, Reinhardt T (2011) Operational convective-scale numerical weather prediction with the COSMO model: description and sensitivities. Mon Weather Rev. doi:10.1175/MWR-D-10-05013.1
- Bales P, Kolb O, Lang J (2009) Hierarchical modelling and model adaptivity for gas flow on networks. Volume 5544 of Lecture Notes in Computer Science. Springer, pp 337–346. ISBN:978-3-642-01969-2
- Bangert M, Nenes A, Vogel B, Vogel H, Barahona D, Karydis VA, Kumar P, Kottmeier C, Blahak U (2012) Saharan dust event impacts on cloud formation and radiation over Western Europe. Atmos Chem Phys 12(9):4045–4063. doi:10.5194/acp-12-4045-2012
- Bonneau GP, Ertl T, Nielson G (2006) Scientific visualization: the visual extraction of knowledge from data. Mathematics and Visualization. Springer, Heidelberg
- Braack M, Ern A (2003) A posteriori control of modeling errors and discretization errors. Multiscale Model Simul 1(2):221–238
- EnSight User Manual. Computational Engineering International, Inc., 2166 N. Salem Street, Suite 101, Apex, NC 27523 (2006). http://www.ensight.com
- Henderson A (2007) ParaView guide, a parallel visualization application. Kitware Inc. http://www.paraview.org/
- Hindmarsh AC, Gresho PM, Griffiths DF (1984) The stability of explicit Euler time-integration for certain finite difference approximations of the multi-dimensional advection-diffusion equation. Int J Numer Methods Fluids 4(9):853–897. doi:10.1002/fld.1650040905
- Introduction to GRIB. World Meteorological Organization, June 2003
- Knote C, Brunner D (2013) An advanced scheme for wet scavenging and liquid-phase chemistry in a regional online-coupled chemistry transport model. Atmos Chem Phys 13(3):1177–1192. doi:10.5194/acp-13-1177-2013
- Kunisch K, Volkwein S (1999) Control of the Burgers equation by a reduced-order approach using proper orthogonal decomposition. J Optim Theory Appl 102(2):345–371. doi:10.1023/A:1021732508059
- Kunisch K, Volkwein S (2002) Galerkin proper orthogonal decomposition methods for a general equation in fluid dynamics. J Numer Anal 40(2):492–515
- Kunisch K, Volkwein S (2008) Optimal snapshot location for computing POD basis functions. SFB-Report 2008-008
- Oden JT, Prudhomme S (2002) Estimation of modeling error in computational mechanics. J Comput Phys 182(2):496–515. doi:10.1006/jcph.2002.7183
- Ritter B, Geleyn JF (1992) A comprehensive radiation scheme for numerical weather prediction models with potential applications in climate simulations. Mon Weather Rev 120(2):303–325. doi:10.1175/1520-0493(1992)120<0303:ACRSFN>2.0.CO;2
- Seifert A, Beheng KD (2006) A two-moment cloud microphysics parameterization for mixed-phase clouds. Part 1: model description. Meteorol Atmos Phys 92:45–66. doi:10.1007/s00703-005-0112-4
- Steppeler J, Doms G, Schättler U, Bitzer HW, Gassmann A, Damrath U, Gregoric G (2003) Meso-gamma scale forecasts using the nonhydrostatic model LM. Meteorol Atmos Phys 82:75–96. doi:10.1007/s00703-001-0592-9
- Vogel B, Vogel H, Bäumer D, Bangert M, Lundgren K, Rinke R, Stanelle T (2009) The comprehensive model system COSMO-ART – radiative impact of aerosol on the state of the atmosphere on the regional scale. Atmos Chem Phys 9(22):8661–8680. doi:10.5194/acp-9-8661-2009
- Vogel H, Förstner J, Vogel B, Hanisch Th, Mühr B, Schättler U, Schad T (2013) Simulation of the dispersion of the Eyjafjallajökull plume over Europe with COSMO-ART in the operational mode. Atmos Chem Phys Discuss 13(5):13439–13463