
Model-Based Visualization of Instationary Geo-Data with Application to Volcano Ash Data

  • Martin Baumann
  • Jochen Förstner
  • Vincent Heuveline
  • Jonas Kratzke
  • Sebastian Ritterbusch
  • Bernhard Vogel
  • Heike Vogel

Abstract

Driven by today’s supercomputers, ever larger sets of data are created during numerical simulations of geoscientific applications. Such data often describe instationary processes in three-dimensional domains in terms of multi-dimensional fields. Due to limited computer resources, it might be impossible or impractical to store all data created during one simulation, which is why data reduction techniques are often applied (e.g., only every nth time step is stored). Intuitive scientific visualization techniques can help to better understand the structures described by transient data. Adequate reconstruction techniques for the time dimension are needed, since standard techniques (e.g., linear interpolation) are insufficient for many applications. We describe a general formalism for a wide class of reconstruction techniques and address aspects of quality characteristics. We propose an approach that is able to take arbitrary physical processes into account to enhance the quality of the reconstruction. For the eruption of the volcano Eyjafjallajökull in Iceland in the spring of 2010, we describe a suitable reduced model and use it for model-based visualization. The original data was created during a COSMO-ART simulation. We discuss the reconstruction errors, related computational costs, and possible extensions. A comparison with linear interpolation clearly motivates the proposed model-based reconstruction approach.

Keywords

Linear interpolation · Vertical wind · Reconstruction approach · Volcano eruption · Proper orthogonal decomposition method

1 Introduction

Nowadays, geoscientific phenomena such as global warming and the greenhouse effect are of public interest, and much research is done in these fields. The related questions cut across several applied sciences, including meteorology, geochemistry, and geomorphology. Consequently, a great diversity of models and corresponding solution procedures is used, and large data sets are typically created, often several gigabytes of data or considerably more. The ever-growing computing power available to researchers allows for more complex models, higher accuracy in the computations, and also the creation of more data.

Fig. 1

Trajectory of a particle (red circle) and its positions at times t = 0, 1, 2, 3 (left), the piecewise linear interpolation of the trajectory based on the positions at times t = 0, 1, 2, 3 (middle), and the trajectory calculated using an efficient, simplified model (right)

Most often, the analysis of such data requires a complex working sequence, and recovering an intuitive and deep comprehension of the implicitly described features of interest is non-trivial. Tailored imaging tools can facilitate this step of cognition by filtering existing data and displaying only the important subsets with adequate techniques of scientific visualization (Bonneau et al. 2006). Most often, the investigated data is multi-dimensional (i.e., multiple properties such as temperature and wind are given) and defined on a three-dimensional domain. Transient physical processes are typically represented by a description of the spatial state at a selection of points in time within the time horizon. A common approach to keep the total amount of data economically justifiable is to store the state of the system only at a few points in time, e.g., one description per hour. By this data reduction, a large portion of the data is eliminated irrecoverably unless the same calculations are repeated.

If the time evolution of some process is described implicitly by a data sequence, the time increments and the number of stored time steps must fit the time scale of the inspected feature. If more data is needed (e.g., for comprehensible animated visualizations), additional data sets must be reconstructed at intermediate time steps. Simple reconstruction methods are based on interpolation techniques such as polynomial or spline interpolation. In this case, the interpolation is calculated based on the information given in the data only. In Fig. 1, an example of a rotating particle is sketched. The linear interpolation of the particle’s trajectory shown in the middle panel is piecewise linear and only a very rough approximation of the physically correct, circular trajectory (see left panel). This is due to a very coarse time resolution consisting of three time steps during one circular motion.

In this article, we describe an approach to data reconstruction that incorporates a physical model in addition to the available data. By considering such a model, a priori knowledge can be exploited, in contrast to pure interpolation techniques. The right panel of Fig. 1 shows the reconstructed trajectory of the particle assuming a reduced model. The trajectory is almost circular, but the end positions of the reconstructions, indicated by green circles, are not located at the given particle positions (red circles). This gap between the reconstructed final state and the given data arises from the fact that the deployed reduced model is only an approximation of the original model. Certainly, this discrepancy can be minimized by different techniques, which will be addressed later. While standard interpolation techniques can be applied universally, the model-based reconstruction approach requires an adequate model. Obviously, the quality and also the related computational costs of the reconstruction can be controlled by the specific choice of this model. We describe the general reconstruction approach based on models that are given by means of partial differential equations. Many phenomena in the geosciences can be modeled in terms of instationary partial differential equations, which makes this approach quite universal. High quality can be achieved only if the applied reduced physical model is adequate for the phenomenon contained in the data.

As a proof of concept, we investigate the scenario of the eruption of the volcano Eyjafjallajökull in Iceland in April 2010. A large amount of volcanic ash was injected into the atmosphere and was transported rapidly towards Europe. High-fidelity simulations of this scenario were calculated with the online coupled model system COSMO-ART (Vogel et al. 2009), which will be described in Sect. 3. During the numerical simulation run, data files containing the wind and the distribution of six different ash species were stored in a 1-h stepping. We propose a simplified physical model for the reconstruction of the evolution of the ash distributions between these existing data sets. In Sect. 4 we describe the details of the applied reduced model and the simplifications made. With this system the reconstruction can be computed very efficiently on a desktop computer. The results discussed in Sect. 5 clearly motivate the proposed approach of a problem-dependent reconstruction model.

2 Concept of Reduced Model for Visualization

In this section, we give an abstract description of the data reconstruction task for visualization of instationary data and discuss related quality characteristics. We demonstrate that the standard approach of linear interpolation fits into this formalism and motivate the use of a reconstruction approach that makes use of a physical model in addition to the given data.

We assume that a physical process denoted by \(u : [0,T] \rightarrow X\) should be visualized for the interval [0, T]. Here, X denotes some arbitrary space in which the state of the system can be described (e.g., \(X =\mathbb{R}\) in case of a scalar-valued solution such as temperature). The physical model that exactly describes the aforementioned process is denoted by F. In this case, the exact model F, the initial state u(0), and the solution u fulfill the following relation:
$$\displaystyle{ F[u(0)](t) = u(t)\quad \forall t \in [0,T]. }$$
(1)
This equation states that the model F propagates the state of the system from u(0) at time t = 0 through time such that the state determined at time t equals u(t). A very general description of an approximative model can be given in a similar fashion. The approximative model Φ fulfills the relation
$$\displaystyle{ \varPhi (t) \approx u(t)\quad \forall t \in [0,T]. }$$
(2)
The definition of the approximative model Φ can include several parameters (e.g., states u(t_i) at some points in time t_i, physical parameters such as a viscosity parameter, and so forth).

The approximative model Φ will be exploited as an interpolation between known states of the exact model F. The usage of a physical model equation to determine the interpolation motivates the term model-based reconstruction. Important aspects of an approximation are its accuracy, robustness and stability, parameter dependence, coupling, and computational effort. We denote the interpolation operator for the solution u between time steps t_{i−1} and t_i, depending on the states u(t_{i−1}) and u(t_i), by \(\varPhi [t_{i-1},t_{i},u(t_{i-1}),u(t_{i})](t)\). Then, the accuracy of the approximative model can be analyzed using a suitable norm of the deviation \(\vert \vert \varPhi [t_{i-1},t_{i},u(t_{i-1}),u(t_{i})] - u\vert \vert\). If the solution is only available at specific points in time, a variant is to evaluate the deviation at interpolated states, such as \(\vert \vert \varPhi [t_{i-1},t_{i+1},u(t_{i-1}),u(t_{i+1})](t_{i}) - u(t_{i})\vert \vert\). Since the approximate models are iterated for each interpolation interval, for example in [t_{i−1}, t_i), additional continuity conditions such as \(\varPhi [t_{i-1},t_{i},u(t_{i-1}),u(t_{i})](t_{i-1}) = u(t_{i-1})\) and \(\varPhi [t_{i-1},t_{i},u(t_{i-1}),u(t_{i})](t_{i}) = u(t_{i})\) seem desirable. They should not be overrated, however, since they are always achievable using simple interpolation schemes, which can decrease the overall accuracy. This motivates the use of approximation schemes based on a simplified physical model, which can be expected to outperform general interpolation concepts. This expectation rests on the idea of providing more information to the visualization than traditional approaches, which only exploit the data states. The additional information is given by the simplified physical model, which links the visualization to the numerical simulation.
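To make this formalism concrete, the following minimal C++ sketch models the interpolation operator Φ[t_{i−1}, t_i, u(t_{i−1}), u(t_i)] as an abstract class. The type and class names are ours, chosen for illustration only; they are not part of the original implementation:

```cpp
#include <utility>
#include <vector>

// State of the system at one point in time, e.g., nodal values of a field.
using State = std::vector<double>;

// Abstract interpolation operator Phi[t0, t1, u0, u1](t): given the states
// at both ends of the interval, it reconstructs a state for any t in [t0, t1].
class Reconstruction {
public:
    Reconstruction(double t0, double t1, State u0, State u1)
        : t0_(t0), t1_(t1), u0_(std::move(u0)), u1_(std::move(u1)) {}
    virtual ~Reconstruction() = default;

    // Evaluate the reconstructed state at time t.
    virtual State operator()(double t) const = 0;

protected:
    double t0_, t1_;
    State u0_, u1_;
};
```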

The aspect of robustness is mostly determined by the numerical method and the numerical parameters chosen for the approximation. We expect the approximative model to be solvable and stable for any given valid or slightly disturbed state data, yielding valid results. Instationary boundary conditions or other external influence parameters might need special treatment to improve the robustness of the computed approximative model, as discussed in the following.

The approximative models need state information for the computation. Since they are simplified physical models, we generally expect that only partial information of the full state is needed. This data austerity decreases the amount of data that must be managed for visualization, but in general we cannot expect continuity if the final state information remains unused. This approach is used for fast-forward schemes. It is no great loss, since the results can be amended using simple linear interpolation schemes late in the interval. Commonly, the amount of coupling within the interpolation scheme is an important issue for the computational effort, and especially for a potential speed-up using parallelization. A linear interpolation does provide a coupling from the starting state to the end state, but it is a trivial approach only applicable to slowly changing phenomena, as we will see in the following, where we give examples of models that are covered by this abstract formulation.

2.1 Linear Interpolation Reconstruction

A reconstruction based on linear interpolation is given by
$$\displaystyle{ \varPhi _{\mathrm{lin.intp.}}[t_{i-1},t_{i},u(t_{i-1}),u(t_{i})](t) := u(t_{i-1}) + \frac{u(t_{i}) - u(t_{i-1})} {t_{i} - t_{i-1}} (t - t_{i-1}). }$$
(3)
Fig. 2

Illustration of the effect of linear interpolation on isosurfaces for scalar data. The correct physical process moves between the blue states, with the true intermediate state shown in red; the green state denotes the linearly interpolated intermediate state

This function describes for any t ∈ [t_{i−1}, t_i] the linear interpolation between the two states u(t_{i−1}) and u(t_i). By definition, it is guaranteed that the reconstructed data matches the two states at the corresponding points in time. In the following, we will sometimes omit the arguments in square brackets for better readability. As illustrated by Fig. 2, linear interpolation yields acceptable results for slowly changing phenomena but does not perform well in general. This limits the use of this approach to either slowly changing data or the amendment of near-accurate results, such as those we expect to gain from approximate models.
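Continuing the sketch above, the linear interpolation reconstruction (3) can be realized as follows (again an illustration, not the authors’ code):

```cpp
// Linear interpolation (3): Phi(t) = u0 + (u1 - u0)/(t1 - t0) * (t - t0).
class LinearInterpolation : public Reconstruction {
public:
    using Reconstruction::Reconstruction;

    State operator()(double t) const override {
        const double w = (t - t0_) / (t1_ - t0_);  // relative position in [0, 1]
        State u(u0_.size());
        for (std::size_t k = 0; k < u.size(); ++k)
            u[k] = (1.0 - w) * u0_[k] + w * u1_[k];
        return u;
    }
};
```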

2.2 Model-Based Reconstruction

Many physical processes can be considered as dynamical systems, described by means of an initial state and an evolution law, i.e., as initial-value problems. Often such evolution laws are given by partial differential equations (PDEs). In that case, the problem formulation on a spatial domain Ω and time interval [t^start, t^end] with t^start < t^end has the form
$$\displaystyle{ \begin{array}{l} \left \{\begin{array}{ll} F(u(t,x)) & = f(t,x),\quad (t,x) \in [t^{\text{start}},t^{\text{end}}]\times \varOmega , \\ u(t^{\text{start}},x)& = U(x),\,\;\;\quad x \in \varOmega , \end{array} \right . \end{array} }$$
(4)
with additional boundary conditions. The differential operator F, the external force term f, and the initial state U are defined according to the considered scenario. We assume the problem to be well-posed and apply a discretization method by means of a time-stepping scheme, given on a partitioning \(t^{\text{start}} = t^{(0)} <t^{(1)} <\cdots <t^{(N)} = t^{\text{end}}\) of the time interval. In that case, an approximation u^(i) of the solution at time t^(i) can be calculated successively by means of a corresponding solution operator A^(i) for any \(i = 1,\ldots ,N\):
$$\displaystyle{ \begin{array}{l} u^{(0)} := U,\quad u^{(i)} := A^{(i)}(u^{(i-1)})\quad \forall i = 1,\ldots ,N.\end{array} }$$
(5)
This is a standard approach for the solution of a parabolic problem and is typically combined with a discretization in space by means of a finite element, finite difference, or finite volume method. This concept is deployed in model-based reconstruction, where for any point in time t^(i) ∈ [t^start, t^end], the reconstructed state of the process is given by
$$\displaystyle{ \varPhi _{\mathrm{F,f}}[U](t^{(i)}) := u^{(i)}, }$$
(6)
for any \(i = 1,\ldots ,N\). The sequence of approximations depends only on the initial state U and the solution operator, which does not take any future states into account. Therefore, this reconstruction approach does not ensure that a given final state of the reconstruction interval is reached.

For any two states corresponding to the points in time t_{i−1} and t_i of some given data, a partitioning can be inscribed and approximate solutions can be calculated as previously described. Since the underlying PDE and its discretization can be chosen arbitrarily, this reconstruction approach is very generic and can be applied to many problems. In the next section, we give an example based on the convection-diffusion problem.
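As a minimal sketch of the time stepping (5)–(6), assuming the State type from the earlier sketch and leaving the concrete solution operator A^(i) open:

```cpp
#include <functional>
#include <vector>

// One application of the discrete solution operator A^(i): it advances a
// state over one time step of size dt, starting at time t.
using SolutionOperator =
    std::function<State(const State&, double t, double dt)>;

// Model-based reconstruction (5)-(6): starting from the initial state U,
// the reduced model is stepped forward; future snapshots are not used.
std::vector<State> reconstruct(const State& U, const SolutionOperator& A,
                               double tStart, double tEnd, int N) {
    const double dt = (tEnd - tStart) / N;
    std::vector<State> u(N + 1);
    u[0] = U;
    for (int i = 1; i <= N; ++i)
        u[i] = A(u[i - 1], tStart + (i - 1) * dt, dt);  // u^(i) = A^(i)(u^(i-1))
    return u;
}
```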

3 Scenario of Volcano Ash Data

In this section, we present the scenario of the eruption of the volcano Eyjafjallajökull in Iceland in April 2010. The details related to the model system that was used to calculate the high-fidelity simulation are given. Subsequently, the resulting output data of the volcanic ash that is the starting point of the model-based reconstruction is described.

3.1 The Simulation in the Model System COSMO-ART

The COSMO model is the operational weather forecast model of the German Weather Service DWD (Deutscher Wetterdienst). It is a non-hydrostatic regional model based on the thermo-hydrodynamical equations describing compressible flow in a moist atmosphere. Details about the dynamical core and the numerical scheme can be found in Steppeler et al. (2003) and Baldauf et al. (2011).

COSMO-ART (Vogel et al. 2009; Bangert et al. 2012) is an extension of COSMO, where ART stands for Aerosols and Reactive Trace gases. It is a comprehensive model system to simulate the spatial and temporal distributions of reactive gaseous and particulate matter. The model system is mainly used to quantify the feedback processes between aerosols and the state of the atmosphere on the continental to the regional scale with two-way interactions between different atmospheric processes.

The model system treats secondary aerosols as well as directly emitted components like soot, mineral dust, sea salt, volcanic ash, and biological material. Secondary aerosol particles are formed from the gas phase; therefore, a complete gas-phase mechanism is included. Modules for the emissions of biogenic precursors of aerosols, mineral dust, sea salt, biomass burning aerosol, and pollen grains are included. For the treatment of secondary organic aerosol (SOA) chemistry, the volatility basis set (VBS) was included (Athanasopoulou et al. 2013). Wet scavenging and in-cloud chemistry are taken into account (Knote and Brunner 2013). Processes such as emissions, coagulation, condensation (including the explicit treatment of soot aging), deposition, washout, and sedimentation are taken into account. In order to simulate the interaction of the aerosol particles with radiation and the feedback of this process on the atmospheric variables, the optical properties of the simulated particles are parameterized based on detailed Mie calculations. New methods to efficiently calculate the photolysis frequencies and the radiative fluxes based on the actual aerosol load were developed based on the GRAALS radiation scheme (Ritter and Geleyn 1992) and were implemented in COSMO-ART. To simulate the impact of the various aerosol particles on cloud microphysics and precipitation, COSMO-ART was coupled with the two-moment cloud microphysics scheme of Seifert and Beheng (2006), using comprehensive parameterizations for aerosol activation and ice nucleation.

The advantage of COSMO-ART with respect to other models is that identical numerical schemes and parameterizations are used for identical physical processes such as advection and turbulent diffusion. This avoids truncation errors and model inconsistencies. COSMO is verified operationally by DWD. The model system can be embedded by one-way nesting into individual global-scale models such as the GME model or the IFS model. All components of the model system are coupled online with time steps on the order of tens of seconds. Nesting of COSMO-ART within COSMO-ART is possible. Typical horizontal grid sizes vary between 2.8 and 28 km.

For the simulation of the volcanic ash, the model domain is consistent with the domain covered by the operational weather forecast of Deutscher Wetterdienst for Europe, i.e., 665 × 657 × 40 grid points. The horizontal resolution is 0.0625°; the vertical resolution varies between 20 m close to the surface and several hundred meters at the top of the domain at 20 km height. The time step is 40 s. The reference simulation was performed for 120 h. The volcano emissions were represented by 6 classes of particles with a diameter between 1 and \(30\,\upmu \mathrm{m}\). Details about the parameterization of the source height and the source strength can be found in Vogel et al. (2013). Sinks for the ash particles are wet and dry deposition as well as sedimentation. The initial and boundary conditions for the meteorological variables were taken from the operational runs of the GME. Since the output of one time step is on the order of 1 GB, the output is restricted to hourly intervals.

The numerical simulation of this scenario was calculated on the HP XC3000 computer system hosted at the Steinbuch Centre for Computing (SCC) at the Karlsruhe Institute of Technology (KIT). On this machine, the calculation using 64 CPUs (Intel Xeon Processor E5540, 2.53 GHz, quad-core) takes about 16 h.

3.2 Description of the Model Output

The output data of the COSMO-ART simulation that is stored for visualization purposes contains six ash particle concentrations and the wind field
$$\displaystyle{ \rho :\varOmega \times I \rightarrow \mathbb{R}_{+}^{6},\qquad \vec{v} :\varOmega \times I \rightarrow \mathbb{R}^{3}, }$$
(7)
given in the atmospheric domain Ω over a time period of 5 days, I = [0, 120], in units of hours. One snapshot is given at t_n = n \((n = 0,\ldots ,120)\), i.e., in a 1-h stepping.
Fig. 3

Snapshots of the volcano ash cloud after 1, 2, 3, and 4 days of development. Visualization of the COSMO-ART simulation data using ParaView (Henderson 2007). Ash particle concentrations are represented by isosurfaces, the wind field by arrows at an average height of 4.8 km; the vertical axis is scaled by a factor of 75

The atmospheric domain is given as a discrete grid in a geographical coordinate system in units of longitudinal and latitudinal degrees from 20°0′0″S, 18°0′0″W to 21°0′0″N, 23°30′0″E. The height above sea level is given in pressure levels. For values near the earth’s surface, the pressure levels are aligned to the orography. The ash and wind fields are given on this grid in the GRIB data format (Wor 2003). Figure 3 shows a visualization of the ash plume developing over Europe.

The evaluation and interpretation of the model output is usually done using two-dimensional horizontal or vertical cross sections. However, due to the huge amount of data and the time dependency of the atmospheric processes, only a small fraction of the data can realistically be inspected. This limits the understanding of the interaction of the atmospheric processes with the ash plume. With three-dimensional visualizations as shown in Fig. 3, complex spatial structures can be experienced intuitively. For instationary processes, new methods for displaying the data within a reasonable time frame are urgently needed. Such a method for the reconstruction of the time evolution is described in the following.

4 Low-Fidelity Model for the Dispersion of a Volcano Plume

In the previous section, a model for the evolution and dispersion of the Eyjafjallajökull plume was presented. The density distributions of the different particle species and the wind fields calculated with this model were exported into files, one file per hour. In the following, we present a low-fidelity model for the reconstruction of the ash plume dispersion from this hourly data. While in the introductory example (see Fig. 1) the trajectory of one single particle was reconstructed, we are now interested in the distribution of the particle densities and apply a partial differential equation to describe its development. Firstly, we describe the structure of the COSMO-ART output data, which is the starting point of the model-based reconstruction. Subsequently, we give the details of this method for this scenario, including the physical model for the dominating processes.

4.1 Conversion of the Mesh Structure

For simplicity, we transform the grid structure, described in Sect. 3.2, into a structured grid with rectilinear Cartesian coordinates. The horizontal components of the geographical coordinates \(\boldsymbol{\phi }_{ij}\) in units of degrees are converted to plane Cartesian coordinates \(\vec{x}_{ij} :=\pi R_{\mathrm{earth}}\boldsymbol{\phi }_{ij}/180^{\circ }\) in units of kilometers. Regarding the vertical dimension, each pressure level k is assigned to its average height z(k), see Fig. 4.

Fig. 4

Average height above sea level of each pressure level

This results in the following discrete representation \(\hat{\varOmega }\) of the domain Ω = [0, 4610] × [0, 4550] × [0, 22.2] in units of kilometers:
$$\displaystyle\begin{array}{rcl} \hat{\varOmega }:=\bigcup _{ k=0}^{39}\varOmega _{ k},\quad \varOmega _{k} :=\{ (x,y,z(k)) : (x,y) =\vec{ x}_{ij},\,i \in \{ 0,\ldots ,664\},\,j \in \{ 0,\ldots ,656\}\}.& & {}\\ \end{array}$$
We use a corresponding data file format of the Visualization Toolkit (VTK) project (Avila 2004), since this format can easily be opened in many standard visualization tools.
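A minimal sketch of the horizontal coordinate conversion, assuming the commonly used mean earth radius of 6371 km (the article does not state the value actually used):

```cpp
// Conversion of a geographical coordinate (degrees) to a plane Cartesian
// coordinate (kilometers): x = pi * R_earth * phi / 180 (Sect. 4.1).
constexpr double kPi = 3.14159265358979323846;
constexpr double kEarthRadiusKm = 6371.0;  // assumption: mean earth radius

double degreesToKm(double phiDegrees) {
    return kPi * kEarthRadiusKm * phiDegrees / 180.0;
}
```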

4.2 The Continuous Model

In this section, we describe a simplified continuous model that we use in the following to reconstruct the motion of the volcano ash distribution. This model is represented by a parabolic partial differential equation capturing the effects of advection and diffusion. One major simplification is that the three-dimensional domain Ω is replaced by a set of horizontal slices Ω_k, which are treated independently. This makes it possible to calculate the additional snapshots very efficiently on a workstation computer instead of a high-performance parallel computer.

The dispersion of the volcano plume is mainly driven by advection due to the wind. Assuming small vertical wind, the reduced model accounts for the horizontal wind only. Figure 5 shows the horizontal and vertical winds on a representative vertical layer. The horizontal wind exceeds 10 m∕s almost everywhere and reaches about 60 m∕s in some regions, while the vertical wind is comparably small.

Fig. 5

Magnitude of the horizontal wind (left) and of the vertical wind (right) in units of m∕s at pressure level 20 with an average height of 4.3 km, on April 18, 2010, at 10 pm

The different scales of the horizontal and vertical wind motivate neglecting the vertical wind component and treating the ash dispersion in independent horizontal layers. Obviously, the influence of gravity is thereby ignored, which is reflected in the numerical results as described later.

The reduced model contains artificial diffusion, which is included not only to represent molecular diffusion but also to represent mixing effects due to physical processes that are not resolved, such as turbulence and the omitted vertical advection. From a numerical point of view, the problem is more stable due to the higher diffusion. It must be noted that the correct level of diffusion is not known a priori. Instead, it is a model parameter on which the approximation quality and also the related computational costs depend. For the numerical tests described later, we determined a good choice for this parameter by the solution of an optimization problem.

The volcano, as the only source of particles, requires particular attention. It spreads ash particles into the atmosphere continuously in time, which is described indirectly by the given COSMO-ART output data. For the reduced model, the effect of the volcano eruption must be considered: on the one hand, the reconstructed ash distributions should correspond to the given data as closely as possible; on the other hand, the particle distribution should be governed by the model equation. One way to include this effect in the reduced model would be a source term for the ash. Since no additional information related to the volcano should be used for the reconstruction, we instead include a localized interpolation and smoothing step in the discrete model.

The resulting reduced model, including the effects of advection and diffusion and an abstract force term, is stated in the following. The evolution of the vector of six ash particle species, denoted by ρ, in each horizontal layer is described by a two-dimensional convection-diffusion problem. For the reconstruction in the time interval I_n, the particle density ρ_n is initialized with the respective snapshot at time t_n. The partial differential equation for each horizontal level k and time interval I_n has the form:
$$\displaystyle\begin{array}{rcl} \partial _{t}\rho +\hat{\vec{ v}} \cdot \nabla \rho -\nu \varDelta \rho & =& f\quad \mathrm{in}\;\varOmega _{k} \times I_{n}, \\ \rho & =& 0\quad \mathrm{on}\;\partial \varOmega _{k} \times I_{n}, \\ \rho (t_{n})& =& \rho _{n}\quad \mathrm{in}\;\varOmega _{k}, {}\end{array}$$
(8)
with the artificial viscosity ν. The zero boundary conditions can be justified by the vanishing ash densities at the domain boundary at all times in the COSMO-ART data. The wind field \(\hat{\vec{v}}\) is calculated from the snapshots by linear interpolation in time
$$\displaystyle{ \hat{\vec{v}}(t) =\varPhi _{\mathrm{lin.intp.}}[t_{n},t_{n+1},\vec{v}(t_{n}),\vec{v}(t_{n+1})](t). }$$
(9)

4.3 Discretization

In this section, a standard finite difference discretization of problem (8), based on the explicit Euler scheme, is described. The resulting algorithm allows for efficient calculation of the particle concentrations at intermediate points in time between t_n and t_{n+1}. The numerical scheme is easy to implement and leads to a fast algorithm. Details can be found in Hindmarsh et al. (1984).

The Laplace operator and the convection term are each approximated by a central difference quotient; the time derivative is approximated by a forward difference quotient. For a given time step size δ_t > 0, the following scheme is evaluated for each point in time \(\tau _{m}^{n} = t_{n} + m\delta _{t}\) with \(m = 1,2,\ldots ,M\), where \(M := (t_{n+1} - t_{n})/\delta _{t}\):
$$\displaystyle{ \begin{array}{ll} \tilde{\rho }_{\text{lfm}}(\vec{x}_{i,j},\tau _{m+1}^{n}) =\;&\rho _{\text{lfm}}(\vec{x}_{i,j},\tau _{m}^{n}) \\ & -\frac{\delta _{t}} {2h}\,\hat{\vec{v}}_{1}(\vec{x}_{i,j},\tau _{m}^{n})\left (\rho _{\text{lfm}}(\vec{x}_{ i+1,j},\tau _{m}^{n}) -\rho _{\text{lfm}}(\vec{x}_{ i-1,j},\tau _{m}^{n})\right ) \\ & -\frac{\delta _{t}} {2h}\,\hat{\vec{v}}_{2}(\vec{x}_{i,j},\tau _{m}^{n})\left (\rho _{\text{lfm}}(\vec{x}_{ i,j+1},\tau _{m}^{n}) -\rho _{\text{lfm}}(\vec{x}_{ i,j-1},\tau _{m}^{n})\right ) \\ & +\frac{\delta _{t}} {h^{2}}\,\nu \left (\rho _{\text{lfm}}(\vec{x}_{i+1,j},\tau _{m}^{n}) +\rho _{\text{lfm}}(\vec{x}_{i-1,j},\tau _{m}^{n}) +\rho _{\text{lfm}}(\vec{x}_{i,j+1},\tau _{m}^{n}) +\rho _{\text{lfm}}(\vec{x}_{i,j-1},\tau _{m}^{n}) - 4\rho _{\text{lfm}}(\vec{x}_{i,j},\tau _{m}^{n})\right ).\end{array} }$$
(10)
No force term is considered in this scheme, i.e., \(f \equiv 0\), since the effect of the volcano eruption is treated in a subsequent interpolation and smoothing step described later. The densities at all boundary nodes are fixed to zero and are not changed at any stage of the procedure.
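As a minimal sketch of one explicit Euler update (10) for a single horizontal layer and one species, assuming row-major storage rho[i*ny + j] and uniform grid spacing h (an illustration, not the authors’ implementation):

```cpp
#include <vector>

// One explicit Euler step of scheme (10); boundary nodes (i = 0, nx-1 or
// j = 0, ny-1) keep their value of zero, as required by the boundary condition.
void eulerStep(std::vector<double>& rho, const std::vector<double>& v1,
               const std::vector<double>& v2, int nx, int ny,
               double h, double dt, double nu) {
    const std::vector<double> old = rho;                  // rho_lfm(., tau_m)
    auto at = [ny](int i, int j) { return i * ny + j; };  // row-major index
    for (int i = 1; i < nx - 1; ++i) {
        for (int j = 1; j < ny - 1; ++j) {
            const int c = at(i, j);
            // central difference quotients for the convection term
            const double conv =
                v1[c] * (old[at(i + 1, j)] - old[at(i - 1, j)]) / (2.0 * h) +
                v2[c] * (old[at(i, j + 1)] - old[at(i, j - 1)]) / (2.0 * h);
            // five-point stencil for the Laplacian (diffusion term)
            const double diff =
                (old[at(i + 1, j)] + old[at(i - 1, j)] + old[at(i, j + 1)] +
                 old[at(i, j - 1)] - 4.0 * old[c]) / (h * h);
            rho[c] = old[c] + dt * (nu * diff - conv);    // f == 0, see text
        }
    }
}
```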

4.4 Stability

The stability of the numerical scheme (10) for the explicit Euler method for problem (8) can be studied by means of a von Neumann analysis, see e.g., Hindmarsh et al. (1984). Two stability conditions restricting the time step size can be obtained:
$$\displaystyle{ \delta _{t} \leq \frac{h^{2}} {4\nu } ,\qquad \text{and}\quad \delta _{t} \leq \frac{4\nu } {\|\vec{v}\|^{2}}. }$$
(11)
Although these conditions guarantee stability of the discrete solution, they do not guarantee that the ash concentrations remain non-negative over the simulation time. Unphysical negative values may occur if the Péclet number \(P =\|\vec{ v}\|h/(2\nu )\) is greater than one. Obviously, P does not depend on the time step size δ_t but on the velocity field \(\vec{v}\) as well as on the grid spacing h, which in our case should not be changed (e.g., by grid refinement). Therefore the Péclet number can only be controlled by means of the viscosity ν. A viscosity high enough to guarantee a Péclet number smaller than one would lead to a very strong mixing effect. This mixing would be much stronger than needed and would lead to a non-physical, overemphasized smoothing of the particle densities. Therefore, we apply a post-processing procedure in which negative particle concentration values are raised to zero.
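A short sketch of the resulting time step selection according to (11), together with the post-processing step (again illustrative code):

```cpp
#include <algorithm>
#include <vector>

// Largest time step satisfying both stability conditions (11);
// vmax denotes the maximal wind magnitude ||v|| over the layer.
double stableTimeStep(double h, double nu, double vmax) {
    return std::min(h * h / (4.0 * nu), 4.0 * nu / (vmax * vmax));
}

// Post-processing described in the text: unphysical negative
// concentrations are raised to zero.
void clipNegative(std::vector<double>& rho) {
    for (double& r : rho) r = std::max(r, 0.0);
}
```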

4.5 Discrete Volcanic Particle Injection Model

We mimic the source of ash due to the volcano eruption by imposing the time-interpolated particle concentrations in each time step within a small neighborhood of the volcano. The integration of this interpolated data into the discrete solution calculated by the low-fidelity model is guaranteed to have a smooth spatial transition. In the horizontal layer k, the linear interpolation in time of the particle concentration at the volcano’s position \(\vec{x}_{V }^{k} \in \varOmega _{k}\) is given by
$$\displaystyle{ \hat{\rho }(\vec{x}_{V },t) =\varPhi _{\mathrm{lin.intp.}}[t_{n},t_{n+1},\rho (t_{n}),\rho (t_{n+1})](\vec{x}_{V },t). }$$
(12)
Setting the interpolated particle concentration only at the volcano’s position would lead to high gradients and numerical oscillations in the surrounding domain. For a smooth and stabilizing transition, we use a weighted linear combination of \(\hat{\rho }\) and the particle concentration \(\tilde{\rho }_{\text{lfm}}\) determined by the low-fidelity model. The weighting coefficients w_ij represent a discretized Gaussian bell function on an 11 × 11 stencil. This stencil is located at the volcano position and covers a sub-domain denoted by \(\tilde{\varOmega }_{k}\). At the grid points \(\vec{x}_{ij} \in \tilde{\varOmega }_{k}\) the weights are defined by \(w_{ij} :=\exp (-\frac{8} {47}\vert \vec{x}_{ij} -\vec{ x}_{V }\vert ^{2}) \in [0,1]\) and tend to zero at the stencil boundary, see Fig. 6.
Fig. 6

Discrete Gaussian used as interpolation coefficients represented as a stencil of the size 11 × 11. Each pixel refers to one grid point in the discrete domain \(\tilde{\varOmega }_{k}\)

In each time step, after the solution has been updated according to Eq. (10), this solution is modified by
$$\displaystyle{ \rho _{\text{lfm}}(\vec{x}_{ij}\!,\tau _{m}^{n}) := w_{ ij}\hat{\rho }(\vec{x}_{ij}\!,\tau _{m}^{n}) + (1 - w_{ ij})\tilde{\rho }_{\text{lfm}}(\vec{x}_{ij}\!,\tau _{m}^{n}),\quad \text{for}\;\vec{x}_{ ij} \in \tilde{\varOmega }_{k}. }$$
(13)
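A sketch of the blending step (13) on the 11 × 11 stencil, assuming the distance |x_ij − x_V| in the weight is measured in grid units (so that the weight has decayed to roughly 0.01 at the stencil edge); rhoHat holds the time-interpolated concentrations (12), and the grid position (iV, jV) of the volcano is a hypothetical parameter:

```cpp
#include <cmath>
#include <vector>

// Blending step (13): mix the time-interpolated concentration rhoHat into
// the low-fidelity solution rho around the volcano grid position (iV, jV).
void injectVolcano(std::vector<double>& rho, const std::vector<double>& rhoHat,
                   int iV, int jV, int nx, int ny) {
    auto at = [ny](int i, int j) { return i * ny + j; };
    for (int di = -5; di <= 5; ++di) {          // 11 x 11 stencil
        for (int dj = -5; dj <= 5; ++dj) {
            const int i = iV + di, j = jV + dj;
            if (i < 0 || i >= nx || j < 0 || j >= ny) continue;
            // discrete Gaussian weight w_ij = exp(-8/47 * |x_ij - x_V|^2)
            const double w = std::exp(-8.0 / 47.0 * (di * di + dj * dj));
            rho[at(i, j)] = w * rhoHat[at(i, j)] + (1.0 - w) * rho[at(i, j)];
        }
    }
}
```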

4.6 Implementational Aspects

We implemented a discrete model of the previously described scenario in C++. The conversion of the original COSMO-ART data from the GRIB format to a structured VTK format was done in a preprocessing step. The work-flow for the reconstruction of the particle concentration between any two successive time steps t_n and t_{n+1} is listed in Algorithm 1.

Algorithm 1 Work-flow of the implementation

      Read initial snapshot ρ(t_n) using the VTK library

      Determine the highest stable time step size according to Eq. (11)

      Setup the stencil for the discrete scheme

      for m = 1…M do

          Update the ash particle concentration ρ_lfm(τ_m^n) in each horizontal layer

          if \(m\,\mathrm{mod}\,N_{vis} == 0\) then

              Visualize ρ_lfm(τ_m^n)

          end if

      end for

    Calculate the error \(\left \|\rho _{\text{lfm}}(\tau _{M}^{n}) -\rho (t_{n+1})\right \|\) if required

For this scenario, in each snapshot of the original data at least half of the domain has vanishing particle concentrations. Taking this fact into account, the computational costs for the reconstruction of the densities can be reduced significantly. The computational cost scales linearly with the number of nodes that have to be updated in each time step. In our implementation, we determined the smallest rectangle in each horizontal layer that contains all non-zero particle concentrations of both the initial and the target snapshot (1 h later), as sketched below. We applied the numerical scheme only to the nodes in these sub-domains, which led to a fraction of the original computational costs. In particular for the first intervals I_n with n < 50, the ash particles are very strongly localized.
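A sketch of this optimization: the index rectangle enclosing all non-zero concentrations of the initial and target snapshots of one layer, to which the update loop is then restricted (the Box type is our illustration):

```cpp
#include <algorithm>
#include <vector>

// Smallest index rectangle containing all non-zero concentrations of the
// initial (rho0) and target (rho1) snapshot of one layer.
struct Box { int i0, i1, j0, j1; };

Box activeBox(const std::vector<double>& rho0,
              const std::vector<double>& rho1, int nx, int ny) {
    Box b{nx, -1, ny, -1};  // empty box until a non-zero node is found
    for (int i = 0; i < nx; ++i)
        for (int j = 0; j < ny; ++j)
            if (rho0[i * ny + j] != 0.0 || rho1[i * ny + j] != 0.0) {
                b.i0 = std::min(b.i0, i); b.i1 = std::max(b.i1, i);
                b.j0 = std::min(b.j0, j); b.j1 = std::max(b.j1, j);
            }
    return b;
}
```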

The presented low-fidelity model is an approximation of the original COSMO-ART model, and therefore even its exact solution carries some error, as discussed in the next section. It is therefore justifiable to calculate only approximate solutions of moderate accuracy to increase the performance further. In numerical tests we verified that reconstructions computed in single precision differ from those computed in double precision by errors on the order of 0.01 %. This motivates the use of highly performant hardware for the data reconstruction (e.g., GPUs) that reaches its highest performance in single precision.

5 Numerical Results

In this section, we investigate the results of numerical test series with respect to the quality and also the related computational costs. We compare data reconstructions calculated by linear interpolation and by the low-fidelity model approach both qualitatively and quantitatively.

5.1 Qualitative Comparison

Fig. 7

Comparison of the reconstructed and linearly interpolated particle concentration of the ash species with the lightest particles, of weight \(1.0 \cdot 10^{-6}\,\upmu \mathrm{g}\), at the mean height of 9.8 km. The white line outlines high particle concentrations as described in the original data. For the reconstruction, a viscosity of \(\nu = 0.01\,\mathrm{km}^{2}/\mathrm{s}\) was applied. (a) Linear interpolation of the ash particle concentrations of time steps n = 96 and n = 98. (b) Result of the low-fidelity simulation from time step n = 96 to n = 98, plotted for n = 97 and n = 98

We consider three successive time steps and evaluate the reconstructions over the 2-h interval. The intermediate time step (after 1 h) serves for purposes of error evaluation only. In Fig. 7a, the reconstruction error of a linear 2-h interpolation is shown. The white line sketches the structure of the ash distribution as given in the original data at time step n = 97. At that state, the linear interpolation corresponds to the arithmetic mean and therefore contains features of both snapshots at n = 96 and n = 98. The physical evolution process is not captured correctly: high particle concentrations are reconstructed only at places where both the initial and the final snapshot have such high concentrations. In contrast, the reconstruction by means of the low-fidelity model can structurally reproduce the evolution process, see Fig. 7b. A simulation started at n = 96 indicates good agreement with the original data at n = 97 and even at n = 98.

Fig. 8

Isosurface visualization of the ash particle concentrations. The linear interpolation at n = 80.5 does not correctly capture the ash transport since only one of the two ash clouds is visualized

The aforementioned property of the interpolation approach, namely that the highest function values are not necessarily preserved, has disadvantages, as the following 3D visualization shows: Fig. 8 shows an ash distribution reconstructed by the interpolation approach and also by the low-fidelity model. The ash particle concentrations are indicated by means of an isosurface visualization. In the original data at n = 80 (left) and at n = 81 (right), the two small separated ash clouds can clearly be seen. Regarding the model-based visualization, the isosurfaces continuously move from the start position to the target position. This is indicated by the visualization after half of the interval, at n = 80.5, in the lower panel. In contrast, the concentration values computed by the linear interpolation at that time fall below the iso value used for the visualization. This physically incorrect artifact leads to a vanishing ash cloud in the upper panel of Fig. 8.

As additional material, a 3D animation of this scene (“Comparison Volcano Ash Distribution”) can be found on the Springer website http://www.springerimages.com. It shows a comparison between the original non-interpolated data, the data reconstructed by linear interpolation, and the data reconstructed using the low-fidelity model. The ash particle concentrations in the animation are represented by isosurfaces, similar to Fig. 3. The wind is indicated by colored arrows at an average height of 4.8 km above sea level. For clarity, the vertical axis is scaled by a factor of 75 relative to the original height above sea level. The linear interpolation in time gives the impression of a pulsating ash dispersion, arising from the artifact described in the previous paragraph. The model-based visualization shows a flowing transition from one snapshot to the next, with the exception of a small correction at the end of each reconstruction interval. For the animation and Fig. 8, a viscosity of \(\nu = 0.005\,\mathrm{km}^{2}/\mathrm{s}\) was applied, which seems to be a good choice as described in the following section.

5.2 Quantitative Comparison

For the quantitative examination of the reconstruction quality, we introduce an error measure that represents the error over all particle species \(s = 1,\ldots ,6\) and the snapshots \(n = 8,\ldots ,120\) by means of one scalar value. The first snapshots are neglected since no ash particles are contained therein. The error measure is defined as the mean value of the relative L_2 error \(E_{n}(\rho _{\mathrm{mode}}^{s}) :=\|\rho _{ \mathrm{mode}}^{s}(t_{n}) -\rho _{\mathrm{data}}^{s}(t_{n})\|/\|\rho _{\mathrm{data}}^{s}(t_{n})\|\) and has the form:
$$\displaystyle{ E_{\mathrm{mode}} := \frac{1} {113 \cdot 6}\sum _{n=8}^{120}\sum _{ s=1}^{6}E_{ n}(\rho _{\mathrm{mode}}^{s}). }$$
(14)
This error measure is the basis of the following quantitative evaluation.
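A sketch of the relative L_2 error E_n for one species and one snapshot, with the discrete norm taken as the Euclidean norm of the nodal values (an assumption; the article does not specify the discrete norm):

```cpp
#include <cmath>
#include <vector>

// Relative L2 error E_n = ||rhoMode - rhoData|| / ||rhoData|| for one
// species and one snapshot; E_mode in (14) is the mean of these values
// over species s = 1..6 and snapshots n = 8..120.
double relativeL2Error(const std::vector<double>& rhoMode,
                       const std::vector<double>& rhoData) {
    double num = 0.0, den = 0.0;
    for (std::size_t k = 0; k < rhoData.size(); ++k) {
        const double d = rhoMode[k] - rhoData[k];
        num += d * d;
        den += rhoData[k] * rhoData[k];
    }
    return std::sqrt(num / den);
}
```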

We expand the simulation time period from 1 to 2 h such that the error of the linear interpolation, E_interpolation, evaluated at the intermediate time stamp, can be computed. The low-fidelity simulation is initialized at the time step t_n, and the convection field as well as the particle concentrations for the volcano model are determined by linear interpolation between t_n and t_{n+2}. Thus, we compute the error with respect to the original data at t_{n+1} (denoted by E_lfm1) and at t_{n+2} (denoted by E_lfm2). It is to be expected that the linear interpolation has its greatest error at the intermediate time stamp t_{n+1}. For the low-fidelity reconstruction one can assume that the error grows in time and is therefore maximal at the end of the considered interval, at t_{n+2}. Figure 9 shows the results for this artificial set-up. The overall error of the interpolation amounts to E_interpolation ≈ 0.43. The errors obtained for the low-fidelity model are smaller for both the intermediate error E_lfm1 and the final error E_lfm2, as long as the viscosity ν is sufficiently small. For high viscosities the resulting mixing effects are too strong for the model to correctly describe the evolution of the ash plume. With decreasing viscosity the results gain the required sharpness; the error at the intermediate time drops to E_lfm1 ≈ 0.27 and at the final time step reaches a slightly higher value of E_lfm2 ≈ 0.30. Here the error curves indicate the existence of a minimum where the mixing effect captures the physics best. To conclude, for this scenario the low-fidelity model with a suitable choice of the viscosity is superior to the linear interpolation.

Fig. 9

Validation set-up: the solid lines show the errors of the interpolation and of the low-fidelity reconstruction. The dashed line represents the average computational costs in seconds for the reconstruction of the evolution from t_n to t_{n+2}

Next we consider the set-up for the actual reconstruction of the ash particle concentration within each time interval. Here, the simulations are supposed to reconstruct the evolution from one time step t_n to the next, t_{n+1}. In contrast to the validation set-up described above, the data of the convection field and the volcano model is now interpolated between one snapshot and the following one. Figure 10 indicates that the best reconstruction, with an error of E_lfm ≈ 0.19, can be expected for a viscosity of \(\nu = 0.005\,\mathrm{km}^{2}/\mathrm{s}\). This was the parameter of our choice for the final reconstruction of the evolution.

Fig. 10

Reconstruction set-up: the solid lines represent the errors of the interpolation and of the low-fidelity simulation. The dashed line shows the average computational costs in seconds for the reconstruction of the evolution from t_n to t_{n+1}

Within the range \(0.003\,\mathrm{km}^{2}/\mathrm{s} \leq \nu \leq 0.01\,\mathrm{km}^{2}/\mathrm{s}\), the low-fidelity simulations show errors of similar size. However, the computational costs increase with decreasing viscosity. This effect is explained by the direct connection between the viscosity and the stability condition on the time step size, see Eq. (11). Choosing the largest stable time step size, we get an expression for the number of needed iterations, \(N = T\|\vec{v}\|^{2}/(4\nu )\), i.e., the viscosity reciprocally governs the computational costs. The dashed lines in Figs. 9 and 10 refer to the computational costs by means of the average computing time in seconds and show an approximately linear relation on the logarithmic scales, in correspondence to the formal relation. These results were obtained in sequentially run simulations on a desktop workstation with an Intel Core i7-3770K processor (3.50 GHz, quad-core). Hence, depending on the hardware available for the data reconstruction, a compromise has to be found between accuracy and costs.

5.3 Sources of Errors and Extensions

As described previously, the orography-following mesh structure of the original data, given in a geographical coordinate system, is converted to a regular Cartesian mesh in a preprocessing step. This conversion involves an error due to the different mesh structures, which leads to a smoothing of the data. This interpolation error was accepted since the resulting simple mesh data structures facilitate the implementation of the numerical scheme.

Fig. 11

Error of the reconstructed concentration data for the different species of ash particles, calculated with a viscosity of \(\nu = 0.005\,\mathrm{km}^{2}/\mathrm{s}\). The heaviest particle species corresponds to the largest reconstruction error

One major simplification is the reduction of the three-dimensional domain to a set of independent two-dimensional slices. As a consequence, the downward movement of particles due to their weight is not considered. Figure 11 shows a plot of the reconstruction error for the different ash particle species. It can clearly be seen that the error is larger for the ash distribution of the heaviest particles. This might be caused by the disregarded effect of gravity; accounting for the gravity force would allow an error reduction of approximately 5 % for the largest particles.

Although the vertical wind is small compared to the horizontal wind, it is not zero, as shown in Fig. 5, and would transport ash particles in the vertical direction. The mixing effects in the atmosphere also contribute in the vertical direction. The consideration of the three-dimensional effects of advection and diffusion would require a fully coupled three-dimensional discrete model. This would lead to much higher computational costs compared to the presented layer approach.

A fundamental component of the proposed reconstruction approach is the model equation describing the physical processes considered during the calculation. We used a 2D version of a convection-diffusion model to account for the transport due to wind and for diffusive mixing. In Sect. 3.1 we described the COSMO-ART model used to simulate the evolution of the volcano plume. Besides the ash particle densities and the wind field, several additional quantities (e.g., aerosols, soot, mineral dust, sea salt) were considered to account for atmospheric processes that might be important for this scenario. These quantities and the corresponding equations represent possible extensions of the low-fidelity model. The important components of this full description need to be identified, which typically requires deep knowledge about the models. The quality and performance of the model-based reconstruction method essentially depend on the considered model variant, the simplifications made, and the discretization.

Reconstructions based on the fully coupled three-dimensional problem are computationally expensive in general but might be applied efficiently by means of model reduction approaches. One simple data reduction strategy is to neglect a portion of the available grid points, e.g., to use only every nth grid point in each space direction. This results in a data reduction by a factor of n³. However, reduced spatial resolution is in general related to a strong loss of details and quality (the original COSMO-ART data was reduced by the analogous technique with respect to the time structure). For many applications, very efficient reduced models can be defined by means of an orthogonal basis of the space spanned by the available snapshots. Using the proper orthogonal decomposition (POD) method (Kunisch and Volkwein 1999, 2002, 2008), many problems with a very high number of unknowns can be approximated very accurately with fewer than 100 unknowns. An important aspect is that the only costly calculation (the determination of the reduced model) needs to be done just once and can be carried out on a powerful computer system. For this type of model reduction, a model equation describing the physical behavior is required as well. For the reconstruction of the volcano ash plume based on POD techniques, the proposed convection-diffusion model, even with 3D effects of advection and diffusion, could be deployed.

The starting point of the low-fidelity model described in Sect. 4 is a parabolic partial differential equation which accounts only for the initial state of each interval. Hence, it is not guaranteed that the calculated data at the end of the reconstruction interval corresponds to the given original data at that time. In general there will be some error due to the use of the low-fidelity model, as discussed in Sect. 5. This error can be minimized by fitting existing model parameters (e.g., the viscosity ν) such that the jump-like change at the end of the reconstruction interval is as small as possible. Of course, this non-smooth behavior can be eliminated by some post-processing operation of smoothing or interpolation, but such approaches again introduce additional errors. These problems can be avoided by an alternative problem formulation that takes the states at both the beginning and the end of the reconstruction interval into account. This can be achieved by adding mixing effects related to the time structure, modeled by a temporal viscosity (e.g., a term of the form \(-\nu _{t} \cdot \partial _{t}^{2}\rho\) with viscosity ν_t > 0), leading to an elliptic problem formulated in the space-time domain. The solution of such a fully coupled system can no longer be obtained by means of time-stepping schemes as presented before, since all unknowns within the reconstruction interval are fully coupled. Such problems are typically tackled by means of sparse linear equation systems and related solution approaches (e.g., direct or iterative linear system solvers). Although the computational effort to solve such a problem might be higher than for the described time-stepping scheme, this approach can be desirable in many applications, since the reconstructed data will match all states given in the original data. To reduce the costs, adequate model reductions (e.g., POD techniques) might provide a remedy.
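As a sketch in our notation (one possible formulation, with the boundary condition and force term as in (8)), the resulting space-time problem on one layer and interval reads:
$$\displaystyle\begin{array}{rcl} -\nu _{t}\,\partial _{t}^{2}\rho +\partial _{t}\rho +\hat{\vec{ v}} \cdot \nabla \rho -\nu \varDelta \rho & =& f\quad \mathrm{in}\;\varOmega _{k} \times I_{n}, \\ \rho (t_{n}) =\rho _{n},\quad \rho (t_{n+1})& =& \rho _{n+1}\quad \mathrm{in}\;\varOmega _{k}, {}\\ \end{array}$$
so that both the initial and the final snapshot enter as data of a boundary value problem in time.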

6 Conclusion

In this article, we investigated methods for the reconstruction of time-dependent processes which are described at a finite number of points in time only. We gave a general description of such reconstruction methods. We discussed linear interpolation as one of the standard approaches and proposed a model-based reconstruction method. In addition to the given data, the latter takes into account a physical model that approximately describes the underlying processes. For the eruption of the volcano Eyjafjallajökull in Iceland in the spring of 2010, we proposed such a low-fidelity model based on the convection-diffusion equation. We described the discrete model in detail and presented reconstructions of the evolution of the volcano plume by means of six species of ash particles. The reconstructions based on linear interpolation and on the model-based approach were compared to the original data, calculated by a COSMO-ART model. Although the low-fidelity model was only a rough approximation of the original one, it led to much smaller reconstruction errors than linear interpolation. Such results motivate the use of corresponding low-fidelity models in various fields of application for purposes of scientific visualization or post-processing in general. Finally, some possible extensions of the model for the volcano scenario were discussed.

The definition of an adequate low-fidelity model may be non-trivial, since detailed knowledge about the physics, the discretization, and also software programming is needed. The better the reconstruction model suits the original problem, the less data is needed to achieve a high reconstruction quality. In that sense, powerful reconstruction approaches allow for substantial reductions of the amount of data needed to represent simulation results. While the original numerical simulations are usually conducted on large parallel computer systems, the data reconstruction might be calculated even on a standard desktop computer. Such efficient reconstruction techniques are essential for interactive visualization purposes.

The specific selection of a model equation is certainly a key point, since it determines the physical processes that will be considered during the reconstruction. On the one hand, the model must be able to describe the relevant features given in the data on an appropriate spatial and temporal scale; on the other hand, it should not be too complex, both with respect to software engineering and to the computational costs. In the various fields of science and engineering, a great diversity of physical features is considered, for which many different reconstruction strategies are required. Reduced models for standard processes (such as the convection-diffusion equation) could be defined universally, at least for certain problem classes. For highest efficiency, the reduced model might require adaptation to the particular application, or at least existing model parameters might need to be adjusted.

By the freedom to choose an arbitrary low-fidelity model, the user can control the quality of the reconstruction and also the related computational costs. This allows the definition of highly performant methods for applications in which real-time availability or interactivity of the reconstructed result is relevant. Such methods could be integrated into visualization systems such as Amira (Kon 2009), EnSight (Com 2006), or ParaView (Henderson 2007). In the case of ParaView, an extensive variety of filters exists that can be included in the visualization pipeline to manipulate the data. In particular, an often-used filter for linear time interpolation exists. Model-based reconstruction methods could be integrated by simply adding a further filter, which can easily be done since ParaView is an open-source development. If not high performance but high quality of the reconstruction is needed for some point in time, a more complex model – in the extreme case the original model – could be applied for the reconstruction.

In recent years, numerical methods have been developed which make use of different physical model equations at the same time, see e.g., Oden and Prudhomme (2002), Braack and Ern (2003), Bales et al. (2009). This is often realized by a posteriori estimation of the error related to the applied models and its control by switching between a selection of available physical models of different complexities, known under the term model adaptivity. Such developments motivate thinking of the numerical simulation and the reconstruction no longer as two separate steps. Instead, a model system can be seen as some abstract mechanism that describes the physics in a given scenario up to some (needed) accuracy. From that perspective, a desired visualization of some process serves as the initiating impulse causing the calculation of a numerical simulation of the corresponding scene. Visualization is then no longer a post-processing step of the simulation but part of the model system. Such a combined simulation and visualization system would allow for physics-aware visualizations of zoomed views based on more complex physical models instead of purely interpolated images. The discretization (spatial mesh and time partitioning) as well as the complexity of the physical model could be adjusted automatically by means of mathematical error estimators to guarantee high accuracy of the result.

Footnotes

  1.

    Martin Baumann, present affiliation: Heidelberg University Computing Centre (URZ), Heidelberg, Germany

    Vincent Heuveline, present affiliation: Engineering Mathematics and Computing Lab (EMCL), Interdisciplinary Center for Scientific Computing (IWR), Heidelberg University, Heidelberg, Germany

    Jonas Kratzke, present affiliation: Engineering Mathematics and Computing Lab (EMCL), Interdisciplinary Center for Scientific Computing (IWR), Heidelberg University, Heidelberg, Germany

References

  1. Amira 5 User’s Guide. Konrad-Zuse-Zentrum für Informationstechnik Berlin (ZIB) and Visage Imaging (2009). http://www.amira.com
  2. Athanasopoulou E, Vogel H, Vogel B, Tsimpidi AP, Pandis SN, Knote C, Fountoukis C (2013) Modeling the meteorological and chemical effects of secondary organic aerosols during an EUCAARI campaign. Atmos Chem Phys 13(2):625–645. doi:10.5194/acp-13-625-2013
  3. Avila LS (2004) The VTK user’s guide. Kitware. ISBN:1-930934-13-0
  4. Baldauf M, Seifert A, Förstner J, Majewski D, Raschendorfer M, Reinhardt T (2011) Operational convective-scale numerical weather prediction with the COSMO model: description and sensitivities. Mon Weather Rev. doi:10.1175/MWR-D-10-05013.1
  5. Bales P, Kolb O, Lang J (2009) Hierarchical modelling and model adaptivity for gas flow on networks. Volume 5544 of Lecture notes in computer science. Springer, pp 337–346. ISBN:978-3-642-01969-2
  6. Bangert M, Nenes A, Vogel B, Vogel H, Barahona D, Karydis VA, Kumar P, Kottmeier C, Blahak U (2012) Saharan dust event impacts on cloud formation and radiation over Western Europe. Atmos Chem Phys 12(9):4045–4063. doi:10.5194/acp-12-4045-2012
  7. Bonneau GP, Ertl T, Nielson G (2006) Scientific visualization: the visual extraction of knowledge from data. Mathematics and visualization. Springer, Heidelberg
  8. Braack M, Ern A (2003) A posteriori control of modeling errors and discretization errors. Multiscale Model Simul 1(2):221–238
  9. EnSight User Manual. Computational Engineering International, Inc., 2166 N. Salem Street, Suite 101, Apex, NC 27523 (2006). http://www.ensight.com
  10. Henderson A (2007) ParaView guide, a parallel visualization application. Kitware Inc. http://www.paraview.org/
  11. Hindmarsh AC, Gresho PM, Griffiths DF (1984) The stability of explicit Euler time-integration for certain finite difference approximations of the multi-dimensional advection-diffusion equation. Int J Numer Methods Fluids 4(9):853–897. doi:10.1002/fld.1650040905
  12. Introduction to GRIB. World Meteorological Organization, June 2003
  13. Knote C, Brunner D (2013) An advanced scheme for wet scavenging and liquid-phase chemistry in a regional online-coupled chemistry transport model. Atmos Chem Phys 13(3):1177–1192. doi:10.5194/acp-13-1177-2013
  14. Kunisch K, Volkwein S (1999) Control of the Burgers equation by a reduced-order approach using proper orthogonal decomposition. J Optim Theory Appl 102(2):345–371. doi:10.1023/A:1021732508059
  15. Kunisch K, Volkwein S (2002) Galerkin proper orthogonal decomposition methods for a general equation in fluid dynamics. J Numer Anal 40(2):492–515
  16. Kunisch K, Volkwein S (2008) Optimal snapshot location for computing POD basis functions. SFB-report, 2008-008
  17. Oden JT, Prudhomme S (2002) Estimation of modeling error in computational mechanics. J Comput Phys 182(2):496–515. doi:10.1006/jcph.2002.7183
  18. Ritter B, Geleyn JF (1992) A comprehensive radiation scheme for numerical weather prediction models with potential applications in climate simulations. Mon Weather Rev 120(2):303–325. doi:10.1175/1520-0493(1992)120<0303:ACRSFN>2.0.CO;2
  19. Seifert A, Beheng KD (2006) A two-moment cloud microphysics parameterization for mixed-phase clouds. Part 1: model description. Meteorol Atmos Phys 92:45–66. doi:10.1007/s00703-005-0112-4
  20. Steppeler J, Doms G, Schättler U, Bitzer HW, Gassmann A, Damrath U, Gregoric G (2003) Meso-gamma scale forecasts using the nonhydrostatic model LM. Meteorol Atmos Phys 82:75–96. doi:10.1007/s00703-001-0592-9
  21. Vogel B, Vogel H, Bäumer D, Bangert M, Lundgren K, Rinke R, Stanelle T (2009) The comprehensive model system COSMO-ART – radiative impact of aerosol on the state of the atmosphere on the regional scale. Atmos Chem Phys 9(22):8661–8680. doi:10.5194/acp-9-8661-2009
  22. Vogel H, Förstner J, Vogel B, Hanisch Th, Mühr B, Schättler U, Schad T (2013) Simulation of the dispersion of the Eyjafjallajökull plume over Europe with COSMO-ART in the operational mode. Atmos Chem Phys Discuss 13(5):13439–13463

Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  • Martin Baumann
    • 1
  • Jochen Förstner
    • 2
  • Vincent Heuveline
    • 1
  • Jonas Kratzke
    • 1
  • Sebastian Ritterbusch
    • 1
  • Bernhard Vogel
    • 3
  • Heike Vogel
    • 3
  1. Engineering Mathematics and Computing Lab (EMCL), Karlsruhe Institute of Technology, Karlsruhe, Germany
  2. German Weather Service (DWD), Offenbach, Germany
  3. Institute for Meteorology and Climate Research (IMK), Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
