## Abstract

As computational astrophysics comes under pressure to become a precision science, there is an increasing need to move to high accuracy schemes. The algorithmic needs of computational astrophysics are indeed very special. The methods need to be robust and preserve the positivity of density and pressure. Relativistic flows should remain sub-luminal. These requirements place additional pressures on a computational astrophysics code, which are usually not felt by a traditional fluid dynamics code. Hence the need for a specialized review. The focus here is on weighted essentially non-oscillatory (WENO) schemes, discontinuous Galerkin (DG) schemes and PNPM schemes. WENO schemes are higher order extensions of traditional second order finite volume schemes. At third order, they are most similar to piecewise parabolic method schemes, which are also included. DG schemes evolve all the moments of the solution, with the result that they are more accurate than WENO schemes. PNPM schemes occupy a compromise position between WENO and DG schemes. They evolve an Nth order spatial polynomial, while reconstructing higher order terms up to Mth order. As a result, the timestep can be larger. Time-dependent astrophysical codes need to be accurate in space and time, with the result that the spatial and temporal accuracies must be matched. This is realized with the help of strong stability preserving Runge–Kutta schemes and ADER (Arbitrary DERivative in space and time) schemes, both of which are also described. The emphasis of this review is on computer-implementable ideas, not necessarily on the underlying theory.

## Introduction

The first overarching goal of this review is to document several higher-order methods that can now be applied to simulations in computational astrophysics. In that sense, the review seeks to bring the computational astrophysics community and the higher-order numerical methods community closer together. Even this is a daunting task because computational astrophysics has its own inner requirements. For example, for some very good reasons, computational astrophysicists prefer to have mimetic schemes for non-relativistic magnetohydrodynamics (MHD) and relativistic MHD (RMHD). Likewise, astrophysical computations usually involve stiff source terms and non-ideal effects. For that reason, this review has been split into two parts. Part I, which is this review, introduces higher-order finite volume methods to the greater computational astrophysics community. Part II, which will be a subsequent review, will present many nuances in constraint preserving schemes, along with the treatment of stiff source terms, to the computational astrophysics community.

The second overarching goal is to show the astrophysics community that astrophysics codes are easy to understand if they are studied from the inside out. In other words, all these computational astrophysical fluid dynamics codes are based on a common core of algorithms. Usually, young computational astrophysicists are taught about a code from the outside in. That is, they learn what the inputs are and what the outputs ought to be for a specific code; but the inner workings of the code remain a mystery. By understanding the common algorithmic core, the computational astrophysical fluid dynamics codes can be demystified.

The methods presented in this review have been developed in the literature over the last several years. However, this review differs from other reviews because astrophysicists like to minimize mathematical notation and they also like to make the numerical method operationally explicit. This review minimizes the mathematical notation and displays all formulae explicitly, as much as possible. In some instances, making the numerical methods more transparent for astrophysicists has also yielded important innovations and simplifications that are catalogued here. Each useful method is followed by a box that explicitly catalogues the major steps that go into implementing the method. A sequence of pedagogically designed lectures on this topic is also available on the author’s website (http://www.nd.edu/~dbalsara/Numerical-PDE-Course). Several illustrative codes are also available from that website. The interested reader may also want to see the author’s contributions to the 2016 Les Houches lectures on Computational Astrophysics (http://comp-phys-2016.sciencesconf.org/), which also include illustrative codes.

Because of the scope of this review, we divide this introduction into four parts. The first part focuses on the partial differential equations (PDE) systems of interest in astrophysics, cosmology and relativity. The second part focuses on achieving spatially high order of accuracy for hyperbolic PDE systems. The third part focuses on achieving high order of temporal accuracy. The fourth part gives us some useful preliminaries on hyperbolic systems.

### Focus on the PDE systems of interest to computational astrophysics

From its start in the 1970s, computational astrophysics has blossomed into a vibrant field that has been applied to many sub-disciplines of astrophysics, cosmology and numerical relativity. While it would be impossible to make a comprehensive list, these sub-disciplines include most types of origins questions. Thus, computational astrophysicists simulate the origins of the cosmos through cosmological simulations, the origin of stars and planetary systems around stars, the turbulent environments in molecular clouds and the interstellar medium, accretion processes around stars, compact objects and black holes, convection in stars, nova and supernova explosions and the interaction of neutron stars and black holes to produce gravitational radiation. In all these fields, simulating the origin and evolution of an astrophysical system entails accurately evolving given initial conditions forward in time with spatial and temporal accuracy. The availability of PetaScale computers and the anticipated availability of ExaScale supercomputers in the next five years ensures that ever more detailed computations will be undertaken. Furthermore, the presence of ground-based and space-based observational facilities that can measure astrophysical processes with precision puts some pressure on computational astrophysics to move towards becoming a precision science. Astrophysicists have realized that turbulence regulates various astrophysical processes, like star formation, stellar convection and the physics of the galactic interstellar medium. Accurately simulating turbulence also requires the use of highly accurate numerical methods.

There is an emerging interest amongst astrophysicists to carry out precise, high-accuracy simulations to support observational projects. Powerful, massively-parallel computers and GPU co-processors also make it possible to invest in computational methods that might be a little more computationally costly but provide a much more precise answer. The differential gain in accuracy per unit increase in computational cost is such as to favor the implementation of high accuracy schemes for computational astrophysics on modern computational architectures. Most astrophysical codes simulate a hyperbolic system with perhaps some additional contributions from an elliptic sector, parabolic terms or stiff source terms. For that reason too, the focus here is on hyperbolic systems. Most questions about origins in computational astrophysics entail simulating the time-evolution of astrophysical objects. For that reason, we are interested in time-dependent higher-order methods for the simulation of hyperbolic systems. It is worth pointing out that the methods that are used in computational astrophysics today were invented by astrophysicists, engineers, space physicists, mathematicians and computational scientists from many different research areas. Just as computational astrophysicists have been willing to assimilate good ideas from all these allied disciplines, they have also contributed to them. For that reason, many of the methods discussed in this review could be broadly useful to computationalists in other STEM areas, and the cross-fertilization of ideas between disciplines is always a good thing.

The hyperbolic systems of interest include, but are not restricted to, the Euler equations, the non-relativistic magnetohydrodynamic (MHD) equations, relativistic hydrodynamics (RHD) and relativistic MHD (RMHD). Initial interest focused on the Euler equations (Godunov 1959; van Leer 1974, 1977, 1979; Norman et al. 1980; Roe 1981; Harten 1983; Woodward and Colella 1984; Colella and Woodward 1984; Sweby 1984; Osher and Chakravarthy 1984; Tadmor 1988; Colella 1990; Berger and Colella 1989; Stone and Norman 1992a; Colella and Sekora 2008; McCorquodale and Colella 2011). However, it soon became apparent that the Euler equations were just one specific instance of a hyperbolic system. Appendix A gives useful information about the Euler equations viewed as a hyperbolic system.

Non-relativistic MHD next saw an initial spurt of interest where it was treated as a hyperbolic system (Brio and Wu 1988; Stone and Norman 1992b; Dai and Woodward 1994; Ryu and Jones 1995; Roe and Balsara 1996; Cargo and Gallice 1997; Balsara 1998a, b; Falle et al. 1998; Gurski 2004; Li 2005; Crockett et al. 2005; Miyoshi and Kusano 2005; Fuchs et al. 2011; Chandrashekar and Klingenberg 2016; Winters and Gassner 2016; Winters et al. 2017; Derigs et al. 2017). The realization that the magnetic field should be divergence-free (Brackbill and Barnes 1980; Brackbill 1985; Brecht et al. 1981; Evans and Hawley 1989; DeVore 1991) has prompted a lot of subsequent work in the field of constrained transport (CT) schemes for MHD (Ryu et al. 1998; Dai and Woodward 1998; Balsara and Spicer 1999a, b; Balsara 2001a, 2004, 2009; Londrillo and Del Zanna 2004; Gardiner and Stone 2005, 2008; Balsara et al. 2009, 2013; Dumbser et al. 2013; Balsara and Dumbser 2015a, b; Xu et al. 2016). Recently, the need to achieve multidimensional upwinding has led to the development of multidimensional Riemann solvers that are efficient and easy to implement (Balsara 2010, 2012a, 2014, 2015; Balsara et al. 2014; Vides et al. 2015; Balsara et al. 2016a; Balsara and Käppeli 2017). Appendix B gives useful information about the MHD equations viewed as a hyperbolic system.

Soon after the onset of interest in MHD, there was also a burst of interest in developing higher-order Godunov schemes for relativistic hydrodynamics (Martí et al. 1991; Marquina et al. 1992; Eulderink 1993; Balsara 1994; Font et al. 1994; Martí and Müller 1994; Marquina 1994; Eulderink and Mellema 1995; Falle and Komissarov 1996; Aloy et al. 1999; Pons et al. 2000; Rezzolla and Zanotti 2001; Font 2003; Martí and Müller 2003; Ryu et al. 2006). The interested reader can also see the recent review by Martí and Müller (2015), and the textbook by Rezzolla and Zanotti (2013). That interest in relativistic hydrodynamics transitioned into a burgeoning interest in numerical relativistic MHD which continues to this day (Anile 1989; Komissarov 1999; Balsara 2001b; Koide et al. 2001; Gammie et al. 2003; Giacomazzo and Rezzolla 2006, 2007; Del Zanna et al. 2003, 2007; Noble et al. 2006; Komissarov 2006; Mignone and Bodo 2006; Tchekhovskoy et al. 2007; Mignone et al. 2009; Dumbser and Zanotti 2009; Anton et al. 2010; Beckwith and Stone 2011; McKinney et al. 2014; Kim and Balsara 2014; Radice et al. 2014; Zanotti and Dumbser 2016; White et al. 2016; Balsara and Kim 2016). Higher-order schemes and multidimensional Riemann solvers have also been developed for relativistic MHD (RMHD). Appendix C gives some useful pointers for the RHD and RMHD equations.

### Numerical methods for higher-order spatial accuracy

The previous paragraphs have paid due attention to the most important PDE systems of interest in astrophysics. To be sure, there are many further systems of PDEs that will become interesting to astrophysicists in the future. Let us now turn our attention to the solution methodologies. Astrophysicists have been amongst the earliest developers of numerical methods for fluid dynamics (LeBlanc and Wilson 1970; Norman et al. 1980; Hawley et al. 1984). However, the distinction of being the most prescient developer of fluid dynamics methods falls to Bram van Leer, who started his intellectual life as an astronomer and subsequently left the field! In an intellectual tour de force, van Leer (1974, 1977, 1979) developed a second order accurate extension to a first order accurate method by Godunov (1959). This launched the field of higher-order Godunov schemes which have gone on to become the most successful class of methods for numerically treating all manner of hyperbolic systems of partial differential equations (PDEs). Van Leer’s (1979) paper has been cited over 5000 times at the time of this writing! Higher-order Godunov methods offer robust performance over a broad range of physical conditions and for a large number of hyperbolic PDE systems. They do have their pitfalls, but their pitfalls have been well-documented in the literature and suitable fixes that overcome those pitfalls have been devised. For that reason, this review focuses on higher-order Godunov schemes. Progress in this field came rapidly on the heels of van Leer’s seminal papers. Since Godunov methods rely on Riemann solvers to provide upwinding and entropy-enforcement at discontinuities, a large number of very efficient Riemann solvers have been devised (Rusanov 1961; van Leer 1979; Roe 1981; Harten 1983; Osher and Solomon 1982; Harten et al. 1983; Colella 1985; Einfeldt 1988; Einfeldt et al. 1991; Toro et al. 1994; Batten et al. 1997; Liou et al. 
1990; Liou and Steffen 1993; Liou 1996, 1998, 2006; Zha and Bilgen 1993; Ismail and Roe 2009; Toro and Vázquez-Cendón 2012; Chandrashekar 2013; Dumbser and Balsara 2016). It was also soon realized that higher-order Godunov methods achieve their stability because they restrict the reconstructed profiles within each zone so as to avoid producing spurious extrema. This gave rise to the emergence of total variation diminishing (TVD) schemes (Harten 1983; Sweby 1984; Tadmor 1988) which used piecewise linear reconstructed profiles within each zone. Inclusion of parabolic reconstruction profiles, instead of linear ones, gave rise to the piecewise parabolic method (PPM) (Woodward and Colella 1984; Colella and Woodward 1984; Colella and Sekora 2008; McCorquodale and Colella 2011). PPM has proved to be very popular with astronomers because it gives reasonably good quality solutions at a modest computational cost.
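The piecewise linear TVD idea described above is easy to make operationally explicit. The following is a minimal sketch (the function names are our own, and a uniform one-dimensional mesh with a scalar variable is assumed) of reconstruction with the classic minmod limiter; it is illustrative rather than a production implementation.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: pick the smaller slope when the signs agree,
    and return zero (i.e., no slope) when they disagree."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def tvd_linear_reconstruction(u):
    """Piecewise-linear TVD reconstruction of zone-averaged data u.

    Returns the limited slope in each interior zone along with the
    reconstructed values at the left and right faces of that zone
    (uniform mesh, dx = 1 for simplicity)."""
    du_left = u[1:-1] - u[:-2]    # backward difference
    du_right = u[2:] - u[1:-1]    # forward difference
    slope = minmod(du_left, du_right)
    u_minus = u[1:-1] - 0.5 * slope   # value at the left face of the zone
    u_plus = u[1:-1] + 0.5 * slope    # value at the right face of the zone
    return slope, u_minus, u_plus
```

Note how the limiter returns a zero slope at a local extremum; this is precisely the non-linear hybridization that prevents the reconstruction from creating spurious new extrema.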

The PPM method introduced the idea of “reconstruction by primitive” which subsequently formed an integral part of essentially non-oscillatory (ENO) schemes (Harten et al. 1986; Shu 1988; Shu and Osher 1989). ENO schemes provided a pathway to increasingly high orders of accuracy. However, the early ENO schemes had their own deficiencies owing to the sudden shifts in the reconstruction stencil (Rogerson and Meiburg 1990). With the advent of weighted essentially non-oscillatory (WENO) schemes a natural path was found for designing schemes of increasingly high order of accuracy (Liu et al. 1994; Jiang and Shu 1996; Friedrichs 1998; Balsara and Shu 2000; Levy et al. 2000; Deng and Zhang 2005; Käser and Iske 2005; Henrick et al. 2006; Dumbser and Käser 2007; Borges et al. 2008; Shu 2009; Gerolymos et al. 2009; Castro et al. 2011; Liu and Zhang 2013; Zhu and Qiu 2016; Balsara et al. 2016c; Cravero and Semplice 2016; Semplice et al. 2016). WENO schemes will form an important fraction of this review, partly because of their intrinsic interest and partly because of their role as limiters for the DG schemes that we will introduce very shortly. A WENO scheme spatially reconstructs all the moments (except the 0th moment) of an \(\hbox {M}\)th order polynomial so as to provide \((\hbox {M}+1)\)th order of spatial accuracy.
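The essential mechanics of WENO can be conveyed at third order with just two stencils. The sketch below (our own illustrative function, in the style of the Jiang–Shu weighting; the constant `eps` guards against division by zero) reconstructs the value at the right face of a zone from three zone averages. Where the data is smooth, the non-linear weights revert to the linear weights and third order accuracy results; near a discontinuity, the weights shift to the smooth stencil.

```python
import numpy as np

def weno3_interface(um1, u0, up1, eps=1.0e-6):
    """Third-order WENO reconstruction of the value at the right face
    x_{i+1/2} from the zone averages u_{i-1}, u_i, u_{i+1}."""
    # Candidate linear polynomials evaluated at x_{i+1/2}
    p0 = -0.5 * um1 + 1.5 * u0    # left-biased stencil {i-1, i}
    p1 = 0.5 * u0 + 0.5 * up1     # centered stencil   {i, i+1}
    # Smoothness indicators: squared undivided differences
    beta0 = (u0 - um1) ** 2
    beta1 = (up1 - u0) ** 2
    # Linear (optimal) weights that give third order in smooth regions
    d0, d1 = 1.0 / 3.0, 2.0 / 3.0
    a0 = d0 / (beta0 + eps) ** 2
    a1 = d1 / (beta1 + eps) ** 2
    w0, w1 = a0 / (a0 + a1), a1 / (a0 + a1)
    return w0 * p0 + w1 * p1
```

For smooth monotone data both candidate polynomials agree with the underlying function and the reconstruction is exact for linear profiles; for data with a large jump on one side, the weight of the non-smooth stencil collapses towards zero.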

The observant reader may well ask whether all of these moments can be reasonably reconstructed? The quick answer is that indeed they can be reconstructed. However, one can still ask whether there is a way of evolving all these moments consistent with the dynamics? This is where discontinuous Galerkin (DG) schemes step in because they give us a logical way of evolving all the higher moments in a way that is consistent with the dynamics. Let us consider a simple example that enables us to compare and contrast WENO schemes with DG schemes. A fourth order CWENO (centered WENO) scheme would reconstruct the linear, parabolic and cubic moments at each timestep while evolving only the zone averaged value of the flow variable (i.e., the zeroth moment). On the other hand, a fourth order DG scheme would use a Galerkin projection procedure to develop evolutionary equations for the evolution of not just the zeroth moment, but also the first, second and third moments (i.e., the linear, parabolic and cubic terms in one dimension). These additional evolutionary equations can be designed consistent with the governing PDE (i.e., the underlying dynamics). The reader can now appreciate why a fourth order DG scheme would be more accurate than its WENO counterpart. This advantage persists at all orders.
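The “moments” in this discussion can be made concrete. Within a reference zone, a DG scheme stores the coefficients of the solution in an orthogonal (here Legendre) basis; those modal coefficients are obtained from the solution by a Galerkin projection. The sketch below (our own illustrative helper functions, with the reference zone taken to be \([-1,1]\)) computes the zeroth through third Legendre moments of a function and evaluates the resulting modal expansion.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def dg_moments(f, M=3, nq=6):
    """Project a function onto the Legendre modes P_0 .. P_M on the
    reference zone [-1, 1]:  uhat_k = (2k+1)/2 * integral of f * P_k.
    These modal coefficients are the 'moments' that a DG scheme evolves
    (a WENO scheme would instead reconstruct the k >= 1 moments)."""
    x, w = leggauss(nq)   # Gauss-Legendre quadrature nodes and weights
    moments = []
    for k in range(M + 1):
        Pk = legval(x, [0.0] * k + [1.0])   # P_k evaluated at the nodes
        moments.append((2 * k + 1) / 2.0 * np.sum(w * f(x) * Pk))
    return np.array(moments)

def dg_eval(moments, x):
    """Evaluate the modal expansion sum_k uhat_k P_k(x)."""
    return legval(x, moments)
```

For example, projecting \(u(x)=x^2\) yields the moments \((1/3,\,0,\,2/3,\,0)\), since \(x^2 = \tfrac{1}{3}P_0 + \tfrac{2}{3}P_2\), and the expansion reproduces the function exactly.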

Having obtained an intuitive background on DG schemes, let us now document the history of these schemes. DG methods occupy an intermediate place between full Galerkin/Spectral methods which assume that the solution is described by a basis set that extends over the whole domain (think of a Fourier method) and a finite volume TVD/PPM/WENO method which assumes that the solution is specified by slabs of fluid within each zone at the beginning of each timestep. In a DG method, the solution is specified as having a certain number of moments within each zone at the beginning of each timestep. DG schemes were initially invented for solving neutron transport problems (Reed and Hill 1973). Understanding how to incorporate many of the nicer features of finite volume methods in DG schemes took over a decade of development. Cockburn and Shu (1989) made the first breakthrough for scalar hyperbolic conservation laws with the following three advances. First, by endowing each element (zone) with an \(\hbox {M}\)th order polynomial, they were able to show that \((\hbox {M}+1)\)th order of spatial accuracy could be achieved. Second, to match the spatial accuracy, they proposed the use of an \((\hbox {M}+1)\)th order Runge–Kutta timestepping for the time evolution. Third, they generalized a slope limiter method to yield TVB (total variation bounded) limiting. Extension to systems and to multiple dimensions came in Cockburn et al. (1989) and Cockburn et al. (1990) and Cockburn and Shu (1998) where it was realized that fluxes from Riemann solvers could be used at zone boundaries in order to provide upwinding and to stabilize the scheme.

The DG methods have the following four significant advantages which make them attractive for computational astrophysics: (i) DG methods of arbitrarily high order can be formulated. (ii) DG methods are highly parallelizable. (iii) DG methods can handle complicated geometries. (iv) DG methods take very well to adaptive mesh refinement (AMR). Furthermore, the degree of the approximating polynomial can be easily changed from one element to the other. Spatial refinement is often referred to as h-adaptivity, where “h” stands for the size of a cell. The adaptivity in the approximating polynomial is referred to as p-adaptivity, where “p” stands for the order of the approximating polynomial. As a result, while simpler finite volume methods can undergo h-adaptivity on an AMR mesh, a DG scheme has the potential to undergo hp-adaptivity (Biswas et al. 1994). For other reviews of DG schemes, see Cockburn et al. (2000), Hesthaven and Warburton (2008). As of this writing, DG schemes have begun to make inroads in computational astrophysics, cosmology and general relativity (Mocz et al. 2014; Schaal et al. 2015; Zanotti et al. 2015; Teukolsky 2015; Kidder et al. 2017; Balsara and Käppeli 2017; Balsara and Nkonga 2017).

We will examine DG schemes in the course of this review. Like all numerical schemes for treating non-linear hyperbolic systems, DG schemes need some form of non-linear limiting. Indeed, the quality of a DG scheme depends strongly on the limiter that is being used. If the limiter is invoked too frequently, it damages the quality of the solution. If the limiter is invoked less than it needs to be invoked, the code develops spurious oscillations that have a negative effect on the solution. Several limiters have been presented over the years (Biswas et al. 1994; Burbeau et al. 2001; Qiu and Shu 2004, 2005; Balsara et al. 2007; Krivodonova 2007; Zhu et al. 2008; Xu et al. 2009a, b, c; Xu and Lin 2009; Xu et al. 2011; Zhu and Qiu 2011; Zhong and Shu 2013; Zhu et al. 2013; Dumbser et al. 2014). The problem is that no consensus has coalesced around any one particular limiter. For that reason, we will present two viable strategies for limiting DG schemes. The first strategy is based on WENO limiting; it is simple to retrofit into any pre-existing DG code and seems to work well (Zhong and Shu 2013; Zhu et al. 2013). This WENO limiter acts in an *a priori* fashion in the sense that the limiter is applied to troubled zones that need limiting *before* (at the beginning of) taking a DG timestep. Since limiters are applied at the beginning of a timestep in all other schemes for solving hyperbolic PDEs, this is the traditional style of using limiters. That makes the WENO limiter for DG schemes easy to retrofit into pre-existing codes. The other approach consists of the MOOD (Multi-dimensional Optimal Order Detection) method (Clain et al. 2011; Diot et al. 2012, 2013; Dumbser et al. 2014). The MOOD limiter is an *a posteriori* limiter in the sense that one initially takes a timestep without invoking any limiter. As a result, some of the zones that should have been limited will indeed be corrupted by the end of a timestep.
*After* the timestep has been taken one identifies the corrupted zones, i.e., the zones where a limiter should have been invoked (but wasn’t). Then one tries to backtrack and redo the timestep in those zones that got corrupted. This process of backtracking and redoing can indeed take place more than once. Needless to say, MOOD limiting results in a DG code that is recursive and difficult to implement. The one virtue of MOOD limiting for DG is, however, that one only invokes the limiter in those zones where it is absolutely needed. Unlike the WENO limiter, which may apply more limiting than the absolute minimum that is needed, MOOD limiting will usually apply just the minimum amount of limiting. Since the MOOD limiter is based on heuristics, one cannot however claim that it always applies the minimum amount of limiting. On idealized problems, MOOD limiting for DG schemes has produced charming results.
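The a posteriori detect-and-redo cycle can be conveyed with a deliberately simple sketch. The code below is MOOD-*style* rather than any published MOOD scheme verbatim: it advances scalar advection with unlimited Lax–Wendroff fluxes, flags zones whose candidate update violates a discrete maximum principle, demotes the fluxes on the faces of the flagged zones to first order upwind, and redoes the update. Because the fallback acts on fluxes, conservation is preserved.

```python
import numpy as np

def mood_advection_step(u, c):
    """One MOOD-style a posteriori step for u_t + u_x = 0 on a periodic
    mesh with c = dt/dx <= 1.  Illustrative sketch only:
      1) candidate step with unlimited Lax-Wendroff fluxes;
      2) flag zones that violate a discrete maximum principle;
      3) demote the faces of flagged zones to first order upwind fluxes
         and redo the (still conservative) update."""
    up = np.roll(u, -1)                               # u_{i+1}
    flux_lw = 0.5 * (u + up) - 0.5 * c * (up - u)     # LW flux at i+1/2
    flux_up = u                                       # upwind flux at i+1/2

    def update(flux):
        return u - c * (flux - np.roll(flux, 1))

    lo = np.minimum(np.minimum(u, np.roll(u, 1)), np.roll(u, -1))
    hi = np.maximum(np.maximum(u, np.roll(u, 1)), np.roll(u, -1))
    flux = flux_lw.copy()
    for _ in range(u.size + 1):   # each pass demotes at least one new face
        cand = update(flux)
        troubled = (cand < lo - 1e-12) | (cand > hi + 1e-12)
        if not troubled.any():
            break
        bad_face = troubled | np.roll(troubled, -1)   # faces of troubled zones
        flux = np.where(bad_face, flux_up, flux)
    return update(flux)
```

As the text notes, the payoff is that the first order fallback is invoked only where it is actually needed; the price is the recursive detect-and-redo structure, which is considerably more intrusive in a real DG code than in this toy setting.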

As the order of accuracy of a DG scheme is increased, the permissible CFL number decreases (Zhang and Shu 2005; Liu et al. 2008). The previous two citations showed this for zone-centered DG methods that apply to conservation laws. An analogous reduction in the permissible timestep occurs for face-centered DG schemes for constrained transport (CT) of the magnetic field (Yang and Li 2016; Balsara and Käppeli 2017). To give but one example, the permissible CFL number for a DG scheme that is fourth order accurate in space and time can be as small as 0.14! For this reason, we invented PNPM methods (Dumbser et al. 2008). The PNPM scheme evolves an \(N\)th order spatial polynomial, while reconstructing higher-order terms up to \(M\)th order. Let us consider fourth order methods as an example. A P0P3 method is fourth order accurate and is effectively a fourth order finite volume scheme with a maximum CFL number of 1.0 in one dimension. A P1P3 method evolves the zone averaged value as well as the first moment, while reconstructing the second and third moments. It has a maximum CFL number of 0.33, comparable to that of a DG method that is second order in space and time. A P2P3 scheme evolves the zone averaged value as well as the first and second moments, while reconstructing the third moment. It has a maximum CFL number of 0.17, comparable to that of a DG method that is third order in space and time. A P3P3 method is basically a fourth order DG method with a CFL number of 0.10 when spatial and temporal accuracies are matched. We see, therefore, that it might be beneficial to use PNPM schemes with \(N<M\). Experience has shown that P1PM or P2PM schemes often give most of the sought-after accuracy of a PMPM scheme. This has been borne out via numerical experiments in Dumbser et al. (2008) for conservation laws and in Balsara and Käppeli (2017) for DG schemes for constrained transport of magnetic fields. For this reason, PNPM schemes will also form part of our study.
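The shrinking timestep with increasing order follows a well-known rule of thumb from the RKDG literature: for polynomials of degree \(k\) paired with a Runge–Kutta scheme of order \(k+1\), the maximum CFL number is approximately \(1/(2k+1)\). This is an estimate rather than a sharp bound for every configuration, but note that \(1/7 \approx 0.143\) is consistent with the fourth order figure of 0.14 quoted above.

```python
def rkdg_max_cfl(poly_degree):
    """Commonly quoted maximum CFL number for an RKDG scheme that pairs
    degree-k polynomials with a (k+1)th order Runge-Kutta timestepper:
    CFL_max = 1/(2k+1).  An estimate, not a sharp bound."""
    return 1.0 / (2 * poly_degree + 1)

# Tabulate the estimate for the orders discussed in the text.
for k in range(4):
    print(f"degree {k} (order {k + 1}): CFL <= {rkdg_max_cfl(k):.3f}")
```

The degree-0 entry recovers the CFL of 1.0 of a first order finite volume scheme, which is why a PNPM scheme with \(N<M\) buys back a larger timestep.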

At least for now, the mesh structures used in computational astrophysics are simple, though there is also an emerging interest in methods that use Voronoi tessellations and Delaunay triangulations in astrophysics (Springel 2010; Vogelsberger et al. 2012; Florinski et al. 2013; Balsara and Dumbser 2015a, b; Mocz et al. 2014; Xu et al. 2016). For that reason, we will focus this version of the Living Review on structured meshes.

### Numerical methods for higher-order temporal accuracy

Unlike the plethora of numerical methods for achieving higher-order spatial accuracy, the methods for achieving high order of temporal accuracy are somewhat fewer. The most popular methods these days split into two dominant styles. There are the Runge–Kutta methods and the ADER (Arbitrary DERivative in space and time) methods. We briefly introduce them in the two succeeding paragraphs and we will describe them in detail later on in this review.

Runge–Kutta (RK) methods rely on discretizing the PDE in time in a fashion that is quite similar to the temporal discretization of an ordinary differential equation (ODE). The Runge–Kutta discretization of a time-dependent ODE splits the time evolution into a sequence of stages, each of which is only first order in time. The entire sequence of stages does indeed retain the desired order of temporal accuracy. In a similar fashion, the Runge–Kutta discretization of a time-dependent PDE also splits the time evolution into a sequence of stages. Each individual stage is high order accurate in space, but only first order accurate in time. As before, the entire sequence of stages does indeed retain the designed temporal accuracy. One almost always wants each stage to be non-oscillatory or even TVD. The strong-stability preserving (SSP) variant of RK methods guarantees that if each stage is TVD then the entire scheme will be TVD. As a result, these methods are known as SSP-RK methods. Such methods are available for treating hyperbolic systems without stiff source terms (Shu 1988; Shu and Osher 1989; Gottlieb et al. 2001; Spiteri and Ruuth 2002, 2003; Gottlieb 2005; Gottlieb et al. 2011) and also hyperbolic systems with stiff source terms (Pareschi and Russo 2005; Hundsdorfer and Ruuth 2007; Kupka et al. 2012). These methods tend to be popular because each stage is practically identical to the previous stage, resulting in a simple implementation. For that reason, we will describe some of the most popular SSP-RK methods in this review.
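The structure just described is easiest to see in the classic third order SSP-RK scheme of Shu and Osher (1989): each stage is a forward Euler update, and the stages are combined as convex combinations, so each stage inherits the TVD property of the first order update. The sketch below pairs it with a first order upwind spatial operator for scalar advection (the helper names are our own).

```python
import numpy as np

def ssprk3_step(u, dt, L):
    """One step of the third order SSP Runge-Kutta scheme of Shu & Osher
    (1989) for du/dt = L(u).  Every stage is a forward Euler update, and
    the stages are combined convexly, which is what preserves the TVD
    property of the underlying first order scheme."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))

def upwind_advection(u, a=1.0, dx=1.0):
    """First order upwind operator for u_t + a u_x = 0 (a > 0), periodic."""
    return -a * (u - np.roll(u, 1)) / dx

def total_variation(u):
    """Discrete total variation on a periodic mesh."""
    return np.sum(np.abs(np.roll(u, -1) - u))
```

Advecting a square pulse with this combination keeps the total variation non-increasing and conserves the sum of the zone averages, which is the behavior the SSP property guarantees.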

While simplicity is the strong suit of SSP-RK methods, many of the steps in a multi-stage RK method are unnecessary. Consider the example of a three stage RK scheme: it requires the reconstruction to be done thrice and also the Riemann solvers to be invoked thrice. ADER schemes present a better alternative where the reconstruction is only done once and the Riemann solvers are invoked a fewer number of times. As a result, ADER schemes are computationally less expensive. Modern ADER schemes derive from two alternative antecedents. On the one hand, there is the generalized Riemann problem (GRP) (van Leer 1979; Ben-Artzi 1989; LeFloch and Raviart 1988; Bourgeade et al. 1989; Ben-Artzi and Birman 1990; Ben-Artzi and Falcovitz 1984, 2003; Qian et al. 2014; Goetz and Iske 2016; Goetz and Dumbser 2016; Goetz et al. 2017) which seeks to understand the evolution of the Riemann problem when the flow variables on either side of it have linear or quadratic variation in space. One strain of ADER schemes derives from the development of the GRP (Titarev and Toro 2002, 2005; Toro and Titarev 2002; Montecinos et al. 2012; Montecinos and Toro 2014). Another strain of ADER schemes derives from the second order Lax–Wendroff procedure (Lax and Wendroff 1960; Colella 1985) and its higher-order extensions (Harten et al. 1987). Modern ADER schemes that stem from the Lax–Wendroff procedure rely on a very efficient Galerkin projection to iteratively solve the Cauchy problem within each zone (Dumbser et al. 2008; Balsara 2009; Balsara et al. 2013; Dumbser et al. 2013; Balsara and Kim 2016). In other words, given all the spatial moments of the reconstruction within a zone up to some level of spatial accuracy, the ADER predictor step tells us how the solution within that zone will evolve forward in time with a comparable accuracy in space and time. Modern ADER schemes of the latter type are easy to implement and converge very fast.
Indeed, it can be proved that the ADER methods are convergent with or without stiff source terms (Jackson 2017). This makes them much more efficient in comparison to SSP-RK methods (Balsara et al. 2013). For that reason, we will focus on ADER methods in this review.
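The flavor of the Lax–Wendroff (Cauchy–Kovalevskaya) procedure that underlies the ADER predictor can be seen in its simplest setting, linear advection \(u_t + a\,u_x = 0\). There, every time derivative can be traded for a spatial one, \(\partial_t^k u = (-a)^k \partial_x^k u\), so the spatial moments of the reconstruction at \(t^n\) determine a full space-time Taylor expansion within the zone. The following sketch (an illustrative helper of our own; real ADER schemes perform this substitution iteratively via a Galerkin projection for non-linear systems) evaluates that expansion.

```python
from math import factorial

def lw_predictor(spatial_derivs, a, x, t):
    """Space-time Taylor expansion of u within a zone for u_t + a u_x = 0.

    spatial_derivs[k] holds the kth spatial derivative of the reconstructed
    polynomial at the zone center at time t^n.  The Cauchy-Kovalevskaya
    substitution d^k u/dt^k = (-a)^k d^k u/dx^k turns the reconstruction
    into the predictor  u(x, t) ~= sum over j+k < K of
    u^(j+k)(0) * x^j * (-a t)^k / (j! k!)."""
    K = len(spatial_derivs)
    u = 0.0
    for j in range(K):            # spatial part of the expansion
        for k in range(K - j):    # temporal part, via the CK substitution
            u += (spatial_derivs[j + k]
                  * x ** j / factorial(j)
                  * (-a * t) ** k / factorial(k))
    return u
```

For polynomial initial data the predictor is exact, since the exact solution is just the initial polynomial evaluated at \(x - a t\); this is the in-zone space-time evolution that replaces the repeated stages of an SSP-RK scheme.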

### Brief background on hyperbolic systems

In this review, we will be principally interested in the numerical solution of hyperbolic conservation laws of interest to computational astrophysics. We will instantiate our solution methodologies explicitly in two dimensions, because three-dimensional extensions follow trivially. Thus consider the *M*-component conservation law
\[ \frac{\partial U}{\partial t}+\frac{\partial F\left( U \right) }{\partial x}+\frac{\partial G\left( U \right) }{\partial y}=0 \qquad (1) \]
Here U is the vector of “*M*” conserved variables and F(U) and G(U) are the corresponding fluxes in the *x* and *y*-directions. The conservation law is hyperbolic for *x*-directional variations if we can write
\[ \mathrm{A}\equiv \frac{\partial F\left( U \right) }{\partial U}=R\,\Lambda \,L \qquad (2) \]
where A is an \(M\times M\) characteristic matrix, \(\Lambda \) is a diagonal matrix with an ordered set of real eigenvalues and *R* and *L* are a complete set of right and left eigenvectors. For multidimensional problems, we want a similar set of real eigenvalues to exist regardless of the direction in which we analyze the hyperbolic nature of the conservation law. In practical terms, it implies that a similar characteristic decomposition can be made for the matrix \(B\equiv {\partial G\left( U \right) }/{\partial U}\). Equation (1) can be discretized in a finite volume fashion on a mesh. Let the mesh be uniform with zones of size \(\Delta x\) and \(\Delta y\) in the two directions. Let (*i*, *j*) denote the zone centers of the mesh and (\(i+1/2,j)\) and (\(i,j+1/2)\) denote the centers of the *x*- and *y*-faces of the mesh as shown in Fig. 1. Numerically evolving Eq. (1) entails taking a time step of size \(\Delta t\) which takes us from a time \(t^{n}\) to a time \(t^{n+1}=t^{n}+\Delta t\) as
\[ \overline{U}_{i,j}^{n+1}=\overline{U}_{i,j}^{n}-\frac{\Delta t}{\Delta x}\left( \overline{F}_{i+1/2,j}^{n+1/2}-\overline{F}_{i-1/2,j}^{n+1/2} \right) -\frac{\Delta t}{\Delta y}\left( \overline{G}_{i,j+1/2}^{n+1/2}-\overline{G}_{i,j-1/2}^{n+1/2} \right) \qquad (3) \]
In Eq. (3) we define the conserved variable \(\overline{U}_{i,j}^n \) as a volumetric average over a rectangular zone and the numerical fluxes \(\overline{F}_{i+1/2,j}^{n+1/2} \) and \(\overline{G}_{i,j+1/2}^{n+1/2} \) as the space-time averages over the faces of the mesh as follows
\[ \overline{U}_{i,j}^{n}=\frac{1}{\Delta x\,\Delta y}\int _{y_{j-1/2}}^{y_{j+1/2}}\int _{x_{i-1/2}}^{x_{i+1/2}}U\left( x,y,t^{n} \right) dx\,dy;\qquad \overline{F}_{i+1/2,j}^{n+1/2}=\frac{1}{\Delta t\,\Delta y}\int _{t^{n}}^{t^{n+1}}\int _{y_{j-1/2}}^{y_{j+1/2}}F\left( x_{i+1/2},y,t \right) dy\,dt;\qquad \overline{G}_{i,j+1/2}^{n+1/2}=\frac{1}{\Delta t\,\Delta x}\int _{t^{n}}^{t^{n+1}}\int _{x_{i-1/2}}^{x_{i+1/2}}G\left( x,y_{j+1/2},t \right) dx\,dt \qquad (4) \]
Recall that the Lax–Wendroff theorem tells us that consistent and stable schemes that are written in conservation form will indeed propagate shocks at the correct physical speed.
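The conservative update of Eq. (3) has a property worth seeing in code: however the face fluxes are obtained, the flux differences telescope, so the total of the conserved variable over a periodic mesh is preserved to machine precision. The sketch below (our own illustrative function, with periodic wrap-around supplied by `np.roll`) applies one such update.

```python
import numpy as np

def fv_update(U, Fbar, Gbar, dt, dx, dy):
    """One conservative finite volume step in the form of Eq. (3).

    Fbar[i, j] approximates the x-flux on the face (i+1/2, j) and
    Gbar[i, j] the y-flux on the face (i, j+1/2); boundaries are periodic.
    Because each interior flux appears once with a plus sign and once with
    a minus sign, the sum of U over the mesh is exactly preserved."""
    dF = Fbar - np.roll(Fbar, 1, axis=0)   # F_{i+1/2,j} - F_{i-1/2,j}
    dG = Gbar - np.roll(Gbar, 1, axis=1)   # G_{i,j+1/2} - G_{i,j-1/2}
    return U - (dt / dx) * dF - (dt / dy) * dG
```

This telescoping is precisely why, by the Lax–Wendroff theorem, a consistent and stable scheme in this form propagates shocks at the correct physical speed.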

The prior Introduction has introduced us to the next two important ingredients. We were introduced to the importance of monotonicity preserving reconstruction. Extensive information on monotonicity preserving reconstruction is also given in Chapters 2 and 3 of the author’s website. The non-linear hybridization provided by TVD reconstruction is a very good way of getting past the limitations of Godunov’s theorem. (Godunov’s theorem says that the only linear schemes that can be constructed for monotone advection are indeed first order ones.) The same concept is important for linear hyperbolic systems where the system can be decomposed into characteristic variables. When viewed in characteristic variables, the time-evolution of an *M*-component linear hyperbolic system in one dimension is equivalent to the scalar advection of *M* characteristic variables as follows

Here \(\lambda ^{m}\) is the *m*th eigenvalue, \(l^{m}\) is the *m*th eigenvector and \(w^m \) is the *m*th characteristic variable. For non-linear hyperbolic systems, we are not quite so lucky. Equation (5) is not globally true, even in one dimension. However, we will see that the characteristic decomposition that is available within each zone can be used to make a local version of Eq. (5) which holds true for one time step within a zone. We will see that local characteristic decompositions can be used with good effect in numerical schemes. The monotonicity preserving reconstruction produces jumps at zone boundaries. A physically consistent way of resolving the jumps is through the Riemann problem. The Riemann problem simultaneously gives us an upwinded solution that also satisfies an entropy principle. The dissipation provided by the Riemann problem was seen to be essential for treating discontinuities in conservation laws. Extensive references to the Riemann problem were given in the Introduction, and more information is available in Chapters 4, 5 and 6 of the author’s website. A large compendium of Riemann solvers is also described in the book by Toro (2009). Monotonicity preserving reconstruction as well as the Riemann problem will be used as building blocks when constructing successful schemes for the numerical solution of hyperbolic conservation laws.

The present review follows a certain line of development for the solution of hyperbolic conservation laws. The schemes catalogued here are called *higher-order Godunov schemes* and are by far the most popular and well-developed solution methodology for this class of problem. Such schemes are robust and can handle shocks of almost any strength. They are relatively fast and work well in multi-dimensions, making them the workhorse of choice. In their essentials, they do not rely on any adjustable parameters, though various means for improving the solution quality are well known. Because these methods have seen extensive development and use, the instances where they have deficiencies are well-known (Quirk 1994) and good workarounds have been developed. There are, however, interesting alternatives that each have their selling points. *Flux corrected transport* schemes (Boris and Book 1976; Zalesak 1981; Oran and Boris 1987) are an interesting forerunner of higher-order Godunov schemes that have been used with success for reactive flow. *Central schemes* (Swanson and Turkel 1992; Levy et al. 2000; Kurganov and Tadmor 2000; Kurganov et al. 2001) use ideas on upwinding from Godunov schemes but bypass the use of the Riemann solver. While their use of a dual mesh increases the programming complexity, bypassing the Riemann problem may be desirable when the Riemann problem is computationally expensive. *Spectral schemes* (Canuto et al. 2007; Gottlieb and Orszag 1977) offer high accuracies for problems with simple geometries and boundary conditions in the smooth part of the flow. *Compact schemes* (Lele 1992) offer low dispersion error and have proved useful for turbulence research. *Wavelet-based schemes* for solving PDEs rely on the fast wavelet transform (Daubechies 1992). They have also reached a level of maturity where they can adaptively solve certain CFD problems to a desired level of accuracy (Rastigejev and Paolucci 2006; Zikoski 2011). 
Any such list of worthy numerical methods will always be incomplete, so we beg the reader’s indulgence for any omissions.

Many of the popular astrophysics codes have focused on second order of accuracy, though we have often alluded to the advantages of schemes with higher-order accuracy. Explaining and understanding a second order scheme is pedagogically simple. As a result, we will briefly open some of the sections in this review with a second order variant of a Godunov scheme. However, robust higher-order Godunov schemes that go well beyond second order accuracy are now commonplace. For that reason, we also present methods that go beyond second order. Conceptually, the design and implementation of any scheme that goes beyond second order requires one to pay careful attention to the same set of issues. For this reason, we will instantiate the schemes at third order. A student who understands the issues at third order will find it easy to go beyond third order if needed. The relevant literature base for schemes that go beyond second order is also cited in the text.

It is assumed that the reader is familiar with the eigenstructure of the hyperbolic systems being considered. However, Appendix A gives a thorough discussion of the eigenstructure for the Euler equations. Appendix B gives a similarly thorough discussion of the eigenstructure for the MHD equations and points out some of the nuances in understanding the eigenstructure of this much larger and more complicated hyperbolic system. Appendix C briefly mentions the RHD and RMHD equations and gives pointers to the literature. Usually, an exposure to the eigenstructure for one or two hyperbolic systems is sufficient to give the reader the gist of the idea; and the Euler and MHD equations are the two equations we discuss in Appendices A and B. It is also assumed that the reader has some working familiarity with Riemann solvers. However, Appendix D gives a quick introduction to the HLL Riemann solver. Appendix E gives a practical, implementation-oriented sketch of the HLLI Riemann solver (Dumbser and Balsara 2016), which can indeed be applied to any hyperbolic system with exceedingly good results. Because of its extreme simplicity and generality, as well as its ability to give superb results at a very low computational cost, it is hoped that the HLLI Riemann solver will become a workhorse in computational astrophysics. The author’s website also provides codes that encapsulate a wide array of Riemann solvers for Euler and MHD flow and the interested reader can use the codes to intercompare different Riemann solvers and assess their relative strengths and weaknesses.

This review can be read in different ways depending on the reader’s learning goals. If the learning goal is to become familiar with second order schemes, which tend to be simpler, then one can get by with the following sections: 2.1 on TVD reconstruction, 4.1 and 4.2 on second order Runge–Kutta timestepping, 5.1 on second order predictor–corrector schemes and 8 for numerical examples. Of course, one should also read the introductory parts of the sections that lead into the above-mentioned subsections. The reader who wants to make a quick, first pass through this review may well want to take in just the previously mentioned subsections. The rest of Sect. 2 as well as all of Sect. 3 make a thorough study of PPM and WENO reconstruction strategies. The rest of Sects. 4 and 5 give details on making efficient implementations of higher-order Runge–Kutta and ADER timestepping respectively. Discontinuous Galerkin schemes also see extensive use in several computational areas and are, therefore, discussed in Sect. 6. As the emphasis shifts to simulations with greater fidelity, the issues of positivity discussed in Sect. 7 assume greater importance and should be incorporated into codes. Sects. 8 and 9 provide accuracy analysis and the results of several stringent test problems that use the methods described here. Section 10 draws some conclusions.

## Reconstructing the solution for conservation laws—Part I: TVD and PPM reconstruction

At the beginning of a time step, most higher-order Godunov schemes start with a mesh function that is made up of the zone-averaged conserved variables as prescribed on a mesh. The conserved variables are evolved for a timestep using Eq. (3). Taking several timesteps, each of which is bounded by the CFL number, enables us to evolve the conservation law in time. Some higher-order Godunov schemes retain and evolve higher-order moments of the mesh function within each zone (van Leer 1979; Cockburn and Shu 1989; Cockburn et al. 1989; Cockburn and Shu 1998; Lowrie et al. 1995; Cockburn et al. 2000; Qiu and Shu 2004, 2005; Schwartzkopff et al. 2004; Balsara et al. 2007; Dumbser et al. 2008; Xu et al. 2009a, b). For such schemes, known as *discontinuous Galerkin* schemes, the conserved variables, as well as all their higher moments, are evolved in time. However, in the interest of reducing the memory footprint, most schemes simply idealize the solution as a sequence of slabs of fluid within each zone. The process of endowing these slabs with a meaningful sub-structure is known as the *reconstruction* problem. By reconstructing the solution, we hope to resolve the often contradictory requirements of increasing the order of accuracy of the solution that is represented within each zone while simultaneously preventing the solution from developing spurious oscillations in the vicinity of strong discontinuities. Schemes that rely on reconstruction to endow the mesh function with sub-structure have been studied very extensively in the literature. The happy consequence is that they can be served up as a general-purpose building block for numerical treatment of hyperbolic conservation laws.

In this section, we focus on schemes which reconstruct the solution based on the TVD principles; for details, please see Chapter 3 of the author’s website. In the next section, we will focus on schemes that refrain from truncating local extrema when it is justified. The PPM scheme discussed in this section straddles these two design philosophies since the modern versions of PPM indeed do not truncate local extrema.

### TVD reconstruction in conserved, primitive, or characteristic variables

*Piecewise linear (TVD) reconstruction* in the context of linear hyperbolic systems has been explained in detail in Chapter 3 of this author’s website. On a two dimensional mesh, like the one shown in Fig. 1, we want the solution vector in each zone (*i*, *j*) to have a piecewise linear variation in each direction. Consequently, at some time \(t^{n}\) in the zone (*i*, *j*) we start with a zone-averaged solution vector \({\overline{{\mathrm{U}}}}_{i,j}^n \), and such solution vectors are specified in all zones. Obtaining a piecewise linear reconstruction in each zone means that we want the mesh function \(\left\{ {{\overline{{\mathrm{U}}}}_{i,j}^n } \right\} \) to have linear variation as follows

Here, \(\left( {x_i ,y_j } \right) \) is the centroid of zone (*i*, *j*) and \(\left( {\tilde{x},\tilde{y}} \right) \in \left[ {-1/2,1/2} \right] \times \left[ {-1/2,1/2} \right] \) are local coordinates that we define in the same zone. The vectors \(\Delta _x {\overline{\hbox {U}}}_{i,j} \) and \(\Delta _y {\overline{\hbox {U}}}_{i,j} \) hold the piecewise linear variation of the mesh function within the zone (*i*, *j*). The three ways to carry out this piecewise linear reconstruction that are explored in this section are: reconstruction in the conserved variables, reconstruction in the primitive variables, and reconstruction in the characteristic variables. Each has its strengths and uses and we catalogue them below.

Reconstruction can be easily enforced componentwise on the conserved variables. For reasons of simplicity, let \(\overline{{u}}_{i,j}^m \) denote the *m*th component of the vector \({\overline{\hbox {U}}}_{i,j}^n \). (The superscript “*n*” from \({\overline{\hbox {U}}}_{i,j}^n \) is being dropped in \(\overline{{u}}_{i,j}^m \), because the components are only being considered at a given time.) Then piecewise linear *reconstruction of the conserved variables* simply consists of specifying \(\Delta _x \overline{{u}}_{i,j}^m \) and \(\Delta _y \overline{{u}}_{i,j}^m \) in the ensuing formula

When such a specification is provided for all of the components of \({\overline{\hbox {U}}}_{i,j}^n \), we say that the solution has been reconstructed. Let "*Limiter*(a, b)" denote any slope limiter, where "a" and "b" are the left and right-biased slopes. (The box at the end of this subsection provides a smorgasbord of limiters!) The easiest way to achieve our goal is to limit the variation in each of the components of \({\overline{\hbox {U}}}_{i,j}^n \) as follows

This gives us a piecewise linear reconstruction strategy where the limiter has been applied to the conserved variables. This is the fastest form of limiting.
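As a concrete illustration of Eqs. (7) and (8), the following Python fragment implements two commonly used limiters and the componentwise limited slopes in the *x*-direction. The function names are our own and this is only a minimal one-dimensional sketch, not code from any production scheme.

```python
import numpy as np

def minmod(a, b):
    # Returns the smaller-magnitude slope when a and b agree in sign, else zero.
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def mc_limiter(a, b):
    # Monotonized-central limiter: the central slope (a+b)/2, clipped to 2a and 2b;
    # reduces to the centered difference for smooth data, returns zero at extrema.
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum.reduce([np.abs(a + b) / 2.0,
                                                    2.0 * np.abs(a),
                                                    2.0 * np.abs(b)]),
                    0.0)

def limited_slopes(ubar, limiter=mc_limiter):
    """Componentwise limited x-slopes in the spirit of Eq. (8):
    Delta_x u_i = Limiter(u_i - u_{i-1}, u_{i+1} - u_i).
    ubar is a 1-D array of zone averages; the two end zones get zero slope."""
    s = np.zeros_like(ubar)
    s[1:-1] = limiter(ubar[1:-1] - ubar[:-2], ubar[2:] - ubar[1:-1])
    return s
```

For linear data the MC limiter returns the exact slope, while at a local extremum both limiters return zero, which is exactly the clipping behavior discussed later in this review.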

In some problems, like fluid dynamics, a premium is placed on retaining positive densities and pressures in the reconstruction. In such situations, it helps to reconstruct the profile within a zone using the primitive variables. Let \(\hbox {V}_{i,j}^n \) denote the vector of primitive variables that is obtained from the vector of conserved variables \({\overline{\hbox {U}}}_{i,j}^n \). Let \(v_{i,j}^m \) denote the *m*th component of the vector \(\hbox {V}_{i,j}^n \). *Reconstruction of the primitive variables* is then trivially obtained by setting \(\overline{{u}}\rightarrow v\) in Eqs. (7) and (8).

For some problems it is very beneficial to resort to piecewise linear *reconstruction of the characteristic variables*. To see this, notice from Eq. (5) that the system decomposes into a set of scalar advection problems only when the problem is decomposed in characteristic variables. Thus limiting on the characteristic variables is conceptually well justified. The other two forms of limiting, i.e., componentwise limiting on the conserved or primitive variables, are not as well justified. Furthermore, different wave families may have different properties; some may be linearly degenerate (e.g., contact discontinuity in Euler flow) while others may be genuinely non-linear (shocks in Euler flow). In order to devise a good solution strategy, different families of waves may have to be limited slightly differently. For example, the profile of a discontinuity in a linearly degenerate wave family may need to be sharpened. This can be accomplished by using a compressive limiter. Because of their tendency to self-steepen, genuinely non-linear wave families do not need any such improvement; consequently, a less compressive limiter might be appropriate for such wave families. However, it is worth recalling that if the hyperbolic system is non-convex, as is the case for MHD and RMHD, the non-linear wave families might give rise to their own further pathologies (Ibanez et al. 2015).

Reconstructing the characteristic variables is a little more intricate. Notice from Eq. (5) that for a linear problem, the left and right eigenvectors as well as the eigenvalues are constant. As a result, the characteristic equation, \(w_t^m + \lambda ^{m}~w_x^m =0\), is valid at all points along the *x*-axis. For a nonlinear problem, the eigenvalues as well as the eigenvectors depend on the solution \({\overline{\hbox {U}}}_{i,j}^n \) within a zone, and they change as the solution changes in time. However, we can still make a local linearization around a given state, and for zone (*i*, *j*) that state is \({\overline{\hbox {U}}}_{i,j}^n \). Thus the *m*th eigenvalue can be written as \(\lambda ^{m}\left( {{\overline{\hbox {U}}}_{i,j}^n } \right) \) and the *m*th left and right eigenvectors are written as \(l^{m}\left( {{\overline{\hbox {U}}}_{i,j}^n } \right) \) and \(r^{m}\left( {{\overline{\hbox {U}}}_{i,j}^n } \right) \) respectively. The dependence of the eigenvectors on the solution \({\overline{\hbox {U}}}_{i,j}^n \), around which we linearize the problem, has been made explicit. Any solution vector, even the ones from the zones that are to the right or left of the zone (*i*, *j*), can now be projected into the eigenspace that has been formed by the eigenvectors that are defined at the zone of interest. To make it explicit, please realize that the set of left eigenvectors in zone \(\left( {i+1,j} \right) \), given by \(\left\{ {l^{m}\left( {{\overline{\hbox {U}}}_{i+1,j}^n } \right) :m=1,\ldots ,M} \right\} \), will not be orthonormal with the set of right eigenvectors in zone \(\left( {i,j} \right) \), given by \(\left\{ {r^{m}\left( {{\overline{\hbox {U}}}_{i,j}^n } \right) :m=1,\ldots ,M} \right\} \). Consequently, because of the solution-dependence in the eigenvectors, we realize that each zone defines its own local eigenspace. 
We, therefore, want to project the solution vectors from the neighboring zones into the local eigenspace of the zone that we are considering.

Let us detail the *x*-variation; the *y*-variation can be obtained in an analogous fashion. We describe the process of making a characteristic reconstruction in three easy steps. First, for a TVD reconstruction we only need the two neighboring characteristic variables in addition to the central one. So we can use the left eigenvector \(l^{m}\left( {{\overline{\hbox {U}}}_{i,j}^n } \right) \) from the zone (*i*, *j*) to locally project the characteristic variables in the *m*th characteristic field as

The subscripts “*L*”, “*C*” and “*R*” refer to the zone that is left of the central zone, the central zone itself and the zone that is right of the central zone. This has to be done for all the characteristic fields in zone \(\left( {i,j} \right) \). Second, the local *x*-variation in the *m*th characteristic field can now be written as

This should be done for all the characteristic fields in zone \(\left( {i,j} \right) \). Third, the *x*-variation in the mesh function can now be obtained by projecting the variation in the characteristic fields into the local space of right eigenvectors \(r^{m}\left( {{\overline{\hbox {U}}}_{i,j}^n } \right) \) in the zone (*i*, *j*) as follows

Our use of the word “local” in describing Eqs. (9)–(11) is intentional. Notice that despite its conceptual elegance, the characteristic limiting described in Eqs. (9)–(11) involves matrix-vector multiplies in the first and third steps. If the hyperbolic system is large, these matrix operations can add to the computational complexity. In its defense, however, it is worth pointing out that characteristic limiting usually gives better *entropy enforcement* than componentwise limiting on the conserved or primitive variables. In other words, when the initial conditions have arbitrary discontinuities, those discontinuities will be most rapidly resolved into their entropy-satisfying simple wave solutions if characteristic limiting is used. This completes our description of characteristic limiting for TVD schemes.
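The three steps of Eqs. (9)–(11) can be collected into a compact sketch. Here `Lmat` and `Rmat` are assumed to hold the left and right eigenvectors evaluated at the central zone's state (with `Lmat @ Rmat` equal to the identity), and a simple minmod limiter stands in for any of the limiters discussed above; the function names are ours.

```python
import numpy as np

def minmod(a, b):
    # Smaller-magnitude slope when a and b agree in sign, else zero.
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def characteristic_slope(U_L, U_C, U_R, Lmat, Rmat, limiter=minmod):
    """Limited x-slope of the central zone in characteristic variables.
    Lmat: rows are the left eigenvectors l^m(U_C);
    Rmat: columns are the right eigenvectors r^m(U_C), with Lmat @ Rmat = I."""
    # Step 1: project the three zone averages onto the local eigenspace, Eq. (9).
    w_L, w_C, w_R = Lmat @ U_L, Lmat @ U_C, Lmat @ U_R
    # Step 2: limit each characteristic field independently, Eq. (10).
    dw = limiter(w_C - w_L, w_R - w_C)
    # Step 3: project the limited characteristic slopes back, Eq. (11).
    return Rmat @ dw
```

When `Lmat` and `Rmat` are the identity, the routine reduces to componentwise limiting on the conserved variables, which makes the extra cost of the two matrix-vector multiplies explicit.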

It is also useful to point out that the PPM and WENO limiting that follow in the next two subsections require larger stencils. In that case, Eq. (9) can be extended to a stencil that includes more than just the immediately neighboring zones. For example, if we have a five zone stencil centered around zone (*i*, *j*), we would include the characteristic variables \(l^{m}\left( {{\overline{\hbox {U}}}_{i,j}^n } \right) \cdot {\overline{\hbox {U}}}_{i-2,j}^n \) and \(l^{m}\left( {{\overline{\hbox {U}}}_{i,j}^n } \right) \cdot {\overline{\hbox {U}}}_{i+2,j}^n \) in Eq. (9).

It is interesting to ask what sort of results we get with the reconstruction schemes catalogued in this subsection. It is easiest to demonstrate the effect of reconstruction on scalar advection because advection is indeed free of the effects of non-linear terms. To that end, Jiang and Shu (1996) constructed a very useful test problem. It consists of solving the advection equation, \(u_t + u_x = 0\), on the interval \([-1,1]\) in periodic geometry. The advected profile is described by

Here the functions “F” and “G” are given by

The constants in the above equations are given by

The problem has several shapes that are difficult to advect with fidelity. From left to right the shapes consist of: (1) a combination of Gaussians, (2) a square wave, (3) a sharply peaked triangle and (4) a half ellipse. It is a stringent test problem because it has a combination of functions that are not smooth and functions that are smooth but sharply peaked. The Gaussians differ from the triangle in that the Gaussians’ profile actually has an inflection in the second derivative. A good numerical method that can advect information with a high level of fidelity must be able to preserve the specific features of this problem.
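For readers who wish to reproduce this test, the following sketch assembles the advected profile with the constants as given in Jiang and Shu (1996); the function names are our own, and the fragment should be checked against the original paper before use.

```python
import numpy as np

# Constants from Jiang and Shu (1996).
A, Z, DELTA, ALPHA = 0.5, -0.7, 0.005, 10.0
BETA = np.log(2.0) / (36.0 * DELTA**2)

def G(x, beta, z):
    # Gaussian bump centered at z.
    return np.exp(-beta * (x - z)**2)

def F(x, alpha, a):
    # Half ellipse centered at a (clipped at zero outside its support).
    return np.sqrt(np.maximum(1.0 - alpha**2 * (x - a)**2, 0.0))

def jiang_shu_profile(x):
    """Initial condition on [-1, 1]: combination of Gaussians, a square wave,
    a sharply peaked triangle and a half ellipse; zero elsewhere."""
    u = np.zeros_like(x)
    m = (x >= -0.8) & (x <= -0.6)
    u[m] = (G(x[m], BETA, Z - DELTA) + G(x[m], BETA, Z + DELTA)
            + 4.0 * G(x[m], BETA, Z)) / 6.0
    m = (x >= -0.4) & (x <= -0.2)
    u[m] = 1.0
    m = (x >= 0.0) & (x <= 0.2)
    u[m] = 1.0 - np.abs(10.0 * (x[m] - 0.1))
    m = (x >= 0.4) & (x <= 0.6)
    u[m] = (F(x[m], ALPHA, A - DELTA) + F(x[m], ALPHA, A + DELTA)
            + 4.0 * F(x[m], ALPHA, A)) / 6.0
    return u
```

Sampling this profile on 400 zones and advecting it for five traversals of the periodic domain reproduces the setup used for Fig. 2.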

The problem was initialized on a mesh of 400 zones and was run for a simulation time of 10 which corresponds to five traversals around the mesh. In doing so, the features catalogued in the above equations were advected over 2000 mesh points. The problem was run with a CFL number of 0.6. (We will introduce third and fourth order accurate Runge–Kutta time stepping in Sect. 4.) In all instances, we used a Runge–Kutta time stepping scheme with temporal accuracy that matched the spatial accuracy of the reconstruction strategy.

Figure 2a shows the result for the MC limiter, which yields second order accurate spatial reconstruction, along with a temporally second order accurate Runge–Kutta scheme. The solid line shows the analytic solution; the overlaid crosses show the computed result. Despite the MC limiter being one of the better limiters, we see that the resulting profile shows substantial degradation. None of the profiles has been preserved in such a way that its original shape can be distinguished by the end of the simulation. We also see a strong loss of symmetry in the resulting profiles, which we can understand because the scheme that was used was an upwind-biased scheme. The MC limiter is amongst the best general-purpose TVD limiters, yet we see that the quality of the solution is rather poor. This gives us added motivation to study the better reconstruction strategies in the next few sections.

### Going beyond piecewise linear reconstruction: piecewise parabolic (PPM) reconstruction

The desire to improve on piecewise linear reconstruction drove the development of the *piecewise parabolic method* (PPM) (Colella and Woodward 1984; Colella and Sekora 2008; McCorquodale and Colella 2011). An excellent review of PPM has been provided by Woodward (1986) and several stringent test problems for compressible fluid flow have been documented in Woodward and Colella (1984). In this subsection, we document the classical formulation of PPM from Colella and Woodward (1984), while leaving recent extensions (McCorquodale and Colella 2011) for the reader’s self-study. It is also interesting to point out that PPM is a forerunner of a class of schemes (Leonard et al. 1995; Suresh and Huynh 1997) that attempt to produce a higher-order reconstructed profile within a zone and then use neighboring zones to endow the profile with monotonicity preserving properties.

The PPM method is best illustrated by showing how the reconstructed profile evolves in a set of zones as the steps in the PPM reconstruction procedure are applied to an initial mesh function. To that end, the dotted line in Fig. 3a shows the function \(u\left( x \right) = 1.2 + \tanh \left( {\left( {0.65-x} \right) }/{0.3} \right) \) which mimics a shock profile over the domain \(x\in \left[ {-2.5,3.5} \right] \). The domain is spanned by six zones of unit size and the hyperbolic tangent function is shown with the dotted line in Fig. 3a. Let \(\overline{{u}}_{i-2} \), \(\overline{{u}}_{i-1} \), \(\overline{{u}}_i \), \(\overline{{u}}_{i+1} \), \(\overline{{u}}_{i+2} \) and \(\overline{{u}}_{i+3} \) denote the values of the mesh function for the zones that are centered at \(x = -2, -1, 0, 1, 2\) and 3 respectively. We label these zones from "\(i-2\)" to "\(i+3\)", and our goal is to demonstrate the steps in the PPM reconstruction especially as they are applied to zone "*i*" which spans \(x\in [-0.5,0.5]\). The mesh function is shown with dashed lines in Fig. 3a. A third order, i.e., parabolic, reconstruction in the *i*th zone, centered at *x* = 0, is most easily enforced by using Legendre polynomials as follows

The linear and quadratic Legendre polynomials in the above formula provide the two-fold advantages of orthogonality and a zero average value. As a result, the zone average of \(u_i \left( x \right) \) over the *i*th zone is given by \(\overline{{u}}_i \). In PPM, one focuses on the values of the interpolated function at the zone boundaries. Thus for the zone being considered, we have the right and left extrapolated edge values of the parabolic profile defined by \(u_{i;R} \) and \(u_{i;L} \), see Fig. 3a. Along with the mean value \(\overline{{u}}_i \), these three values uniquely specify the parabolic profile in Eq. (12) so that for each zone we have

The finite difference-like form of Eq. (13) is readily apparent. One still has to specify \(u_{i;R} \) and \(u_{i;L}\) at the zone edges with third or better accuracy in order for the reconstruction in Eq. (12) to be third order accurate. We do that next.
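In code, the mapping of Eq. (13) between the zone average, the two edge values and the Legendre coefficients of Eq. (12) might look as follows. The function names are ours, and the sketch assumes the local coordinate convention \(\tilde{x} \in [-1/2, 1/2]\) with Legendre polynomials \(\tilde{x}\) and \(\tilde{x}^2 - 1/12\).

```python
def ppm_parabola(ubar, uL, uR):
    """Coefficients of the parabola
    u(xt) = ubar + ux*xt + uxx*(xt**2 - 1/12),  xt in [-1/2, 1/2],
    fixed by the zone average ubar and the edge values uL, uR (Eq. 13)."""
    ux = uR - uL                       # linear (slope) coefficient
    uxx = 3.0 * (uL + uR - 2.0 * ubar) # quadratic (curvature) coefficient
    return ux, uxx

def ppm_eval(ubar, ux, uxx, xt):
    # Evaluate the reconstructed profile at the local coordinate xt.
    return ubar + ux * xt + uxx * (xt * xt - 1.0 / 12.0)
```

Because the two Legendre polynomials have zero zone average, the reconstructed profile automatically averages to `ubar` over the zone, which is the conservation property the reconstruction must preserve.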

Let us focus on the process of obtaining \(u_{i;R} \). In classical PPM we begin by specifying this value with fourth order accuracy. Thus one defines a cubic polynomial \(q\left( x \right) =q_0 +q_1 x+q_2 x^{2}+q_3 x^{3}\) that spans the domain \(x\in \left[ {-1.5,2.5} \right] \), i.e., the zones from “\(i-1\)” to “\(i+2\)”. The coefficients of the cubic are easily obtained by enforcing the following four consistency conditions

The above approach is known as *reconstruction via primitive function*. It is the standard method for obtaining higher-order reconstructions. The resulting value of \(u_{i;R} \) can be easily obtained from \(q\left( {x=0.5} \right) \) and we write it in two illustrative formats below.

We see that \(\Delta u_{i+1}\) and \(\Delta u_i \) in Eq. (15) are simply the undivided differences. Formulae similar to the above one can be used to obtain \(u_{i-1;R} \), \(u_{i+1;R} \) and so on in the adjoining zones. By setting \(u_{i;L} =u_{i-1;R} \) and so on, we can specify all the parabolae in all the zones. In other words, we assert that the left extrapolated edge value in one zone is equal to the right extrapolated edge value in the zone to the left of it. The right and left extrapolated edge values, \(u_{i;R} \) and \(u_{i;L} \), will then be fourth order accurate. The resulting parabolae are shown by the solid curve in Fig. 3a. Figure 3a shows the parabolae within each zone that have been obtained from the original cubic in Eq. (15) without limiting. These parabolae are only being shown by way of illustration and are never used in classical PPM. We clearly see that the parabolic profiles introduce several new extrema in the reconstructed function, making them an unsuitable starting point for a monotonicity preserving reconstruction. As shown in Fig. 3a, they also do not produce any jumps at the zone boundaries despite the fact that Fig. 3a represents a discontinuous profile. Consequently, a Riemann solver would not generate entropy and help stabilize the reconstructed piecewise parabolic profiles shown by the solid curves in Fig. 3a.

The reconstruction in Fig. 3a introduces too many extrema in several zones, which is unacceptable. The second formula in Eq. (15) suggests a way out. Since \(\Delta u_{i+1}\) and \(\Delta u_i \) are simply undivided differences, we replace them with the slopes coming from an MC limiter. Thus we get

Notice that the MC limiter has the property that when the mesh function is smooth, \(\Delta u_{i+1}\) and \(\Delta u_i\) from Eq. (16) exactly reduce to their centered equivalents in Eq. (15). Consequently, for smooth mesh functions, Eq. (15) will stay fourth order accurate. The slopes from Eq. (16) are used in the second formula in Eq. (15) to yield \(u_{i;R} \). Analogous formulae give all the extrapolated right edge values. The extrapolated right edge values can then be used to obtain the extrapolated left edge values by enforcing \(u_{i;L} =u_{i-1;R} \) and so on at all the zones. The resulting parabolae are shown by the solid curve in Fig. 3b and we can easily see that they have substantially fewer extrema within the zones compared to Fig. 3a. These parabolae, with slopes that have been limited, are used as a starting point for the reconstruction. We see from Fig. 3b that the profiles within each zone do have some extrema. Furthermore, their values do match up at the zone boundaries. These reconstructed profiles would still be unsuitable for use within a higher-order Godunov scheme because the Riemann solver relies on the existence of jumps at zone boundaries to introduce the extra dissipation that is needed at shocks. We clearly see from Fig. 3b that monotonicity should be enforced within each zone and, in doing that, we will also obtain the jumps at the zone boundaries that represent discontinuities.
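The edge-value computation with MC-limited slopes, i.e., the second formula of Eq. (15) with the substitution of Eq. (16), can be sketched as follows; the function names are ours and the fragment handles interior zones of a one-dimensional array only.

```python
import numpy as np

def mc_slope(um, uc, up):
    # Monotonized-central limited slope; reduces to the centered
    # difference (up - um)/2 when the data are smooth and monotone.
    a, b = uc - um, up - uc
    if a * b <= 0.0:
        return 0.0
    return np.sign(a) * min(abs(a + b) / 2.0, 2.0 * abs(a), 2.0 * abs(b))

def ppm_right_edges(ubar):
    """Right-extrapolated edge values
    u_{i;R} = (ubar_i + ubar_{i+1})/2 - (Du_{i+1} - Du_i)/6,
    with the undivided differences Du replaced by MC-limited slopes.
    Only interior zones are filled; the rest are left at zero."""
    n = len(ubar)
    Du = np.zeros(n)
    for i in range(1, n - 1):
        Du[i] = mc_slope(ubar[i - 1], ubar[i], ubar[i + 1])
    uR = np.zeros(n)
    for i in range(1, n - 2):
        uR[i] = 0.5 * (ubar[i] + ubar[i + 1]) - (Du[i + 1] - Du[i]) / 6.0
    return uR
```

For smooth monotone data the limited slopes equal the centered differences, so the edge values retain the fourth order accuracy of the unlimited formula; e.g., for zone averages of \(x^2\) the routine recovers the exact edge values.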

The last step in PPM, therefore, consists of enforcing monotonicity within each zone. For our example profile in Fig. 3b we see that zones “*i*”, “\(i+1\)” and “\(i+2\)” introduce new extrema in the reconstructed profile. The first, and most natural, enforcement of a monotonicity condition indeed consists of requiring that the zone average \(\overline{{u}}_i \) must stay within \(\hbox {min}\left( {u_{i;L} ,u_{i;R} } \right) \) and \(\hbox {max}\left( {u_{i;L} ,u_{i;R} } \right) \). I.e., we require that the parabolic profile should not introduce new extrema. When such a condition is applied to the zone “\(i+2\)”, we see from Fig. 3b that the reconstructed profile would be immediately flattened. This is borne out in Fig. 3c. Thus the first condition for enforcing monotonicity that we apply to all the zones is given by

While the above choice is suggested by Colella and Woodward (1984), this author’s own preference for the above equation would be

We see, however, that the two zones labeled by “*i*” and “\(i+1\)” in Fig. 3b would be unaffected by the above condition. These two zones do have new extrema within them that were not present in the original profile. To diagnose the extrema that are introduced in those two zones, we have to realize that Eq. (12) has its extremum at \(x_e =-{\hat{{u}}_x }/{\left( {~2~\hat{{u}}_{xx} } \right) }\). Thus the reason we see a new extremum in the zone “*i*” which is centered at *x* = 0 stems from the fact that \(-0.5<-{\hat{{u}}_x }/{\left( {~2~\hat{{u}}_{xx} } \right) }<0.5\) for that zone. In other words, if one detects the existence of a new extremum within a zone then one should be willing to reduce the curvature, \(\left| {\hat{{u}}_{xx} } \right| \), of the parabola within that zone. I.e., if \(x_e \) is negative then reducing \(\left| {\hat{{u}}_{xx} } \right| \) without changing the sign of \(\hat{{u}}_{xx} \) will eventually shift the extremum past \(x=-0.5\); if \(x_e \) is positive then reducing \(\left| {\hat{{u}}_{xx} } \right| \) without changing the sign of \(\hat{{u}}_{xx} \) will eventually shift the extremum past \(x=0.5\). For the zone “*i*” under consideration, reducing \(\left| {\hat{{u}}_{xx} } \right| \) will immediately cause the maximum or the minimum of the parabola to lie outside (or at the boundary of) the domain \([-\,0.5,0.5]\). Colella and Woodward (1984) provide closed-form expressions that detect when the curvature needs to be reduced for a parabolic profile. When such a reduction in the curvature is deemed necessary, they also provide explicit formulae for reducing the curvature by modifying one or the other edge extrapolated states. We repeat those formulae here. Consequently, the second condition for enforcing monotonicity, which is also applied to each of the zones, is given by

Once the above two conditions are applied at each of the zones, we see from the solid curve in Fig. 3c that all the zones have a monotone, piecewise parabolic profile. By comparing Fig. 3b, c, one can even observe that the maximum of the *i*th zone has been shifted to its left boundary while the minimum of the (\(i+1)^{\mathrm{th}}\) zone is shifted to its right boundary. For each zone, we can use the extrapolated right and left edge values along with the zone average in Eq. (13) to obtain the final, reconstructed parabolic profile, i.e., Eq. (12).
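The two monotonicity conditions can be collected into a short per-zone routine. The closed-form expressions below follow the classical prescription of Colella and Woodward (1984); the function name is ours and the fragment treats a single zone at a time.

```python
def ppm_limit(ubar, uL, uR):
    """Classical PPM monotonicity enforcement (Colella and Woodward 1984).
    Given the zone average ubar and the extrapolated edge values uL, uR,
    returns the modified pair (uL, uR)."""
    # Condition 1: if ubar does not lie between the edge values,
    # the zone contains an extremum of the mesh function; flatten the profile.
    if (uR - ubar) * (ubar - uL) <= 0.0:
        return ubar, ubar
    D = uR - uL
    u6 = 6.0 * ubar - 3.0 * (uL + uR)  # curvature measure of the parabola
    # Condition 2: if the parabola has an interior extremum, reduce the
    # curvature by moving one edge value so the extremum lands on a boundary.
    if D * u6 > D * D:
        uL = 3.0 * ubar - 2.0 * uR
    elif D * u6 < -D * D:
        uR = 3.0 * ubar - 2.0 * uL
    return uL, uR
```

One can verify that, after either branch of the second condition fires, the extremum \(x_e = -\hat{u}_x / (2\hat{u}_{xx})\) of the modified parabola sits exactly at a zone boundary, which is the geometric picture described above for zones "*i*" and "\(i+1\)".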

Notice that the final piecewise parabolic reconstruction in Fig. 3c has jumps at the zone boundaries that represent a discontinuity. If the discontinuity represents a jump in a linearly degenerate wave field then it is desirable to minimize the jumps, and therefore the dissipation, at zone boundaries. Figure 3d shows the piecewise linear profile that one obtains by applying an MC limiter to the same mesh function. We see that the jumps at zone boundaries in Fig. 3d are much larger than those in Fig. 3c. As a result, PPM represents contact discontinuities in fluid flow much better than its piecewise linear cousins. If the discontinuity is a shock then the Riemann solver will be able to introduce additional dissipation to stabilize the shock. By virtue of its being a monotonicity preserving scheme, PPM does indeed introduce the requisite jumps in flow variables at zone boundaries. However, for a strong shock, the jumps at zone boundaries may be less than the amount that is needed to fully stabilize the shock. As a result, proper treatment of a strong shock in PPM requires a flattener algorithm (Colella and Woodward 1984). By locally detecting the existence of a shock and flattening the flow profiles at the shock, one increases the jumps at zone boundaries and, therefore, the local dissipation. We will learn more about this in the next section. This completes our description of PPM reconstruction.

Figure 2b shows the result from our advection test when classical PPM reconstruction was used. Since PPM nominally produces a third order accurate reconstruction for smooth flow, it was used along with a temporally third order accurate Runge–Kutta time stepping scheme. We clearly see a substantial improvement in Fig. 2b relative to Fig. 2a, which shows that an investment in good reconstruction strategies pays rich dividends. The Gaussian, triangle and elliptical profiles can be clearly distinguished from each other. The top of the ellipse does show some upwind biasing. The square wave is crisply represented with few zones across its boundaries.

## Reconstructing the solution for conservation laws—Part II: WENO reconstruction

The previous section has shown us that reconstructing the solution from a given mesh function is an intricate problem and can have a great deal of bearing on the quality of our numerical solution. In his early paper, van Leer (1979) had anticipated that it might be possible to reconstruct the solution with better than second order accuracy leading to schemes that go beyond second order. Indeed, the PPM scheme of Colella and Woodward (1984) was a step in that direction. The original PPM scheme was restricted to second order accuracy by the use of a monotonicity preserving limiter (Woodward 1986) and subsequent variants of PPM, see McCorquodale and Colella (2011), represent an effort to go beyond second order accuracy. We, therefore, see that the limiters that provide stability at discontinuities by enforcing the TVD property also restrict the accuracy of the numerical method. The limiter simply clips local extrema and, when such a limiter is applied at every time step in a long-running simulation, it degrades the accuracy of the method.

*Essentially non-oscillatory* (ENO) schemes represent an effort to go beyond second order by totally circumventing the harsher effects of TVD limiting. They are based on the realization that in order to avoid clipping extrema and thus degrading the accuracy, one has to accept a reconstruction strategy that may introduce local extrema within a zone as long as no new oscillations are introduced and as long as the solution remains numerically stable. The original ENO schemes were formulated as finite volume methods in Harten et al. (1987) and efficient finite difference versions of the same were provided in Shu and Osher (1988; 1989). The finite difference formulations have the advantage of speed when applied to uniform (or smooth), structured meshes. The finite volume schemes, while somewhat slower, are more versatile and adapt well to a wide variety of structured or unstructured meshes, including adaptive meshes that change to accommodate a changing solution. Unlike TVD and PPM schemes, all of which were formulated in a small number of papers, there have been a few generations of ENO-type schemes where each generation improved on the deficiencies of the previous generation. The *weighted essentially non-oscillatory* (WENO) schemes that see modern use stem from the work of Liu et al. (1994) and Jiang and Shu (1996). WENO schemes are especially suitable for problems that simultaneously contain strong discontinuities along with complex, smooth solution features. Finite difference WENO schemes have been formulated that go up to eleventh order in Balsara and Shu (2000). Efficient finite volume formulations of WENO reconstruction are now available for structured meshes (Balsara 2009; Balsara et al. 2013) and unstructured meshes (Friedrichs 1998; Hu and Shu 1999; Dumbser and Käser 2007; Zhang and Shu 2009). For a superb review of WENO schemes, see Shu (2009).
As with PPM, WENO reconstruction methods work well in strong shock situations if coupled with a good flattener algorithm (Colella and Woodward 1984; Balsara 2012b). We will introduce flatteners in Sect. 7. *Compact WENO* schemes which minimize the dispersion error (Lele 1992) have also been formulated for simulating high Mach number turbulence (Zhang et al. 2008). Shu (2009) has catalogued a plethora of science and engineering problems where WENO schemes have been used with great success.

### Weighted essentially non-oscillatory (WENO) reconstruction in one dimension

We have seen that the minmod slope limiter selects the limited slope either by looking to the left of a zone or by looking to the right of a zone. In other words, we may think of a zone and its neighbor to the left as providing a left-biased stencil and the same zone along with its neighbor to the right as providing a right-biased stencil. Either of the two stencils can, in principle, provide a second order accurate reconstruction in the central zone and the minmod limiter chooses the stencil with the smaller slope. WENO reconstruction takes this concept a lot further by carrying out a very sophisticated analysis of the solution that is available on all the possible stencils. We have also seen that the minmod slope limiter achieves its stability via non-linear hybridization, i.e., the final slope is a strongly non-linear function of the right and left-biased slopes. WENO schemes also achieve their stability via non-linear hybridization, the only difference being that a more refined process is used for achieving the non-linear hybridization. So, to summarize, WENO schemes carry out a much more sophisticated stencil analysis along with a more refined non-linear hybridization.

The easiest way to introduce WENO reconstruction is by relying on a couple of visually motivated examples in one dimension. Thus Fig. 4 introduces the process of reconstructing the Gaussian function \(u\left( x \right) =\hbox { e}^{-(x/4)^{2}}\) while Fig. 5 does the same for the hyperbolic tangent function that was used in the previous section. The dotted lines in Figs. 4 and 5 show the original function. We consider a five zone mesh spanning the domain \(x\in [-\,2.5,2.5]\), where all the zones have unit extent. We label these zones from “\(i-2\)” to “\(i+2\)”, and our goal is to demonstrate the steps in the WENO reconstruction as they are applied to zone “*i*”. We are interested in the third order accurate WENO reconstruction within the central zone which spans \(x\in [-\,0.5,0.5]\). The mesh functions for each of the two profiles being considered are shown in Figs. 4 and 5 with dashed lines. We can see that the Gaussian is represented by a smoothly-varying function on the mesh while the shock is represented as a discontinuity. Let \(\overline{{u}}_{i-2} \), \(\overline{{u}}_{i-1} \), \(\overline{{u}}_i \), \(\overline{{u}}_{i+1} \) and \(\overline{{u}}_{i+2} \) denote the values of the mesh function for the zones that are centered at \(x=-2, -1, 0, 1\) and 2 respectively. A third order, i.e., quadratic, reconstruction in the central zone is most easily enforced by using Legendre polynomials as follows

The central zone is the zone “*i*” and it is taken to be centered at \(x=0\). As with PPM, the linear and quadratic Legendre polynomials in the above formula provide the dual advantages of orthogonality and a zero average value. Higher-order extensions as well as multidimensional extensions of Eq. (19), with the same nice orthogonality property, are given in Balsara et al. (2009; 2013) and Balsara and Kim (2016). The problem of reconstructing the solution consists of arriving at a properly limited specification of \(\hat{{u}}_x \) and \(\hat{{u}}_{xx} \), i.e., the first and second moments of Eq. (19).
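The orthogonality and zero-average properties of this basis are easy to verify numerically. A minimal sketch, using an arbitrary midpoint-rule resolution over the unit zone:

```python
import numpy as np

# Legendre basis on the unit zone [-1/2, 1/2], as used in Eq. (19):
# P1(x) = x and P2(x) = x**2 - 1/12 (the quadratic is shifted so that its
# zone average vanishes).  Check both properties with a midpoint-rule
# quadrature over the zone; the resolution n is arbitrary.
n = 200000
x = (np.arange(n) + 0.5) / n - 0.5   # midpoint quadrature nodes

P1 = x
P2 = x**2 - 1.0 / 12.0

avg_P1 = P1.mean()          # zone average of P1 -> 0
avg_P2 = P2.mean()          # zone average of P2 -> 0
overlap = (P1 * P2).mean()  # orthogonality integral of P1*P2 -> 0
```

Because the linear and quadratic polynomials both average to zero over the zone, the zeroth coefficient of the expansion is exactly the zone average, which is what makes this basis so convenient for finite volume reconstruction.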

Just as piecewise linear TVD reconstruction relied on examining two stencils, each with a width of two zones, piecewise quadratic reconstruction consists of looking at three possible stencils, each of which has a width of three zones. Since we focus on the reconstruction in the central zone of Figs. 4 and 5, we only choose stencils that completely cover the zone of interest. Thus we have a left-biased stencil which spans the interval \(x\in \left[ {-2.5,0.5} \right] \) and depends on the zones \(\left\{ {i-2,i-1,i} \right\} \). The left-biased reconstruction is specified by the quadratic polynomial

The left-biased reconstruction is obtained by enforcing the following consistency conditions (i.e., a reconstruction via primitive function):

In other words, we require that the reconstructed polynomial correctly represents each of the three zone-averaged values in the left-biased stencil. We see that the conditions in Eq. (21) fully determine the coefficients in the left-biased reconstruction in Eq. (20). The solid curve in Fig. 4a shows the left-biased reconstruction for the Gaussian profile. Since the Gaussian is very smooth, we see that the left-biased reconstruction approximates it quite well. Figure 5a shows the same for the shock profile. In this case, the left-biased reconstruction is also non-oscillatory within the zone of interest. Notice too, that some structure is still retained within the central zone despite there being a discontinuity at that zone. We realize, therefore, that if the final reconstruction approximates the reconstructed profile from the left-biased stencil most closely in Fig. 5, we will get a properly upwinded reconstruction that is also non-oscillatory.

The centrally-biased stencil spans the interval \(x\in \left[ {-1.5,1.5} \right] \) and depends on the zones \(\left\{ {i-1,i,i+1} \right\} \). The centrally-biased reconstruction is specified by

The centrally-biased reconstruction is obtained by enforcing the following consistency conditions

The solid curves in Figs. 4b and 5b show the centrally-biased reconstruction for the Gaussian and shock profiles. As before, we see that the Gaussian is approximated very well by the central stencil. In fact, the central stencil is also the one which endows maximal stability and accuracy for smooth flow. As a result, the Gaussian example has shown us that our reconstruction should have the property that it gravitates to the central stencil when the mesh function is smooth. We see, however, that the centrally-biased stencil does a very poor job of reconstructing the shock’s profile. Indeed it introduces a spurious extremum, with the result that its influence on the final reconstruction should be strongly suppressed.

The right-biased stencil spans the interval \(x\in \left[ {-0.5,2.5} \right] \) and depends on the zones \(\left\{ {i,i+1,i+2} \right\} \). The right-biased reconstruction is specified by

The right-biased reconstruction is obtained by enforcing the following consistency conditions

The solid curves in Figs. 4c and 5c show the right-biased reconstruction for the Gaussian and shock profiles. We see that the Gaussian is approximated quite well by the right-biased stencil. Given our comments on stability and accuracy, we realize that it is best to gravitate to the central stencil despite the fact that all the three stencils produce an almost equally good reconstruction for the Gaussian profile. As expected, the shock profile is approximated very poorly by the right-biased stencil. Consequently, for the shock profile, best safety lies in relying predominantly on the left-biased stencil.

The previous three paragraphs have brought us to the realization that the choice of stable stencil depends on analyzing the smoothness properties of the reconstructed polynomial in the zone of interest. In other words, the stencil should be chosen in a solution-dependent fashion. Just as the minmod slope limiter chooses the stencil with the smallest slope, our estimation of the smoothness of each of our three stencils should depend on the moments of the three reconstructed polynomials in Eqs. (20), (22) and (24). Since the quadratic reconstruction can have a non-zero second derivative, the first and second derivatives should both participate equally in constructing a measure of the smoothness of a reconstruction. This prompted Jiang and Shu (1996) to build *smoothness indicators* for the reconstruction. (For a fourth order accurate WENO scheme, the smoothness indicators would include the third derivatives, and so on.) To take the example of the left-biased stencil, we define its smoothness indicator as

Similar definitions for the other two stencils yield

We see from Eq. (26) that “smoothness indicator” might be something of a misnomer since a higher value for the smoothness indicator implies that the stencil under consideration actually produces larger first and second derivatives, i.e., it is less smooth. However, the nomenclature is well-established in the literature and we accept it as it is.
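At third order and with unit zones, the smoothness indicator of a quadratic with Legendre moments \(\hat{u}_x\) and \(\hat{u}_{xx}\) reduces to the well-known closed form \(IS = \hat{u}_x^2 + (13/3)\,\hat{u}_{xx}^2\). A sketch (the helper is ours) that evaluates the defining integral by quadrature, which can be checked against that closed form:

```python
import numpy as np

def smoothness_indicator(u_x, u_xx, n=200000):
    """Jiang-Shu smoothness indicator of the quadratic
    u(x) = u_bar + u_x * x + u_xx * (x**2 - 1/12) on the unit zone
    [-1/2, 1/2]: the squared first and second derivatives, integrated
    over the zone.  With unit zone width no extra powers of dx appear."""
    x = (np.arange(n) + 0.5) / n - 0.5   # midpoint quadrature nodes
    du = u_x + 2.0 * u_xx * x            # first derivative of the quadratic
    d2u = 2.0 * u_xx                     # second derivative (constant)
    return np.mean(du**2 + d2u**2)       # integral over a zone of width 1

# The integral evaluates in closed form to u_x**2 + (13/3) * u_xx**2.
```

Note that a larger \(IS\) means larger derivatives, i.e., a *less* smooth stencil, consistent with the "misnomer" remark above.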

Scanning Fig. 4, we see that all three stencils should have similar smoothness indicators for the Gaussian problem. In such a situation, it is not advisable to pick the single stencil that has the lowest value of the smoothness indicator because even the tiniest changes in the smoothness indicator can cause stencils to discretely switch back and forth from one time step to the next, thereby producing numerically generated oscillations (Rogerson and Meiburg 1990). A better strategy would be to blend (i.e., make a convex combination of) all the available stencils while giving the central stencil a much higher weight when all the smoothness indicators are roughly equal. Figure 5, for the shock problem, shows that the left-biased stencil has much smaller first and second derivatives compared to the centrally-biased and right-biased stencils. Consequently, it should have a much smaller smoothness indicator than the other two stencils. In order to pick out the left-biased stencil for the shock problem, we need to weight the stencils in inverse proportion to their smoothness indicators. Economical strategies that accomplish all this do exist. The *non-linear weights*, \(\bar{{w}}_L \), \(\bar{{w}}_C \) and \(\bar{{w}}_R \) are given by

Here \(\varepsilon \) is a small number, which may be solution-dependent, and is usually set to 10\(^{-12}\). The coefficients \(\gamma _L \), \(\gamma _C \) and \(\gamma _R \) are referred to as the *linear weights*. Once the non-linear weights are obtained from Eq. (28), the final reconstructed profile in Eq. (19) is given by

There is some flexibility in the specification of the linear weights and they are usually specified based on the goals that one wants to accomplish. In the next two paragraphs we catalogue some of the popular choices for the linear weights.

For finite volume WENO schemes, it is best to aim for greater stability. One approach (Friedrichs 1998; Levy et al. 2000; Dumbser and Käser 2007) consists of emphasizing the role of the central stencil by taking \(\gamma _L =\gamma _R =1\) and setting \(\gamma _C \) in the range of 50 to 400 with \(p=4\). Such a scheme is often referred to as a *central WENO* (CWENO) scheme. In Dumbser and Käser (2007) significantly larger weights have been preferred for the central stencil. Increasing \(\gamma _C \) increases the central biasing in the scheme. I.e., for most forms of smooth flow all three stencils will have comparable smoothness indicators and we will mostly rely on the central stencil with its greater stability. The difference between \(\gamma _C \) and \(\gamma _L ,\gamma _R \) is modest compared to the disparity in the smoothness indicators that arises at a discontinuity. As a result, when discontinuities are present in the flow, the smoothness indicator will be vastly smaller for the stencil with the smoothest solution, and in that situation Eq. (28) will select that stencil. Giving the central stencil a very large weight relative to the one-sided stencils does, however, slow the rate at which a more stable one-sided stencil is chosen when the flow is non-smooth. For the extreme flows that are frequently considered in astrophysics, it might, therefore, be safer to not impart too large a weight to the central stencil. Yet another approach by Martin et al. (2006) uses the linear weights to minimize the dispersion error in turbulence calculations. It has also been suggested that *p* should increase with increasing order. It is worth noting that the choices catalogued in this paragraph are most relevant to finite volume WENO schemes (the schemes of interest here), where the resulting reconstruction will only be third order accurate.
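Putting the stencil moments, smoothness indicators and non-linear weights together gives a complete third order reconstruction for one zone. The sketch below is a minimal CWENO-style illustration, assuming unit zones and the \(\gamma_L = \gamma_R = 1\), large \(\gamma_C\) convention described above; it is not taken from any particular production code:

```python
import numpy as np

def cweno3(ubar, i, gamma_c=100.0, eps=1e-12, p=4):
    """Third order CWENO reconstruction of the Legendre moments (u_x, u_xx)
    in zone i from five zone averages, assuming unit zone widths.
    Linear weights: gamma_L = gamma_R = 1 with a large central weight."""
    um2, um1, u0, up1, up2 = (ubar[i-2], ubar[i-1], ubar[i],
                              ubar[i+1], ubar[i+2])
    # Moments of the three candidate quadratics, from the consistency
    # conditions on the left-, centrally- and right-biased stencils.
    cands = np.array([
        [0.5*um2 - 2.0*um1 + 1.5*u0,  0.5*(um2 - 2.0*um1 + u0)],   # left
        [0.5*(up1 - um1),             0.5*(um1 - 2.0*u0 + up1)],   # central
        [-1.5*u0 + 2.0*up1 - 0.5*up2, 0.5*(u0 - 2.0*up1 + up2)],   # right
    ])
    # Smoothness indicators: IS = u_x**2 + (13/3) * u_xx**2 at this order.
    IS = cands[:, 0]**2 + (13.0 / 3.0) * cands[:, 1]**2
    gamma = np.array([1.0, gamma_c, 1.0])   # linear weights
    w = gamma / (IS + eps)**p               # un-normalized non-linear weights
    w /= w.sum()                            # convex combination
    u_x, u_xx = w @ cands
    return u_x, u_xx
```

For a smooth quadratic mesh function all three candidates coincide and the central stencil dominates; for a step placed just to the right of the zone, the weights collapse onto the smooth left-biased stencil and the reconstructed moments are essentially zero, i.e., no spurious oscillation is introduced.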

The reconstructed profile in a finite volume scheme should represent the solution at all points within the zone. In a finite difference scheme, however, we only need to evaluate the solution and its fluxes at given points on the mesh. For finite difference schemes, this opens the door to optimizing the linear weights differently so that accuracy is improved. The choice in Jiang and Shu (1996) and Balsara and Shu (2000) consists of realizing that when the flow is smooth, one can make a convex combination of the three smaller stencils to obtain a larger stencil spanning the zones \(\left\{ {i-2,i-1,i,i+1,i+2} \right\} \). For smooth flow, and with the right convex combination, the larger stencil can provide fifth order accuracy! Optimal, i.e., fifth, order of accuracy is obtained for finite difference formulations by setting \(\gamma _L =0.1\), \(\gamma _C =0.6\) and \(\gamma _R =0.3\) with a choice of \(p=2\). This can be very important for improving the accuracy of rightward propagating waves to fifth order. Mechanistically, when the flow is smooth, we have \(IS_L \cong IS_C \cong IS_R \) in Eq. (28) so that the non-linear weights \(\bar{{w}}_L \), \(\bar{{w}}_C \) and \(\bar{{w}}_R \) equal the optimal linear weights \(\gamma _L \), \(\gamma _C \) and \(\gamma _R \) respectively. When the flow is not smooth, the accuracy improvement is relinquished. Henrick et al. (2006) showed that a mapping function needs to be applied to the non-linear weights in Eq. (28) in order to circumvent a loss of accuracy at critical points, i.e., points where the first or higher derivatives can become zero. (Setting \(\gamma _L =0.3\), \(\gamma _C =0.6\) and \(\gamma _R =0.1\) maximizes the accuracy of the reconstruction at the left zone boundary. This permits leftward propagating waves to propagate with fifth order accuracy, when those waves are smooth.) It is important to point out that this accuracy improvement is most effective for finite difference schemes, which are not the direct point of focus here.

Notice that the final \(\hat{{u}}_x \) and \(\hat{{u}}_{xx} \) that we obtain from Eq. (29) and use in Eq. (19) have a strongly non-linear dependence on the solution. This is how WENO schemes achieve their non-linear hybridization. The solid lines in Figs. 4d and 5d show the reconstructed profiles for the Gaussian and shock profiles. We see that the reconstructed polynomial for the Gaussian follows the original Gaussian function extremely well without clipping the maximum, as well it should for a smooth profile. From Fig. 5d for the shock profile we see that the reconstructed polynomial for the *i*th zone that is centered at \(x=0\) is non-oscillatory, retains a small amount of curvature and is obtained, for the most part, from the left-biased stencil which is the only stable stencil in this problem. Comparing Fig. 5d with the analogous Fig. 3c for PPM, we see that the third order WENO reconstruction indeed produces larger jumps at shocks. Thus the WENO reconstruction indeed does provide good stabilization at shocks while leaving other extrema intact.

While we have catalogued the formulation in physical space, WENO schemes can also be formulated for the characteristic variables. All the early formulations of WENO schemes in Jiang and Shu (1996) and Balsara and Shu (2000) were in characteristic variables. Qiu and Shu (2002) have shown that there might be some advantage in formulating the reconstruction problem in characteristic variables. Equations (9), (10) and (11) have shown us how the reconstruction problem can be cast in characteristic variables. For structured meshes, WENO reconstruction is most easily formulated in modal space and Balsara et al. (2009; 2013) and Balsara and Kim (2016) have provided easily implementable closed form expressions for WENO reconstruction up to very high orders. The choice of non-linear hybridization described in Eq. (28) is not the only one there is. Jiang and Shu (1996), Balsara and Shu (2000), Henrick et al. (2006), Borges et al. (2008), Gerolymos et al. (2009), Hu et al. (2010), Castro et al. (2011) and Fan et al. (2014) have shown that different strategies for evaluating the non-linear weights may be used in one dimension with a resultant increase in the order of accuracy of the scheme. I.e., if one wishes to have a finite difference scheme then reconstruction with *r*th order accurate polynomials can be made to yield a scheme with an overall accuracy of \(2r-1\) for smooth flow. Such schemes appear in the literature under a variety of names, like WENO-M, WENO-Z and WENO-\(\upeta \). A similar increase in accuracy can be achieved for WENO schemes on unstructured meshes (Zhang and Shu 2009). Shi et al. (2002) and Mignone (2014) also discuss the case where the mesh is non-uniform. Divergence-free WENO reconstruction of vector fields is discussed in Balsara (2004, 2009) and Balsara et al. (2013). This completes our description of WENO reconstruction in one dimension.

Figure 6a, b show the results of our one-dimensional advection test when third and fourth order accurate CWENO reconstruction was used along with a Runge–Kutta time stepping of matching accuracy. We see that both schemes reproduce the correct solution very well without any spurious overshoots and undershoots. In Fig. 6a, b we can clearly distinguish the shape of each profile from the other. The Gaussian and triangular profiles show crisp extrema which have not been clipped. The ellipse does not show any flattening at the top of its profile, nor does it show any upwind bias. The profile of the square wave has been preserved very crisply by the fourth order scheme and slightly less so by the third order scheme. The PPM scheme in Fig. 2b represents the square wave profile as sharply as the fourth order CWENO scheme because both schemes start the reconstruction with a fourth order accurate representation of the boundary values. The fourth order CWENO scheme does, however, do a superlative job of preserving the extrema in the Gaussian and triangular profiles. One can use the optimal weights described in Jiang and Shu (1996) and Balsara and Shu (2000) to improve the formal order of accuracy, which also improves the performance of WENO schemes on the present test problem.

### WENO reconstruction in multiple dimensions

Let us now consider the general, higher-order, finite volume reconstruction in a zone (*i*, *j*) of the two-dimensional mesh shown in Fig. 1. Specifically, let us consider third order accurate reconstruction. The desired moments in the zone (*i*, *j*) are then given by

where \(\left( {\tilde{x},\tilde{y}} \right) \) are the zone’s local coordinates, as defined in Eq. (6) and the subscripts “*i*, *j*” in the moments are dropped just to keep the notation simple. We see, therefore, that a third order accurate finite volume reconstruction requires us to build all the moments that the WENO scheme from the previous section could provide us if it is applied in both the *x*- and *y*-directions. Thus the moments \(\hat{{u}}_x \) and \(\hat{{u}}_{xx} \) can be obtained by applying third order WENO reconstruction in the *x*-direction. Likewise, the moments \(\hat{{u}}_y \) and \(\hat{{u}}_{yy} \) can be obtained by applying the same WENO reconstruction in the *y*-direction. However, the moment \(\hat{{u}}_{xy} \), which represents the cross-term in the reconstruction, is still left unspecified. This term is needed for true third order accurate finite volume reconstruction.

One possible way of obtaining all the moments in Eq. (30) might be to use several large multidimensional stencils and to try and obtain all the moments simultaneously from each of the stencils. This is, in fact, the favored method for WENO reconstruction on unstructured meshes, see Friedrichs (1998), Dumbser and Käser (2007). On structured meshes, a more economical method would be to obtain all the one-dimensional moments in the *x*- and *y*-directions using the dimension-by-dimension strategy outlined in the previous paragraph (Balsara et al. 2009). Assuming that this is done, we only need to specify the moment \(\hat{{u}}_{xy} \) in a WENO sense in Eq. (30). For simplicity, assume that all the zones shown in Fig. 1 have unit extent in each direction. Furthermore, assume that the origin is located at the center of zone (*i*, *j*). Choosing our first stencil with zones \(\left\{ {\left( {i,j} \right) ,\left( {i+1,j+1} \right) } \right\} \) and requiring the consistency condition

we get

Similarly, choosing our second, third and fourth stencils with zones \(\left\{ \left( {i,j} \right) ,\left( i+1,j-1 \right) \right\} \), \(\left\{ {\left( {i,j} \right) ,\left( {i-1,j+1} \right) } \right\} \) and \(\left\{ {\left( {i,j} \right) ,\left( {i-1,j-1} \right) } \right\} \) we obtain three other alternative values for the cross term as

Since only the third order term is being reconstructed, we can exclusively focus on the second moments when constructing the smoothness measures. The four smoothness indicators for each of our four stencils are then given by the formula

where we use the four different choices for \(\hat{{u}}_{xy}\) that are given in Eqs. (32) and (33). The four smoothness measures can be used in the usual way, see Eq. (28), to obtain a non-linearly weighted value for \(\hat{{u}}_{xy}\). Equal linear weights are ascribed to the four stencils considered in this section. This completes our description of third order accurate, finite-volume WENO reconstruction on structured meshes. Fourth order accurate, finite-volume, multi-dimensional WENO reconstruction strategies for structured meshes have also been catalogued in Balsara et al. (2009; 2013).
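Assuming unit zones and the Legendre convention of Eq. (30), each two-zone stencil determines a candidate \(\hat{u}_{xy}\) by requiring the two-dimensional polynomial to reproduce the corresponding diagonal zone average. The sketch below writes out the four candidates and blends them with equal linear weights; the smoothness measure below uses only the second moments, as the text prescribes, but its precise normalization is an assumption of this sketch rather than a transcription of Eq. (34):

```python
import numpy as np

def cross_moment(ubar, i, j, u_x, u_xx, u_y, u_yy, eps=1e-12, p=4):
    """Non-linearly weighted cross moment u_xy for zone (i, j) from the four
    two-zone stencils {(i,j),(i+1,j+1)}, {(i,j),(i+1,j-1)}, {(i,j),(i-1,j+1)}
    and {(i,j),(i-1,j-1)}, assuming unit zones and that the one-dimensional
    moments (u_x, u_xx, u_y, u_yy) have already been reconstructed."""
    u0 = ubar[i, j]
    # Each candidate comes from requiring that the 2D polynomial reproduce
    # the zone average of the corresponding diagonal neighbor.
    cands = np.array([
        ubar[i+1, j+1] - u0 - u_x - u_xx - u_y - u_yy,
        u0 + u_x + u_xx - u_y + u_yy - ubar[i+1, j-1],
        u0 - u_x + u_xx + u_y + u_yy - ubar[i-1, j+1],
        ubar[i-1, j-1] - u0 + u_x - u_xx + u_y - u_yy,
    ])
    # Smoothness measure built from the second moments only; the relative
    # normalization of the three terms is an assumption of this sketch.
    IS = u_xx**2 + u_yy**2 + cands**2
    w = 1.0 / (IS + eps)**p        # equal linear weights for all four stencils
    w /= w.sum()
    return float(w @ cands)
```

As a check, for the mesh function whose zone averages come from \(u(x,y) = xy\), all four candidates equal the exact value \(\hat{u}_{xy} = 1\).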

In this section, we have described reconstruction methods that operate on the conserved variables and reconstruct all the cross terms, Eq. (33). It is easiest to present such methods in a pedagogical introduction. However, it is useful to point the reader to approaches that use dimension-by-dimension reconstruction (McCorquodale and Colella 2011; Buchmuller and Helzel 2014; Buchmuller et al. 2016) and methods that transform from conserved to primitive variables before carrying out reconstruction in the primitive variables (McCorquodale and Colella 2011).

The methods from the previous paragraph might be especially valuable for RHD and RMHD because it has been realized that reconstructing the spatial components of the four-velocity can give us a reconstructed four-velocity variable that is manifestly sub-luminal (Komissarov 1999; Aloy et al. 1999; Balsara 2001b; Balsara et al. 2016b; Balsara and Kim 2016). It is difficult to meet this requirement of sub-luminal reconstruction in any other way for highly relativistic flows. For the specific case of RHD and RMHD, the reconstruction of the primitive variables has its deficiencies. Let us explain those deficiencies next. Imagine one zone with \(\hbox {v}_x =0.999\hbox { (i.e., Lorentz factor }\gamma =22.366)\) and a neighboring zone with \(\hbox {v}_x =0.9999\hbox { (i.e., Lorentz factor }\gamma =70.712)\). If we reconstruct the x-velocity, the difference between the zones would only yield an undivided difference \(\Delta \hbox {v}_x =0.0009\). This is a very small difference and is prone to error accumulation. The gradient in the variables would then be much smaller than the variable being reconstructed. This can result in loss of fidelity, since the smallest increase in \(\Delta \hbox {v}_x \) can drive the flow superluminal. It is precisely because velocities in high speed relativistic flow all tend to bunch up at \(\sim \)1 that the velocity becomes a bad variable in which to carry out the reconstruction. Now say that we reconstruct the x-component of the four-velocity \(\gamma \hbox {v}_x \). The same two zones in the above example then have \(\gamma \hbox {v}_x =22.344\) and \(\gamma \hbox {v}_x =70.704\). Clearly, when we reconstruct the slopes for \((\gamma \hbox {v}_x )\), \((\gamma \hbox {v}_y )\) and \((\gamma \hbox {v}_z )\), we can still hope to retain a significant variation in the spatial components of the four-velocity.
Once the reconstruction is done, we can obtain \(\left( {\gamma \hbox {v}_x ,\gamma \hbox {v}_y ,\gamma \hbox {v}_z } \right) ^{T}\) anywhere within the zone. We now show that we can easily retrieve the three-velocity with just a few floating point operations. We define the \(\vartheta \) variable as \(\vartheta \equiv \gamma ^{2}\hbox {v}_x^2 +\gamma ^{2}\hbox {v}_y^2 +\gamma ^{2}\hbox {v}_z^2 ={\mathbf{v}^{2}}/{\left( {1-\mathbf{v}^{2}} \right) }\). This definition now enables us to retrieve the three-velocity as \(\mathbf{v}^{2}=\vartheta /{\left( {1+\vartheta } \right) }\) and the Lorentz factor as \(\gamma =\sqrt{1+\vartheta }\). In Balsara and Kim (2016) it is shown that this idea can even be extended to the space-time predictor step in a time-dependent scheme for RHD or RMHD.

## Evolving conservation laws accurately in time—Part I: Runge–Kutta methods

The previous sections have shown us how to reconstruct the solution vector on a computational mesh. We saw that we could achieve second order accurate reconstruction in space with piecewise linear methods. We could also construct finite volume reconstructions that went beyond second order accuracy in space. Matching these spatial reconstruction techniques with methods that allow us to achieve a corresponding temporal accuracy is the goal of this section and the next. We tackle this section in three easy parts. First, we study the general philosophy and structure of Runge–Kutta time stepping; this is done in Sect. 4.1. Second, we describe how a second order scheme is assembled with Runge–Kutta time stepping; done in Sect. 4.2. Third, we understand the changes that have to be made in going beyond second order; we instantiate them with a third order scheme with Runge–Kutta time stepping. We do this in Sect. 4.3. Runge–Kutta methods are perhaps the simplest way of achieving second and higher orders of temporal accuracy for hyperbolic problems.

### Runge–Kutta time stepping

Runge–Kutta time-discretizations have a logical simplicity which accounts for their great popularity. The Runge–Kutta methods are also referred to as method of lines approaches or semi-discrete approaches because they simplify the process of temporally updating the solution of a PDE to make it look very much like the time update of an ODE system. They are based on the viewpoint that we can write the PDE in Eq. (1) as

Written this way, it has the semblance of an ordinary differential equation (ODE). The method of lines is not strictly speaking a “method” as much as it is a philosophy. It consists of using a suitably accurate approach for solving ODEs to achieve the desired temporal accuracy in Eq. (35). I.e., despite being inspired by ODEs, the method works for PDEs.

Not all second order Runge–Kutta methods have equally desirable attributes, especially as they apply to the TVD property. For example, if each of the stages for the improved Euler approximation is TVD then the final solution at the end of the two stages is also TVD. Unfortunately, the guarantees provided by the improved Euler approximation with respect to the TVD property only extend in their truest sense to scalar conservation laws, not to systems. However, a modified Euler approximation cannot even ensure such a TVD property for scalar conservation laws. (We have to balance this with the reality that the improved and modified Euler approximations produce results of comparable quality in practical problems.) Realize, therefore, that although several strong proofs are available for the stability properties of Runge–Kutta schemes, they are not as ironclad as one might like.

Several authors (Shu and Osher 1988; Shu 1988; Gottlieb and Shu 1998; Spiteri and Ruuth 2002, 2003; Gottlieb et al. 2001; Gottlieb 2005; Gottlieb et al. 2011) have proved theorems showing that the time update in Eq. (35) can be carried out to higher orders of accuracy using a sequence of internal Runge–Kutta stages. Moreover, these time-update schemes have the same TVD property as the improved Euler approximation mentioned above. I.e., if each of the stages of the Runge–Kutta scheme is TVD then the final solution at the end of all the stages is also TVD. Runge–Kutta schemes having this special property are known as strong stability preserving (SSP) Runge–Kutta schemes. The SSP Runge–Kutta scheme at second order is indeed the improved Euler approximation given by

The above Runge–Kutta scheme starts with a mesh function \(\left\{ {\hbox {U}^{n}} \right\} \) that is specified at a time \(t^{n}\) and evolves it via the use of one internal stage \(\left\{ {\hbox {U}^{\left( 1 \right) }} \right\} \) to a mesh function \(\left\{ {\hbox {U}^{n+1}} \right\} \) that is specified at a time \(t^{n+1}=t^{n}+\Delta t\). The third order accurate SSP Runge–Kutta scheme is given by

The above second and third order SSP Runge–Kutta schemes are optimal in the sense that for one-dimensional flow they can support a CFL number of unity and, moreover, it is not possible to arrive at a time-explicit Runge–Kutta scheme of the same order that provides a larger CFL number per stage used in the scheme. For example, Eq. (37) is optimal because it is impossible to find another third order SSP Runge–Kutta scheme that achieves a CFL number greater than one for every three stages used in the scheme. An almost optimal, fourth order accurate SSP Runge–Kutta scheme is given by

For one-dimensional flow, Eq. (38) can support a CFL number of 1.5. Notice that this is a five stage scheme. In contrast, the classical fourth order Runge–Kutta scheme is only a four stage scheme, thus saving the evaluation of one entire stage; but it is not SSP. The Butcher barriers that plague ordinary Runge–Kutta schemes at fifth and higher orders also plague SSP Runge–Kutta schemes at fourth and higher orders. The increasing number of extra stages makes Runge–Kutta schemes progressively less efficient with increasing order. ADER schemes, which we will study in the next section, do not suffer from this deficiency.

Please note that for multidimensional problems, the permitted CFL number is divided by the dimensionality of the problem. Thus the second and third order schemes in Eqs. (36) and (37) only support CFL numbers of 0.5 and 0.333 in two and three dimensions respectively. Please also recall that boundary conditions have to be applied consistently to each of the stages in Eqs. (36)–(38). SSP Runge–Kutta schemes that go beyond fourth order have also been formulated by Spiteri and Ruuth (2002, 2003), but the ones presented here are the workhorses for most practical work.
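The stage structure of the SSP Runge–Kutta schemes in Eqs. (36) and (37) can be sketched in a few lines of Python. Here `L` is a stand-in for the full spatial operator (reconstruction plus Riemann solver); a first order upwind discretization of periodic linear advection is used purely for illustration, and all function names are ours, not part of any particular code.

```python
import numpy as np

def L(u, dx, a=1.0):
    # First order upwind discretization of -a du/dx with periodic
    # boundaries; a stand-in for the full reconstruction-plus-Riemann
    # solver operator discussed in the text.
    return -a * (u - np.roll(u, 1)) / dx

def ssp_rk2(u, dt, dx):
    # Improved Euler / SSP(2,2): each stage is a forward-Euler update
    # followed by a convex combination with the old solution.
    u1 = u + dt * L(u, dx)
    return 0.5 * u + 0.5 * (u1 + dt * L(u1, dx))

def ssp_rk3(u, dt, dx):
    # Optimal SSP(3,3) scheme of Shu and Osher (1988).
    u1 = u + dt * L(u, dx)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1, dx))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2, dx))
```

Because every stage is a convex combination of forward-Euler updates, the TVD property of a single Euler step carries over to the full update, which is the essence of the SSP property described above.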

### Second order accurate Runge–Kutta scheme

Further specification of Runge–Kutta schemes requires us to provide a recipe for obtaining the fluxes at the zone boundaries at any stage of the multi-stage scheme. We start with the mesh function and use the methods from Sect. 2 to obtain the reconstructed profile \(\hbox {U}_{i,j} \left( {\tilde{x},\tilde{y}} \right) \) within a zone. Equations (6) and (30) give us examples of such reconstructions at second and third order. It is traditional in this work to assume that a zone has been mapped to the unit square; Eq. (6) provides an example of how such a linear mapping is carried out. Consequently, \(\left( {\tilde{x},\tilde{y}} \right) \in \left[ {-1/2,1/2} \right] \times \left[ {-1/2,1/2} \right] \) form the local coordinates of each zone (*i*, *j*). Each of the stages of a Runge–Kutta scheme is defined at only one time level. Consequently, observe from Eq. (4) that the time-averaging of the fluxes is not needed. In Eq. (35) we can discretize the spatial parts as

with the facially-averaged fluxes at the upper *x*- and *y*-faces of the zone (*i*, *j*) defined by

Specification of the Runge–Kutta scheme requires specifying the above two integrals at each of the faces of the mesh. We detail the evaluation of the numerical fluxes in the next paragraph.

At second order, we assume that the vector of conserved variables \({\overline{\hbox {U}}}_{i,j} \) as well as its undivided differences, \(\Delta _x {\overline{\hbox {U}}}_{i,j} \) and \(\Delta _y {\overline{\hbox {U}}}_{i,j} \), are available for each zone (*i*, *j*) of the mesh shown in Fig. 7. The left and right states needed for evaluating the Riemann problem at the top *x*-boundary of the zone being considered, i.e., at the \(\left( {i+1/2,j} \right) \) location in Fig. 7, are given by

Likewise, the bottom and top states needed for evaluating the Riemann problem at the \(\left( {i,j+1/2} \right) \) zone-boundary are given by

Figure 7 shows a schematic representation of the four abutting zones \(\left( {i,j} \right) \), \(\left( {i+1,j} \right) \), \(\left( {i,j+1} \right) \) and \(\left( {i+1,j+1} \right) \) and illustrates various aspects of the construction that is catalogued in Eqs. (41) and (42). At second order, the integrals in Eq. (40) are just the values of the upwinded fluxes provided by any Riemann solver that is evaluated at the face centers. This dramatic simplification of the integrals does not carry over to higher orders. Thus, at second order we get

Here \(\hbox {F}_{RS} \) and \(\hbox {G}_{RS} \) denote the Riemann solver, which is being used as a machine that accepts two states as inputs and provides the upwinded flux as an output. This completes our description of the Runge–Kutta method at second order.
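The construction of Eqs. (41) and (43) can be sketched as follows for a scalar row of zone averages in one dimension. The minmod limiter and the Rusanov (local Lax–Friedrichs) flux are stand-ins for whichever limiter and Riemann solver one actually prefers; all function names here are illustrative, not from any particular code.

```python
import numpy as np

def minmod(a, b):
    # Limited undivided difference: picks the smaller slope in
    # magnitude and returns zero at extrema.
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def rusanov_flux(uL, uR, f, s_max):
    # Stand-in "Riemann solver machine": two states in, upwinded
    # flux out; s_max is a bound on the signal speed.
    return 0.5 * (f(uL) + f(uR)) - 0.5 * s_max * (uR - uL)

def second_order_fluxes(ubar, f, s_max):
    # Left/right states at each interior x-face, following Eq. (41):
    #   U_L = Ubar_i + Dx_i/2 ,  U_R = Ubar_{i+1} - Dx_{i+1}/2
    d = np.diff(ubar)                 # one-sided differences
    slope = minmod(d[:-1], d[1:])     # limited slopes of interior zones
    uL = ubar[1:-2] + 0.5 * slope[:-1]
    uR = ubar[2:-1] - 0.5 * slope[1:]
    return rusanov_flux(uL, uR, f, s_max)
```

Note that for linearly varying data the reconstructed left and right states coincide at each face, so the solver adds no numerical dissipation there.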

### Runge–Kutta schemes at higher orders; using third order as an example

When one tries to go beyond second order, Eqs. (39) and (40) continue to be valid. As mentioned in the previous subsection, the difficulties at higher orders all arise from the integrals that have to be evaluated in Eq. (40). In order for the overall scheme to have third or fourth order of accuracy, the integrals in Eq. (40) have to be evaluated with the same order of accuracy. Let us consider the first integral in Eq. (40) which gives us the *x*-flux at the \(\left( {i+1/2,j} \right) \) zone-boundary. To evaluate the integral with third order of accuracy using numerical quadrature, we would need to obtain the numerical flux at two suitably chosen quadrature points on that boundary. (Recall the third order accurate Gauss rule for numerical quadrature.) Each such flux would require an invocation of a Riemann solver, thus requiring two rather expensive solutions of the Riemann problem. At fourth order, one would have three quadrature points, thus requiring us to solve the Riemann problem three times. Clearly, if we continued this line of development, the higher-order spatially-averaged fluxes in Eq. (40) would be very costly to evaluate because each call to the Riemann solver is itself quite expensive. Besides, three-dimensional problems would be costlier yet, since they would have even more quadrature points in each face. Clearly, a more efficient approach would be very desirable.

A more efficient method for evaluating the spatially-averaged fluxes in Eq. (40) was presented in Atkins and Shu (1998), van der Vegt and van der Ven (2002a, b) and Dumbser et al. (2007). The method is called *quadrature free* because it avoids the use of a large number of quadrature points in the flux evaluation. It works for certain very useful classes of Riemann solvers, including the HLL, HLLC, HLLI and linearized Riemann solvers. Efficient third and fourth order approaches, with copious implementation-related details for three-dimensional structured meshes, are documented in Balsara et al. (2009, 2013). Here we present the method for the HLL Riemann solver and focus on third order of accuracy in two dimensions. Say that the conservation law has “*M*” components. Let us start with an extension of Eq. (30), which we write for an “*M*” component vector of conserved variables in the zone \(\left( {i,j} \right) \) as

Equation (44) is referred to as a *modal representation* in space and the vectors \({\overline{\hbox {U}}}_{i,j} \), \({\hat{\hbox {U}}}_{i,j;x} \), \({\hat{\hbox {U}}}_{i,j;y} \), \({\hat{\hbox {U}}}_{i,j;xx} \), \({\hat{\hbox {U}}}_{i,j;yy} \) and \({\hat{\hbox {U}}}_{i,j;xy} \) are called the *modes* of the reconstruction. In Eq. (44) we follow the convention that \(\left( {\tilde{x},\tilde{y}} \right) \) are the local coordinates within the zone \(\left( {i,j} \right) \) and that they are mapped to the unit square \(\left[ {-1/2,1/2} \right] \times \left[ {-1/2,1/2} \right] \). We can use Eq. (44) to evaluate the entire vector of conserved variables at any specific location within the zone of interest. The locations within the zone where the values are evaluated are called *nodes*, and the values themselves are called *nodal* values.
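As an illustration, a modal representation like that of Eq. (44) can be evaluated at any node as below. We assume the conventional choice of basis in which the quadratic terms carry a \(-1/12\) shift so that the zone average of the polynomial remains \({\overline{\hbox {U}}}_{i,j}\); the exact normalization used in Eq. (44) may differ, so this is a sketch under that assumption.

```python
def eval_modes(modes, xt, yt):
    # Nodal value from a third order modal representation on the unit
    # square. The quadratic basis functions carry a -1/12 shift (an
    # assumed, conventional choice) so that the zone average of the
    # polynomial stays equal to ubar.
    ubar, ux, uy, uxx, uyy, uxy = modes
    return (ubar + ux * xt + uy * yt
            + uxx * (xt ** 2 - 1.0 / 12.0)
            + uyy * (yt ** 2 - 1.0 / 12.0)
            + uxy * xt * yt)
```

Averaging this polynomial over the unit square with any quadrature that is exact for quadratics returns the zone average, which is the point of the shifted basis.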

To define an HLL Riemann solver at the *x*-face at \(\left( {i+1/2,j} \right) \), we need to find the extremal wave speeds, \(\hbox {S}_L \) and \(\hbox {S}_R \), flowing in the x-direction at that zone boundary. We can obtain these speeds by evaluating Eq. (44) and its analogue from the zone (*i*+1,*j*) on either side of the center of the *x*-face being considered. To make this concrete, we build the two vectors of conserved variables given by

We then use the left and right boundary values, \(\hbox {U}_{L;~i+1/2,j}^{\left( c \right) } \) and \(\hbox {U}_{R;~i+1/2,j}^{\left( c \right) } \), to obtain \(\hbox {S}_L \) and \(\hbox {S}_R \), i.e., the extremal wave speeds in the HLL Riemann solver. Figure 8 shows a schematic representation of the two abutting zones (*i*,*j*) and (*i*+1,*j*) and illustrates various aspects of the construction that is catalogued here. The HLL Riemann solver at any location on the *x*-face at \(\left( {i+1/2,j} \right) \) is written as

Compare the above equation to the standard form of the HLL flux (see, e.g., Chapter 4 of the author’s website). In principle, \(\hbox {S}_L \) and \(\hbox {S}_R \) can have different values at different points on the face being considered. The important insight from Dumbser et al. (2007) consists of freezing the wave speeds \(\hbox {S}_L \) and \(\hbox {S}_R \). I.e., we freeze our wave model so that the extremal wave speeds all along the face being considered equal those evaluated at the center of the face. Freezing the wave model does not diminish the order of accuracy of the overall scheme. It does make the flux in Eq. (46) linear in terms of the left and right conserved variables, i.e., \(\hbox {U}_{L;~i+1/2,j} \left( {\tilde{y}} \right) \) and \(\hbox {U}_{R;~i+1/2,j} \left( {\tilde{y}} \right) \), as well as linear in terms of the left and right fluxes, i.e., \(\hbox {F}_{L;~i+1/2,j} \left( {\tilde{y}} \right) \) and \(\hbox {F}_{R;~i+1/2,j} \left( {\tilde{y}} \right) \). We show in the next paragraph that this small simplification makes it possible to spatially-average Eq. (46) over the *x*-face of interest.
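A minimal sketch of the HLL flux with frozen wave speeds, for a scalar conserved variable, is given below; the supersonic limits are included for completeness and the function name is ours.

```python
def hll_flux(uL, uR, fL, fR, sL, sR):
    # HLL flux with the extremal wave speeds sL, sR frozen at their
    # face-center values. The expression is linear in (uL, uR) and in
    # (fL, fR), which is what permits the analytic face-averaging
    # described in the text.
    if sL >= 0.0:
        return fL       # entirely right-going fan
    if sR <= 0.0:
        return fR       # entirely left-going fan
    return (sR * fL - sL * fR + sL * sR * (uR - uL)) / (sR - sL)
```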

Notice now that \(\hbox {U}_{L;~i+1/2,j} \left( {\tilde{y}} \right) \) and \(\hbox {U}_{R;~i+1/2,j} \left( {\tilde{y}} \right) \) are available as analytic expressions from Eq. (44) and its counterpart in zone \(\left( {i+1,j} \right) \) as

\(\hbox {U}_{L;~i+1/2,j} \left( {\tilde{y}} \right) \) is evaluated on the left dashed surface shown in Fig. 8. Similarly, using the analogue of Eq. (44) from the zone \(\left( {i+1,j} \right) \), the second expression in Eq. (47) gives us \(\hbox {U}_{R;~i+1/2,j} \left( {\tilde{y}} \right) \). \(\hbox {U}_{R;~i+1/2,j} \left( {\tilde{y}} \right) \) is evaluated on the right dashed surface shown in Fig. 8. Thus the third term on the right hand side of Eq. (46) is analytically integrable over the *x*-face at \(\left( {i+1/2,j} \right) \). To illustrate this explicitly, we have

It is easily seen that the above integral is third order accurate. We now wish to obtain facially integrated versions of the x-flux on either side of the zone boundary being considered. In other words, we wish to obtain third order accurate integrals of the first two terms in Eq. (46). This can be accomplished if we have the x-fluxes at three quadrature points that lie immediately to the left of the *x*-boundary at \(\left( {i+1/2,j} \right) \) in Fig. 8. To be specific, we use Simpson’s rule as our third order accurate quadrature formula. Notice that the flux can only be evaluated at a quadrature point if we have the conserved variables at the same quadrature point. Equation (44) can now be evaluated at three quadrature points that lie immediately to the left of the *x*-face at \(\left( {i+1/2,j} \right) \), as shown in Fig. 8. We then have

The analogue of Eq. (44) in zone (\(i+1,j)\) can also be evaluated at three quadrature points that lie immediately to the right of the *x*-face at \(\left( {i+1/2,j} \right) \), as shown in Fig. 8. We then have

Evaluating the x-fluxes from the three conserved variables in Eq. (49) enables us to write out explicitly the integral of the first term on the right hand side of Eq. (46) as

where the Simpson rule has been used to obtain third order accuracy. Equation (50) also enables us to write out explicitly the integral of the second term on the right hand side of Eq. (46) as

and the above equation is again third order accurate. Equations (48), (51) and (52) can be used to obtain the third order accurate, facially-integrated HLL flux in the *x*-direction, i.e., the very same entity that we are evaluating via the first integral in Eq. (40). A similar construction can be used to make a rapid evaluation of the second integral in Eq. (40), giving us the facially-integrated HLL flux in the *y*-direction. When this is done at all faces, we can obtain a third order accurate representation of the right hand side of Eq. (39). Using this term for each of the three stages in Eq. (37) completes our specification of a third order accurate Runge–Kutta scheme.
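Putting the pieces of this construction together, the face-averaged HLL flux can be assembled as below. Each linear ingredient is Simpson-averaged separately over the face, which is exactly what freezing the wave speeds permits; the profiles `uL_of_y` and `uR_of_y` would come from evaluating Eq. (44) and its neighbor-zone counterpart along the face, and all names here are illustrative.

```python
def simpson_face_average(g):
    # Simpson quadrature over the face with nodes at y = -1/2, 0, 1/2
    # of the unit-square face; exact for quadratics, hence adequate
    # for third order accuracy.
    return (g(-0.5) + 4.0 * g(0.0) + g(0.5)) / 6.0

def hll_face_flux(uL_of_y, uR_of_y, f, sL, sR):
    # Face-averaged HLL flux with frozen wave speeds: average each
    # linear ingredient of the HLL formula separately, then combine.
    FL = simpson_face_average(lambda y: f(uL_of_y(y)))
    FR = simpson_face_average(lambda y: f(uR_of_y(y)))
    dU = simpson_face_average(lambda y: uR_of_y(y) - uL_of_y(y))
    if sL >= 0.0:
        return FL
    if sR <= 0.0:
        return FR
    return (sR * FL - sL * FR + sL * sR * dU) / (sR - sL)
```

Only six flux evaluations and no extra Riemann-solver calls are needed per face, which is the economy of the quadrature free approach.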

## Evolving conservation laws accurately in time—Part II: predictor–corrector schemes

Despite their desirable simplicity, several of the tasks in a Runge–Kutta scheme have to be repeated at each internal stage. This increases the computational cost. Predictor–corrector schemes avoid some of this duplication of effort. Section 5.1 introduces predictor–corrector methods at second order. They yield the fastest schemes at second order and they also lay the foundation for ADER schemes. The formulation of higher-order ADER schemes is difficult to understand. For this reason, we do it in two easier stages in the next two subsections. Section 5.2 introduces a very simple formulation of ADER schemes in one dimension at third order. Such a formulation is analytically tractable with a little bit of basic calculus. Section 5.3 introduces ADER methods in multidimensions, casting them in the role of higher-order extensions of predictor–corrector type methods.

### Second-order accurate predictor–corrector schemes

A predictor–corrector scheme is made of two steps—the predictor step and the corrector step. In the predictor step, we construct the spatial variation, i.e., the undivided differences, within a zone and use it to obtain a measure of the time rate of change of the conserved variables within that zone. Thus starting with the conserved variables \({\overline{\hbox {U}}}_{i,j}^n \) in each of the zones \(\left( {i,j} \right) \) at time \(t^{n}\), we use a slope limiter to obtain \(\Delta _x {\overline{\hbox {U}}}_{i,j} \) and \(\Delta _y {\overline{\hbox {U}}}_{i,j} \) in Eq. (6). The predictor step then consists of using the variation within the zone to obtain a measure of \(\left( {{\partial \hbox {U}}/{\partial t}} \right) _{i,j}^n \). There are various ways of doing this. Here we present a strategy that will prepare us for our study of ADER schemes in Sect. 5.3. Thus consider the *nodal points*
\(\left( {i+1/2,j} \right) \), \(\left( {i-1/2,j} \right) \), \(\left( {i,j+1/2} \right) \) and \(\left( {i,j-1/2} \right) \)
*within* the zone \(\left( {i,j} \right) \) that is under consideration. For example, the nodal points \(\left( {i+1/2,j} \right) \) and \(\left( {i,j+1/2} \right) \) associated with zone \(\left( {i,j} \right) \) are shown in Fig. 7. We obtain the conserved variables at those points using just the values and slopes that are internal to the zone \(\left( {i,j} \right) \). The variables \({\overline{\hbox {U}}}_{i,j}^n \), \(\Delta _x {\overline{\hbox {U}}}_{i,j} \) and \(\Delta _y {\overline{\hbox {U}}}_{i,j} \) can be thought of as the *modes* or the *modal values* of the solution within the zone \(\left( {i,j} \right) \). We therefore use the modal values to obtain the *nodal values* as follows:

The values \(\hbox {U}_{i+1/2,j}^n \), \(\hbox {U}_{i-1/2,j}^n \), \(\hbox {U}_{i,j+1/2}^n \) and \(\hbox {U}_{i,j-1/2}^n \) can be used to make an “in the small” approximation of \(\left( {{\partial \hbox {U}}/{\partial t}} \right) _{i,j}^n \)as follows

The above step can be carried out locally within all the zones, permitting us to predict the value of the conserved variables, not just in space, but also in time. As long as the time over which we seek to predict the values is less than the CFL limit on the time step, our predicted values will be accurate and stable.
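The “in the small” predictor just described can be sketched for a single zone as follows; `F` and `G` are the physical flux functions in the x- and y-directions, and all names here are ours.

```python
def predicted_time_derivative(ubar, dxu, dyu, F, G, dx, dy):
    # "In the small" estimate of dU/dt inside one zone: the nodal
    # values at the four face centers come from the zone's own modes,
    # so no neighbor data and no Riemann solver are needed.
    u_xp = ubar + 0.5 * dxu    # node (i+1/2, j)
    u_xm = ubar - 0.5 * dxu    # node (i-1/2, j)
    u_yp = ubar + 0.5 * dyu    # node (i, j+1/2)
    u_ym = ubar - 0.5 * dyu    # node (i, j-1/2)
    return (-(F(u_xp) - F(u_xm)) / dx
            - (G(u_yp) - G(u_ym)) / dy)
```

For the linear fluxes \(F=a\,U\) and \(G=b\,U\) this reduces to \(-a\,\Delta_x \overline{U}/\Delta x - b\,\Delta_y \overline{U}/\Delta y\), as one would expect from the advection equation.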

The corrector part then consists of using \(\left( {{\partial \hbox {U}}/{\partial t}} \right) _{i,j}^n \) and the undivided differences to obtain space- and time-centered values for the conserved variables on either side of each zone boundary. Thus at the zone boundary \(\left( {i+1/2,j} \right) \) we obtain the left and right states given by:

Likewise, at the zone boundary \(\left( {i,j+1/2} \right) \) we obtain the top and bottom states given by

We can think of Eqs. (55) and (56) as endowing time evolution to the nodal values shown in Fig. 7. The final update step can now be written as

The present scheme is stable up to a CFL number of 0.5 in two dimensions. (In three dimensions, the limiting CFL number becomes 1/3.) While this CFL number is less than the CFL number of some of the schemes which include a multidimensional wave model (Colella 1990; Saltzman 1994; LeVeque 1997; Abgrall 1994a, b; Balsara 2010, 2012a), its chief advantages are its simplicity, ease of implementation and great speed. Notice that, unlike the second order Runge–Kutta schemes, the present scheme only invokes the limiters and the Riemann solver once per time step. As a result, it is also faster and slightly less diffusive than the second order Runge–Kutta scheme in certain instances.
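The corrector states at an x-face can then be sketched as below; each zone carries its average, its undivided x-difference and the predicted time derivative from the predictor step, and only one Riemann-solver call per face per time step is needed afterwards. The names are illustrative.

```python
def corrector_states_x(zoneL, zoneR, dt):
    # Space- and time-centered states at the face between two zones.
    # Each zone is a tuple (ubar, dxu, dudt): zone average, limited
    # undivided x-difference, and predicted time derivative. The
    # half-timestep term centers the states at t^{n+1/2}.
    ubarL, dxuL, dudtL = zoneL
    ubarR, dxuR, dudtR = zoneR
    uL = ubarL + 0.5 * dxuL + 0.5 * dt * dudtL
    uR = ubarR - 0.5 * dxuR + 0.5 * dt * dudtR
    return uL, uR
```

Feeding `uL` and `uR` to any Riemann solver gives a flux that is centered in both space and time, which is what delivers second order accuracy with a single Riemann-solver invocation per face.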

### A very simple one-dimensional ADER scheme at third order

ADER stands for Arbitrary DERivative in space and time. Only the ADER predictor step is pedagogically tricky, so in this subsection we restrict attention only to the ADER predictor step. The goal of this subsection is to make the ADER predictor step accessible to the reader in its simplest setting. Let us consider a very simple ADER scheme for the one-dimensional conservation law \(\partial _t \hbox {U}+\partial _x \hbox {F}=0\). We can even take “U” to be a scalar for the sake of simplicity, though the logic in this subsection works even if “U” is a solution vector. Let the one dimensional mesh have zones of size \(\Delta x\) and a timestep of size \(\Delta t\). We wish to evolve the solution from time \(t^{n}\) to a time \(t^{n+1}=t^{n}+\Delta t\). For each zone with zone-center \(x_i \) we can define a local and normalized spatial coordinate given by \(\tilde{x}={\left( {x-x_i } \right) }/{\Delta x}\) with a local time coordinate given by \(\tilde{t}={\left( {t-t^{n}} \right) }/{\Delta t}\). Consequently, in terms of the normalized coordinates, the PDE can be written as

For the ADER predictor step we focus exclusively on the solution within the *i*th zone. We assume that third order accurate spatial reconstruction has been carried out so that we start our ADER scheme with

In the above equation, the mean value \(\overline{{u}}_i \) in the *i*th zone is given by the time-update from the previous timestep; however, the modes \(\hat{{u}}_{i,x} \) and \(\hat{{u}}_{i,xx} \) are obtained by the third order accurate spatial reconstruction. The goal of the ADER predictor step is to predict the solution *within* the *i*th zone for all space-time points given by \(\left( {\tilde{x},\tilde{t}} \right) \in \left[ {-1/2,1/2} \right] \times \left[ {0,1} \right] \) in a fashion that is consistent with the governing dynamical equation given by Eq. (58). We want this time evolution to be third order accurate in space-time so that we want

Equation (60) identifies a set of basis functions given by

Associated with this basis set, we have a set of modes given by \(\left\{ \overline{{u}}_i ~,~\hat{{u}}_{i,x} ~,~\hat{{u}}_{i,xx},~\hat{{u}}_{i,t} ~,~\hat{{u}}_{i,tt} ~,~\hat{{u}}_{i,xt} \right\} \). The first three basis functions in Eq. (61) are purely spatial, while the next three carry the temporal evolution. Realize, therefore, that the ADER predictor step that we seek should be a method for starting with Eq. (59) and yielding the coefficients \(\hat{{u}}_{i,t} \), \(\hat{{u}}_{i,tt} \) and \(\hat{{u}}_{i,xt} \) in Eq. (60) in a fashion that is optimally consistent with the governing Eq. (58). We will devise an iterative strategy to achieve this convergence; the iterations are known to converge very fast.

We start the iterative solution process with \(\hat{{u}}_{i,t} =0\), \(\hat{{u}}_{i,tt} =0\) and \(\hat{{u}}_{i,xt} =0\). Let us first realize that the six coefficients in Eq. (60) constitute six modal coefficients. If we were to assign six meaningful sets of numbers to these coefficients, we would be specifying the entire space-time evolution of \(u_i \left( {\tilde{x},\tilde{t}} \right) \) within the *i*th zone. Now please look at Fig. 9 and observe that it has six nodal points in space-time. Three of these six nodal points have been specified at \(\tilde{t}=0\). We assert that the specification of the six modal coefficients in Eq. (60) would be completely equivalent to specifying the six nodal values in space-time as shown in Fig. 9. Realize from Eq. (58) that it is not adequate to specify only the modes in Eq. (60). Because the flux is also involved in Eq. (58), we should also specify the flux \(f_i \left( {\tilde{x},\tilde{t}} \right) \) within the *i*th zone. The flux should be obtained with comparable space-time accuracy so that we write it as

The flux coefficients in Eq. (62) should be consistent with the coefficients of the solution in Eq. (60) in a way that can only be arbitrated by the governing Eq. (58). This can be accomplished in a fashion that we will soon specify. In the rest of this paragraph, we show how the flux coefficients \(\hat{{f}}_i \), \(\hat{{f}}_{i,x} \) and \(\hat{{f}}_{i,xx} \) can be obtained in a fashion that is consistent with the coefficients \(\overline{{u}}_i \), \(\hat{{u}}_{i,x} \) and \(\hat{{u}}_{i,xx} \) at \(\tilde{t}=0\). To see how this is done, please focus on the first three nodal points in Fig. 9. These three nodal points are given by \(\left( {\tilde{x}_1 ,\tilde{t}_1 } \right) =\left( {0,0} \right) \), \(\left( {\tilde{x}_2 ,\tilde{t}_2 } \right) =\left( {1/2,0} \right) \) and \(\left( {\tilde{x}_3 ,\tilde{t}_3 } \right) =\left( {-1/2,0} \right) \). We evaluate the solution at these nodal points so that we can define nodal values of the solution within the zone being considered as \(\tilde{u}^{1}=u_i \left( {\tilde{x}_1 ,\tilde{t}_1 } \right) \), \(\tilde{u}^{2}=u_i \left( {\tilde{x}_2 ,\tilde{t}_2 } \right) \) and \(\tilde{u}^{3}=u_i \left( {\tilde{x}_3 ,\tilde{t}_3 } \right) \). This is done by evaluating Eq. (59). Using these nodal values of the solution, we can evaluate nodal values of the fluxes within the zone being considered as \(\tilde{f}^{1}=f\left( {\tilde{u}^{1}} \right) \), \(\tilde{f}^{2}=f\left( {\tilde{u}^{2}} \right) \) and \(\tilde{f}^{3}=f\left( {\tilde{u}^{3}} \right) \). We wish to obtain the first three coefficients in Eq. (62). With these three nodal values of the fluxes in hand, we can specify the first three modal coefficients for the fluxes in the *i*th zone by asserting a system of three equations that is given by

On inverting the system, the result is

This completes the process of initializing the flux coefficients in Eq. (62) at \(\tilde{t}=0\). Figure 10 shows us this initialization step in the form of a flowchart. This paragraph has also given us our first exposure to the process of transcribing from modal values to nodal values for the solution; using nodal solution values to evaluate nodal values for the fluxes; then using those nodal values for the fluxes to obtain the corresponding modal coefficients for the fluxes.
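The transcription between nodal and modal flux values just described can be sketched for the quadratic basis \(\{1, \tilde{x}, \tilde{x}^2-1/12\}\) with nodes at \(\tilde{x}=0, \pm 1/2\). This basis choice is an assumption on our part (it preserves the zone average, matching the convention used for Eq. (59)); the inverted system below should be checked against the basis actually used.

```python
def nodal_to_modal_1d(f1, f2, f3):
    # Invert the 3x3 nodal system for the assumed quadratic basis
    # {1, x, x^2 - 1/12} with nodes x1 = 0, x2 = +1/2, x3 = -1/2.
    fxx = 2.0 * (f2 + f3 - 2.0 * f1)   # curvature mode
    fx = f2 - f3                       # slope mode
    f0 = f1 + fxx / 12.0               # mean mode (zone average)
    return f0, fx, fxx

def modal_to_nodal_1d(f0, fx, fxx, x):
    # Evaluate the modal expansion at a node; the inverse operation.
    return f0 + fx * x + fxx * (x * x - 1.0 / 12.0)
```

The pair of routines implements exactly the modal-to-nodal and nodal-to-modal transcriptions that the flowchart of Fig. 10 cycles through.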

We start the iterative process by setting \(\hat{{u}}_{i,t} =0\), \(\hat{{u}}_{i,tt} =0\) and \(\hat{{u}}_{i,xt} =0\). Correspondingly, we set \(\hat{{f}}_{i,t} =0\), \(\hat{{f}}_{i,tt} =0\) and \(\hat{{f}}_{i,xt} =0\). These time-dependent coefficients can only be set by appeal to the dynamics, i.e., by appealing to the governing Eq. (58). We enforce satisfaction of the governing equation via a Galerkin projection over the space-time element shown in Fig. 9. The test functions that we use are identical to the trial functions in Eq. (61). The projection can be explicitly written as

Notice that only the three time-dependent test functions participate in the Galerkin projection in Eq. (65) because only those three test functions give us the time-dependent dynamics. Operationally, one substitutes \(u_i \left( {\tilde{x},\tilde{t}} \right) \) from Eq. (60) in Eq. (65) to obtain \({\partial u_i \left( {\tilde{x},\tilde{t}} \right) }/{\partial \tilde{t}}\). Likewise, again operationally, one substitutes \(f_i \left( {\tilde{x},\tilde{t}} \right) \) from Eq. (62) in Eq. (65) to obtain \({\partial f_i \left( {\tilde{x},\tilde{t}} \right) }/{\partial \tilde{x}}\). Then one takes one of the three time-evolutionary bases from Eq. (61) and one carries out the integration in Eq. (65) using a computer algebra system. It is not difficult to verify that the resulting equations give

We see, therefore, that if we have a set of coefficients for \(\hat{{f}}_{i,x} \), \(\hat{{f}}_{i,xt} \) and \(\hat{{f}}_{i,xx} \) during any stage in the iterative process then Eq. (66) gives an improved set of time-dependent coefficients \(\hat{{u}}_{i,t} \), \(\hat{{u}}_{i,tt} \) and \(\hat{{u}}_{i,xt} \). To complete this iterative strategy, we have only to find a way to take these improved time-dependent coefficients and use them to build an improved set of flux coefficients in Eq. (62). We do that next.

Notice from Fig. 9 that we have three more quadrature points in space-time given by \(\left( {\tilde{x}_4 ,\tilde{t}_4 } \right) =\left( {1/2,1/2} \right) \), \(\left( {\tilde{x}_5 ,\tilde{t}_5 } \right) =\left( {{-1}/2,1/2} \right) \) and \(\left( {\tilde{x}_6 ,\tilde{t}_6 } \right) =\left( {0,1} \right) \). Immediately after the first iteration, all the coefficients in Eq. (60) will indeed be non-zero for most typical variations in the initial conditions. We evaluate the solution at the fourth, fifth and sixth nodal points in Fig. 9 so that we can define nodal values of the solution within the space-time element being considered as \(\tilde{u}^{4}=u_i \left( {\tilde{x}_4 ,\tilde{t}_4 } \right) \), \(\tilde{u}^{5}=u_i \left( {\tilde{x}_5 ,\tilde{t}_5 } \right) \) and \(\tilde{u}^{6}=u_i \left( {\tilde{x}_6 ,\tilde{t}_6 } \right) \). Using these nodal values of the solution, we can evaluate nodal values of the fluxes within the zone being considered as \(\tilde{f}^{4}=f\left( {\tilde{u}^{4}} \right) \), \(\tilde{f}^{5}=f\left( {\tilde{u}^{5}} \right) \) and \(\tilde{f}^{6}=f\left( {\tilde{u}^{6}} \right) \). To close the loop, we should relate these nodal values of the fluxes to the modal coefficients for the fluxes in Eq. (62). Realize too that the first three nodal values for the fluxes, which were evaluated at \(\tilde{t}=0\), have not changed. We can write a system of three equations that is analogous to Eq. (64) for the fourth, fifth and sixth nodes. On inverting the system, the result is

This shows us how to start with an improved solution from Eq. (66); obtain from it an improved set of fluxes at the nodal points in Fig. 9 and to then use those improved nodal fluxes to obtain improved modal fluxes from Eq. (67). Once that is done, we can return to Eq. (66) and the iteration resumes. Figure 11 shows us this iteration in the ADER predictor step in the form of a flowchart.
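For the linear flux \(f(u)=a\,u\) the whole iteration collapses to a small fixed-point map, because the Galerkin projection then reduces to matching the polynomial coefficients of \(\partial_{\tilde{t}} u = -({\Delta t}/{\Delta x})\,\partial_{\tilde{x}} f\) term by term. The sketch below assumes the space-time basis \(\{1,\tilde{x},\tilde{x}^2-1/12,\tilde{t},\tilde{t}^2,\tilde{x}\tilde{t}\}\), which may differ in normalization from the one actually used; it is only meant to show the structure of the iteration and its rapid convergence. Consistent with the \((N-1)\)-step convergence result for this third order case, the map reaches its fixed point after two iterations.

```python
def ader_predictor_linear(u_x, u_xx, a, r, n_iter=2):
    # Iterative ADER predictor for the 1D linear flux f(u) = a*u on
    # the normalized space-time element, with r = dt/dx. For this
    # linear flux the Galerkin projection reduces to matching the
    # polynomial coefficients of du/dt = -r df/dx.
    u_t, u_tt, u_xt = 0.0, 0.0, 0.0      # initial guess: no time variation
    for _ in range(n_iter):
        # Flux modes track the solution modes exactly for f = a*u.
        f_x, f_xx, f_xt = a * u_x, a * u_xx, a * u_xt
        u_t = -r * f_x                   # matches the constant term
        u_xt = -2.0 * r * f_xx           # matches the term linear in x
        u_tt = -0.5 * r * f_xt           # matches the term linear in t
    return u_t, u_tt, u_xt
```

After two iterations the temporal modes agree with the exact Taylor coefficients of the advected initial polynomial, e.g., \(\hat{u}_{tt} = r^2 a^2 \hat{u}_{xx}\), and further iterations leave them unchanged.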

The simple demonstration in this subsection is just meant to make the iterative ADER predictor step very accessible and easy to understand. Further information for multidimensional structured meshes is available in Balsara et al. (2009, 2013) and Dumbser et al. (2013). The iteration described in the previous two paragraphs converges very fast: for a temporally *N*th order scheme, one only needs \((\hbox {N}-1)\) steps to converge to the level of discretization error. Jackson (2017) has even provided a theoretical proof of this rapid convergence. The next subsection provides all the details for higher-order ADER schemes on multidimensional structured meshes.

### ADER time stepping for second- and higher-order time accuracy

In the previous subsections, we saw that predictor–corrector schemes at second order can be faster than their Runge–Kutta counterparts at the same order. This efficiency is due to the fact that each predictor–corrector time step only needs one reconstruction step and one solution of the Riemann problem. The ADER schemes presented here are the more efficient counterparts of the Runge–Kutta schemes at second and higher orders. The ADER methodology is a time update strategy in the same way that Runge–Kutta schemes give us a method for evolving the PDE in time. Just like Runge–Kutta schemes, ADER schemes can be used for reconstruction-based schemes as well as discontinuous Galerkin schemes. Thus one can have ADER-WENO or ADER-DG schemes, analogous to the RK-WENO and RKDG schemes with Runge–Kutta time stepping, respectively. ADER schemes represent a very economical method for arranging the time update and recent head-to-head comparisons have shown ADER time stepping to be faster than Runge–Kutta time stepping by a factor of up to two (Balsara et al. 2013) for the same order of accuracy.

Just like WENO schemes, ADER schemes have seen a few generations of development in the literature. Methods leading up to ADER schemes have been presented by several authors (van Leer 1979; Ben-Artzi and Falcovitz 1984). The above authors focused on the *generalized Riemann problem*. It consists of realizing that any second order scheme will have piecewise linear variations in the zones to the left and right of a zone boundary. As a result, we not only have a jump at the zone boundary but also linear variations in the fluid quantities on either side of the jump. Consequently, the Riemann problem will no longer be a similarity solution in space-time. Instead, the wave structures in the Riemann problem will curve in response to the spatially varying states that they propagate into. Titarev and Toro (2002, 2005) and Toro and Titarev (2002) found a method for extending the generalized Riemann problem to higher orders and coined the ADER acronym. As a result, the left and right states could have any sort of polynomial variation at the zone boundary. Modern ADER schemes have been formulated more in the style of predictor–corrector schemes (Dumbser et al. 2008; Balsara et al. 2009, 2013). The predictor and corrector steps are indeed higher-order extensions of the second order predictor–corrector scheme described in Sect. 5.1. The next two subsections describe the ADER predictor and corrector steps independently. In this review we describe a variant of ADER schemes, called ADER-CG, which is suited for problems with non-stiff source terms. The “CG” stands for continuous Galerkin and refers to the fact that the solution cannot take on abrupt temporal changes during a time step in response to stiff source terms.

#### Multidimensional ADER-CG predictor step

The ADER predictor step consists of developing a space-time representation of the vector of conserved variables in each zone. At second order, the ADER predictor step becomes identical to the predictor step from the predictor–corrector scheme in Sect. 5.1. To retain second order accuracy in time, one only needs to obtain the piecewise linear variation in time, which is written out explicitly in Eq. (54). The extension of the second order predictor–corrector scheme to higher orders is entirely non-trivial. For that reason, we illustrate the ADER construction in two dimensions at third order. Extensions of the present section to even higher orders on three-dimensional structured and unstructured meshes have been presented in Dumbser et al. (2008), Balsara (2009), Balsara et al. (2013) and Dumbser et al. (2013).

In order to write a third order accurate space-time dependence within a zone \(\left( {i,j} \right) \) we first need to identify a set of local space-time basis functions that are defined in a local space-time coordinate system within each zone. Just as we developed local spatial coordinates \(\left( {\tilde{x},\tilde{y}} \right) \) in Eq. (6), we now develop a local time coordinate system given by \(\tilde{t}\equiv {\left( {t-t^{n}} \right) }/{\Delta t}\). Here we assume that we are evolving the solution from time \(t^{n}\) to time \(t^{n+1}=t^{n}+\Delta t\) in a zone with size \(\Delta x\) and \(\Delta y\) in the x- and y-directions. In terms of our local space-time coordinate system we have \(\left( {\tilde{x},\tilde{y},\tilde{t}} \right) \in \left[ {-1/2,1/2} \right] \times \left[ {-1/2,1/2} \right] \times \left[ {0,1} \right] \); we refer to this as a *reference space-time element*. Examining the mappings \(\tilde{x}\equiv {\left( {x-x_i } \right) }/{\Delta x}\), \(\tilde{y}\equiv {\left( {y-y_j } \right) }/{\Delta y}\) and \(\tilde{t}\equiv {\left( {t-t^{n}} \right) }/{\Delta t}\), we see that the reference space-time element is obtained by linearly mapping the zone under consideration to a zone with unit zone size and time step. The spatial dependence can be analogous to that in Eq. (44) so that we can use the same spatial bases that were developed there. To develop space-time bases that retain third order accuracy within the zone being considered, we upgrade Eq. (44) to obtain

Notice that the spatial modes in Eq. (68), i.e., \({\overline{\hbox {U}}}_{i,j} \), \({\hat{\hbox {U}}}_{i,j;x} \), \({\hat{\hbox {U}}}_{i,j;y} \), \({\hat{\hbox {U}}}_{i,j;xx} \), \({\hat{\hbox {U}}}_{i,j;yy} \) and \({\hat{\hbox {U}}}_{i,j;xy} \), are available via some form of non-oscillatory reconstruction. (In the next section we will see that the spatial modes can even be evolved via some form of DG scheme.) The time-dependent modes, i.e., \({\hat{\hbox {U}}}_{i,j;t} \), \({\hat{\hbox {U}}}_{i,j;tt} \), \({\hat{\hbox {U}}}_{i,j;xt} \) and \({\hat{\hbox {U}}}_{i,j;yt} \) are chosen so that Eq. (68) retains all the terms in a Taylor expansion that are needed to provide a third order accurate reconstruction in space and time. Our task, in describing the ADER-CG predictor step, is to obtain the time-dependent modes within a zone when the spatial modes are available in that zone.
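The coordinate mappings used in the reference space-time element are linear and trivial to implement. The sketch below is our own illustration, with hypothetical function names chosen only for this example:

```python
def to_reference(x, y, t, x_i, y_j, t_n, dx, dy, dt):
    """Map physical space-time coordinates in zone (i,j) to the reference
    element, so that (x_t, y_t) lie in [-1/2, 1/2] and t_t lies in [0, 1]."""
    return (x - x_i) / dx, (y - y_j) / dy, (t - t_n) / dt

def from_reference(x_t, y_t, t_t, x_i, y_j, t_n, dx, dy, dt):
    """Inverse mapping from the reference element back to the physical zone."""
    return x_i + x_t * dx, y_j + y_t * dy, t_n + t_t * dt
```

The inverse map is used whenever quantities computed on the reference element have to be placed back on the physical mesh.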

Reasoning by analogy with the second order case, i.e., Eq. (54), we realize that we will have to involve the x- and y-fluxes in order to obtain an “in the small” time update within the zone of interest. Also notice that the terms \({\left( {\hbox {F}\left( {\hbox {U}_{i+1/2,j}^n } \right) -\hbox {F}\left( {\hbox {U}_{i-1/2,j}^n } \right) } \right) }/{\Delta x}\) and \({\left( {\hbox {G}\left( {\hbox {U}_{i,j+1/2}^n } \right) -\hbox {G}\left( {\hbox {U}_{i,j-1/2}^n } \right) } \right) }/{\Delta y}\) in Eq. (54) are just the x- and y-slopes of the x- and y-fluxes respectively. We, therefore, realize that in order to obtain the time-dependent modes in Eq. (68) we will need to study the moments of the fluxes. Reasoning in analogy with Eq. (54) we realize that only certain modes of the fluxes might eventually be needed. However, it is best to explicitly write out the entire modal representation of the x-flux in a reference space-time element as

and the entire modal representation of the y-flux in a reference space-time element as

In the next paragraph we will demonstrate how the governing equation is linearly mapped to the reference space-time element, which also shows the usefulness of working with the reference element. We will see in the next subsection that having all the modes of the two fluxes above is advantageous in the corrector step of the ADER scheme. We also observe that since the entire spatial variation in Eq. (68) is assumed to be known at the beginning of the ADER step, we can obtain the spatial modes \({\hat{\hbox {F}}}_{i,j} \), \({\hat{\hbox {F}}}_{i,j;x} \), \({\hat{\hbox {F}}}_{i,j;y} \), \({\hat{\hbox {F}}}_{i,j;xx} \), \({\hat{\hbox {F}}}_{i,j;yy} \) and \({\hat{\hbox {F}}}_{i,j;xy} \) in Eq. (69) at the beginning of the ADER step. Likewise, we can obtain the spatial modes \({\hat{\hbox {G}}}_{i,j} \), \({\hat{\hbox {G}}}_{i,j;x} \), \({\hat{\hbox {G}}}_{i,j;y} \), \({\hat{\hbox {G}}}_{i,j;xx} \), \({\hat{\hbox {G}}}_{i,j;yy} \) and \({\hat{\hbox {G}}}_{i,j;xy} \) in Eq. (70) at the beginning of the ADER step. We will soon see how this evaluation can be carried out in a computationally efficient manner.

We will soon see that the variation of the conserved variables within a zone, i.e., Eq. (68), can be used to obtain the fluxes in the same zone, i.e., Eqs. (69) and (70). The gradients of the fluxes, in turn, govern the time evolution of the conserved variables. To relate the modes of the fluxes to the time-dependent modes in Eq. (68), one has to utilize the governing equation, i.e., Eq. (1). The governing equation can be transformed to the local space-time coordinates of the zone \(\left( {i,j} \right) \) as

This is tantamount to scaling the *x*-fluxes by \(\left( {{\Delta t}/{\Delta x}} \right) \) and the *y*-fluxes by \(\left( {{\Delta t}/{\Delta y}} \right) \) during the calculation so that we do not need to carry numerous factors of \(\left( {{\Delta t}/{\Delta x}} \right) \) or \(\left( {{\Delta t}/{\Delta y}} \right) \) throughout. This scaling takes us from the physical zone to the reference element. When we reach the end of our calculation, i.e., when we have obtained converged modes in Eqs. (68)–(70), we can always return to the physical zone by rescaling the fluxes as \(\hbox {F}_{i,j} \left( {\tilde{x},\tilde{y},\tilde{t}} \right) \rightarrow \widetilde{\hbox {F}}_{i,j} \left( {\tilde{x},\tilde{y},\tilde{t}} \right) \left( {{\Delta x}/{\Delta t}} \right) \) and \(\hbox {G}_{i,j} \left( {\tilde{x},\tilde{y},\tilde{t}} \right) \rightarrow \widetilde{\hbox {G}}_{i,j} \left( {\tilde{x},\tilde{y},\tilde{t}} \right) \left( {{\Delta y}/{\Delta t}} \right) \). In principle, we could substitute Eqs. (68)–(70) into Eq. (71) and try to find a match to the polynomial terms, but this would become increasingly intractable as the order of accuracy increases. A simpler approach is to project Eq. (71) into a basis space and require the *projection* to hold in a weak form. Thus let \(\phi \left( {\tilde{x},\tilde{y},\tilde{t}} \right) \) be a test function in space and time. We obtain a weak formulation of Eq. (71) by asserting that

While this can be asserted for any space-time test function \(\phi \left( {\tilde{x},\tilde{y},\tilde{t}} \right) \), it is best to use the test functions that are associated with the time-dependent modes in Eq. (68). Since we are interested in the time evolution of Eq. (68), the time-dependent basis functions are, in some sense, the best basis functions to use. The theoretical underpinnings of the finite element method also support this choice. Thus with \(\phi \left( {\tilde{x},\tilde{y},\tilde{t}} \right) =\tilde{t}\) we have:

Similarly, with \(\phi \left( {\tilde{x},\tilde{y},\tilde{t}} \right) =\tilde{t}^{2}\) we have:

Furthermore, with \(\phi \left( {\tilde{x},\tilde{y},\tilde{t}} \right) =\tilde{x}~\tilde{t}\) we get:

Likewise, with \(\phi \left( {\tilde{x},\tilde{y},\tilde{t}} \right) =\tilde{y}~\tilde{t}\) we get:

Please note that the above four equations hold in the reference space-time element. The above four equations can then be rewritten as a more meaningful set as follows:

These are the equations that relate the modes of the x- and y-fluxes to the time-dependent modes in Eq. (68). Although they have been derived by a finite element-like procedure, it is possible to discern the finite-difference-like structure for the time evolution in these equations. It turns out that they can be solved via iteration. The iterative procedure can be started by zeroing out all of the time-dependent terms in Eqs. (68), (69) and (70). Each iteration yields an improved set of terms \({\hat{\hbox {U}}}_{i,j;t} \), \({\hat{\hbox {U}}}_{i,j;tt} \), \({\hat{\hbox {U}}}_{i,j;xt} \) and \({\hat{\hbox {U}}}_{i,j;yt} \). These terms can then be used to improve our approximations for \({\hat{\hbox {F}}}_{i,j;t} \), \({\hat{\hbox {F}}}_{i,j;tt} \), \({\hat{\hbox {F}}}_{i,j;xt} \) and \({\hat{\hbox {F}}}_{i,j;yt} \) in Eq. (69). Similarly, we can improve our approximations for \({\hat{\hbox {G}}}_{i,j;t} \), \({\hat{\hbox {G}}}_{i,j;tt} \), \({\hat{\hbox {G}}}_{i,j;xt} \) and \({\hat{\hbox {G}}}_{i,j;yt} \) in Eq. (70). Notice that each iteration is designed to sharpen our fidelity to the weak form of the governing equation because each iteration is an application of the projection in Eq. (72). Furthermore, owing to the contractive nature of the *Picard iteration*, it is a remarkable result that, at third order, only two iterations of Eq. (77) are needed to achieve third order accuracy. At second order, the Picard iteration theory requires only one iteration, which is why we did not iterate on Eq. (54). At fourth order, one would require three iterations, and so on.
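The contractive character of this iteration is easy to see in a stripped-down setting. The sketch below is our own 1D illustration, not the multidimensional system of Eq. (77): for linear advection \(u_t + a\,u_x = 0\) on the reference element, the flux modes are simply \(\lambda = a\Delta t/\Delta x\) times the solution modes, and matching \(\partial u/\partial \tilde{t} = -\partial \tilde{f}/\partial \tilde{x}\) mode by mode gives the update below. Two Picard iterations reproduce the exact third order space-time polynomial.

```python
# Third-order ADER-CG predictor for 1D linear advection on the reference
# element x in [-1/2, 1/2], t in [0, 1], with basis
# {1, x, x^2 - 1/12, t, t^2, x t} and lam = a * dt / dx.  A hedged sketch.
def ader_predictor(u0, ux, uxx, lam, n_iter=2):
    ut = utt = uxt = 0.0            # start with the time modes zeroed out
    for _ in range(n_iter):         # Picard iteration
        ut_new  = -lam * ux         # constant part of -lam * du/dx
        uxt_new = -2.0 * lam * uxx  # coefficient of x
        utt_new = -0.5 * lam * uxt  # coefficient of t (uses the previous iterate)
        ut, uxt, utt = ut_new, uxt_new, utt_new
    return ut, utt, uxt

def evaluate(u0, ux, uxx, ut, utt, uxt, x, t):
    """Evaluate the space-time polynomial at a point of the reference element."""
    return u0 + ux*x + uxx*(x*x - 1.0/12.0) + ut*t + utt*t*t + uxt*x*t
```

For this linear problem the converged polynomial coincides with the initial spatial polynomial translated by \(\lambda \tilde{t}\), which is the exact solution of the advection equation.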

We still have to specify how the spatial modes are to be obtained in Eqs. (69) and (70). The idea is to identify a set of nodal points in the local space-time coordinate system at \(\tilde{t}=0\). The nine black circles in Fig. 12 show one possible set of such spatial nodes and are given by the ordered set of nodal points in the reference space-time element:

The above set of nodes is labeled from 1 to 9 in Fig. 12. Once the conserved variables are obtained at these nodes using Eq. (68), they can be used to construct the nodal values of the x- and y-fluxes. Let the nodal values for the fluxes be denoted by a tilde, while the modes for the fluxes are denoted by a caret. Denoting the nodal location with a superscript, we now list the transcription from the nodal values to the spatial modes of the x-flux in Eq. (69) as follows

Notice how reminiscent the above expressions are of finite difference approximations for the moments. A similar transcription can be used for obtaining the spatial modes of the y-flux in Eq. (70). The spatial modes in Eqs. (69) and (70) should be computed only once, before the iteration described in the above paragraph is started. Notice that our choice of time-dependent basis functions in Eqs. (68)–(70) is such that the time-dependent modes in Eq. (68) do not change the spatial modes in Eqs. (69) and (70). We have arrived at a better appreciation of the nomenclature *ADER-CG*, where the “CG” refers to the fact that the scheme is continuous Galerkin in time.
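The flavor of this transcription can be conveyed with a 1D analogue. The node placement at \(\tilde{x} = -1/2, 0, +1/2\) and the resulting formulas below are our own illustration, not the 2D formulas themselves:

```python
def nodal_to_modal_1d(f_m, f_0, f_p):
    """Modes of f(x) = f_hat + fx_hat * x + fxx_hat * (x^2 - 1/12) from the
    nodal values at x = -1/2, 0, +1/2 on the reference zone.  The formulas
    have an unmistakable finite-difference flavor."""
    fx  = f_p - f_m                       # first-derivative-like difference
    fxx = 2.0 * (f_p - 2.0*f_0 + f_m)     # second-derivative-like difference
    f   = f_0 + fxx / 12.0                # mean, corrected by the curvature
    return f, fx, fxx
```

Feeding the nodal values of any quadratic polynomial through this transcription returns its modal coefficients exactly, which is all that the third order scheme requires.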

During each iteration, we start with the existing modes \({\hat{\hbox {F}}}_{i,j;x} \), \({\hat{\hbox {G}}}_{i,j;y} \), \({\hat{\hbox {F}}}_{i,j;xt} \), \({\hat{\hbox {G}}}_{i,j;yt} \), \({\hat{\hbox {F}}}_{i,j;xx} \), \({\hat{\hbox {G}}}_{i,j;xy} \), \({\hat{\hbox {F}}}_{i,j;xy} \) and \({\hat{\hbox {G}}}_{i,j;yy} \); please see the right hand sides of Eq. (77). Evaluating the right hand sides of Eq. (77) will then give us an improved set of time-dependent modes \({\hat{\hbox {U}}}_{i,j;t} \), \({\hat{\hbox {U}}}_{i,j;tt} \), \({\hat{\hbox {U}}}_{i,j;xt} \) and \({\hat{\hbox {U}}}_{i,j;yt} \). These can be used to build an improved set of time-dependent modes in Eqs. (69) and (70) for use in the next iteration. We now pick a set of nodal points in the local space-time coordinate system with \(\tilde{t}>0\). The grey and dashed circles in Fig. 12 show one possible set of such nodes in space and time. They are given by the ordered set of nodal points in the reference space-time element:

The above set is labeled from 11 to 15 in Fig. 12. As before, conserved variables can be obtained at those nodes by using our best available approximation of Eq. (68). The conserved variables at these nodes can, in turn, be used to obtain a better approximation for the fluxes at the same nodes. Denoting the nodal location with a superscript, we now list the transcription from the nodal values to the time-dependent modes of the x-flux in Eq. (69) as follows

A similar transcription can be used for obtaining the time-dependent modes of the y-flux in Eq. (70). This completes our description of the predictor step of the ADER-CG scheme in two dimensions at third order.

#### Multidimensional ADER-CG corrector step

Section 4.3 demonstrated a very efficient quadrature-free strategy for starting with a higher-order spatial variation, i.e., Eq. (44), and using it to obtain a spatially averaged numerical flux. In other words, we devised a computationally efficient strategy for integrating Eq. (46) over the face of interest. The Runge–Kutta schemes that were documented in Section 4.3 use multiple stages to build a time-accurate update. The predictor step of the ADER-CG scheme documented above yields the space-time variation of the conserved variables and the fluxes in Eqs. (68)–(70). Equations (3) and (4) show how the space-time integration of the fluxes at zone boundaries yields a one-step update. It is our goal to demonstrate how such an update can be carried out at high orders in a quadrature-free fashion by using an ADER formulation. For the sake of simplicity, we make our demonstration specific to third order on a two-dimensional structured mesh. However, the ideas readily extend to three dimensions and unstructured meshes (Dumbser et al. 2008; Balsara et al. 2009, 2013).

Let us begin by extending Eq. (46) for the HLL flux in the *x*-direction to include space and time variations in the upper *x*-boundary of the zone \(\left( {i,j} \right) \) as

To define an HLL Riemann solver at the *x*-face indexed by \(\left( {i+1/2,j} \right) \), we need to find the extremal wave speeds, \(\hbox {S}_L \) and \(\hbox {S}_R \), flowing in the x-direction at that zone boundary. We can obtain these speeds by evaluating Eq. (68) and its analogue from the zone (*i*+1,*j*) on either side of the spatial and temporal center of the *x*-face being considered. To make this concrete, we build the two vectors of conserved variables given by

The above two states are analogous to the states in Eq. (45) with the exception that Eq. (83) is also centered in time. We then use the left and right boundary values, \(\hbox {U}_{L;~i+1/2,j}^{\left( c \right) } \) and \(\hbox {U}_{R;~i+1/2,j}^{\left( c \right) } \), to obtain \(\hbox {S}_L \) and \(\hbox {S}_R \). Observe that Eq. (83) is different from Eq. (45) in that it corresponds to extending Fig. 8 in the temporal direction. This is shown in Fig. 13. Notice that Fig. 8 only includes spatial variation whereas Fig. 13 includes the spatial and temporal variation of the conserved variables and fluxes on either side of the zone boundary. With \(\hbox {S}_L \) and \(\hbox {S}_R \) being frozen for this time step, Eq. (82) becomes a linear function of \(\hbox {U}_{L;~i+1/2,j} \left( {\tilde{y},\tilde{t}} \right) \) and \(\hbox {F}_{L;~i+1/2,j} \left( {\tilde{y},\tilde{t}} \right) \), which are evaluated by using the space-time variation in the zone that lies to the left of the zone boundary shown in Fig. 13, and \(\hbox {U}_{R;~i+1/2,j} \left( {\tilde{y},\tilde{t}} \right) \) and \(\hbox {F}_{R;~i+1/2,j} \left( {\tilde{y},\tilde{t}} \right) \), which are evaluated by using the space-time variation in the zone that lies to the right of the zone boundary shown in Fig. 13. The utility of having the space-time representation of the solution and fluxes in Eqs. (68)–(70) now becomes readily apparent. Using Eq. (68), the spatially and temporally integrated value of \(\hbox {U}_{L;~i+1/2,j} \left( {\tilde{y},\tilde{t}} \right) \) is then given by using the space-time variation in the zone \(\left( {i,j} \right) \) as

An analogous expression can be written for the space-time integral of \(\hbox {F}_{L;~i+1/2,j} \left( {\tilde{y},\tilde{t}} \right) \) by using Eq. (69). Similarly, the spatially and temporally integrated value of \(\hbox {U}_{R;~i+1/2,j} \left( {\tilde{y},\tilde{t}} \right) \) is given by using the space-time representation of the conserved variables in the zone \(\left( {i+1,j} \right) \) as

An analogous expression can be written for the space-time integral of \(\hbox {F}_{R;~i+1/2,j} \left( {\tilde{y},\tilde{t}} \right) \). These integrals enable us to obtain a third order accurate space-time integration of the numerical flux in Eq. (82). Equations (84) and (85), along with their analogues for the x-flux, enable us to write down that space-time integration explicitly. By applying these ideas in both dimensions we get space-time averaged numerical fluxes that can be directly used in Eq. (3) to obtain a one-step update for our conservation law. This completes our description of the ADER-CG corrector step. The three-dimensional extension has more terms but is easily accomplished with the help of a symbolic manipulation package.
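The quadrature-free character of the corrector can be sketched compactly. The snippet below is our own illustration, with hypothetical mode labels: averaging a modal expansion of the form of Eq. (68) over the face \(\tilde{x} = \pm 1/2\), all basis functions that depend on \(\tilde{y}\) integrate to zero by orthogonality, so only a handful of moments survive; the resulting face averages feed directly into the HLL flux formula.

```python
def face_average_left(m):
    """Space-time average over the face x = +1/2 of the zone to the LEFT.
    m holds the surviving modal coefficients; the y-dependent modes
    (y, y^2 - 1/12, x*y, y*t) integrate to zero and are omitted."""
    return (m['u'] + 0.5*m['x'] + m['xx']/6.0
            + 0.5*m['t'] + m['tt']/3.0 + 0.25*m['xt'])

def face_average_right(m):
    """Same integral, taken over the face x = -1/2 of the zone to the RIGHT."""
    return (m['u'] - 0.5*m['x'] + m['xx']/6.0
            + 0.5*m['t'] + m['tt']/3.0 - 0.25*m['xt'])

def hll_flux(uL, uR, fL, fR, sL, sR):
    """HLL flux from space-time averaged states and fluxes at the face."""
    if sL >= 0.0:
        return fL
    if sR <= 0.0:
        return fR
    return (sR*fL - sL*fR + sL*sR*(uR - uL)) / (sR - sL)
```

Because the averages are closed-form combinations of the modes, no space-time quadrature loop is needed at the faces, which is the point of the quadrature-free formulation.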

## Runge–Kutta discontinuous Galerkin (RKDG) schemes

*Galerkin schemes* refer to a class of schemes that posit a set of basis functions on the entire computational domain and then solve the problem in terms of the modes of the basis functions. Fourier techniques for solving PDEs can be thought of as an example of Galerkin methods. Sine and cosine functions form the basis functions in this example and the solution is expressed in terms of the Fourier modes, i.e., the coefficients of the sines and cosines. Because we are interested in hyperbolic conservation laws that can give rise to discontinuities, it is not advantageous to have a set of basis functions that span the entire computational domain. For example, if a discontinuous function is represented in terms of a discrete set of Fourier basis functions, we would encounter a Gibbs phenomenon at the location of the discontinuity. (In all fairness, spectral methods can handle problems with a few weak and isolated shocks, but it becomes increasingly difficult to handle the general case where strong shocks may form at several locations.)
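The Gibbs phenomenon mentioned above is easy to reproduce numerically. The following self-contained snippet sums the Fourier series of a unit square wave and measures the overshoot near the discontinuity, which stalls near 9% of the jump no matter how many modes are retained:

```python
import numpy as np

def square_wave_partial_sum(x, n_terms):
    """Partial Fourier sum of a square wave that jumps from -1 to +1 at x=0."""
    s = np.zeros_like(x)
    for k in range(1, 2*n_terms, 2):        # odd harmonics only
        s += (4.0 / (np.pi * k)) * np.sin(k * x)
    return s

x = np.linspace(1e-4, np.pi/2, 20000)       # fine grid near the jump at x = 0
peak = square_wave_partial_sum(x, 200).max()  # overshoot peak, ~1.18 not 1.0
```

Increasing `n_terms` narrows the overshoot lobe but does not reduce its height; this is precisely why a global spectral basis struggles with shock-dominated flows.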

The rest of this section is split into three subsections. The first subsection provides a basic description of DG methods. The second subsection describes recent WENO limiters; the subsection after that describes MOOD limiters.

### Basic description of discontinuous Galerkin (DG) methods

*Runge–Kutta Discontinuous Galerkin* (RKDG) methods are based on the idea that within each zone one can have a small set of basis functions that may indeed become discontinuous at zone boundaries (Cockburn and Shu 1989; Cockburn et al. 1989; Cockburn and Shu 1998; Cockburn et al. 1990, 2000). The discontinuities at zone boundaries can then be treated by solving a Riemann problem. The moments of the basis functions then become the independent variables that are to be evolved by the scheme. (The basis functions are also sometimes called trial functions.) Let us consider Eq. (44) to appreciate the difference between a scheme that is based on reconstruction and a discontinuous Galerkin scheme. A third order scheme that reconstructs the solution would reconstruct all the moments in Eq. (44), except of course the zone-averaged value. This would have to be done at each stage of the Runge–Kutta time evolution strategy. Only one evolutionary equation is solved for the vector of conserved variables \({\overline{\hbox {U}}}_{i,j} \) in zone \(\left( {i,j} \right) \). That is, the components of \({\overline{\hbox {U}}}_{i,j} \) in zone \(\left( {i,j} \right) \) are the only *degrees of freedom* in that zone. In contrast, a third order discontinuous Galerkin (DG) method is based on the viewpoint that all the moments in Eq. (44) are degrees of freedom in zone \(\left( {i,j} \right) \) and should be evolved in time. This is to be done in a fashion that is consistent with the governing equations, i.e., the hyperbolic conservation law. Six evolutionary equations are then developed for the six vectors \({\overline{\hbox {U}}}_{i,j} \), \({\hat{\hbox {U}}}_{i,j;x} \), \({\hat{\hbox {U}}}_{i,j;y} \), \({\hat{\hbox {U}}}_{i,j;xx} \), \({\hat{\hbox {U}}}_{i,j;yy} \) and \({\hat{\hbox {U}}}_{i,j;xy} \). That is, we now have six times as many degrees of freedom as we would have in a reconstruction-based algorithm. Thus, in place of Eq. (44), we can extend our notation to show the time-dependence as

All the modes in Eq. (86) have, therefore, been endowed with time-evolution. A third order Runge–Kutta time stepping scheme can be used to discretize their evolution in time.

Recall that reconstruction-based schemes build all the moments of the zone-centered variable in Eq. (44). This reconstruction is carried out at the start of every time step, and yet, only the zone-averaged variable is updated at the end of a time step. In contrast, because all the moments are evolved in an RKDG scheme, and the evolution is consistent with the governing equation, the method can be very accurate. If the solution is smooth to begin with and remains so in most of the zones of the computational domain, then evolving all the moments can really help improve accuracy. Our guiding philosophy in a DG scheme would therefore be to do as little limiting as possible within a zone, because any such limiting would damage the information that is contained in some or all of the higher moments. In regions of smooth flow, no limiting is needed so that the method retains its theoretical accuracy. In practice, the presence of discontinuities forces us to restrict the higher moments in Eq. (86), with the result that RKDG schemes, quite like their finite volume brethren, have to be non-linearly stabilized. However, the philosophy is to apply non-linear stabilization to the moments as sparingly as possible. Practical experience has shown that RKDG schemes can be stabilized with a minimal amount of limiting.

Also recall that as the order of accuracy increases, reconstruction-based schemes use increasingly larger stencils that impede parallelism. If nonlinear stabilization is not needed in the physical problem, the RKDG method requires a very small stencil. The small stencil can become an advantage on parallel computers.

In the course of this section, we will see that DG methods are generalizations of finite volume methods in the sense that they use all the concepts of limiting and Riemann solvers that were initially developed within the context of finite volume schemes. However, DG methods recast these ideas within the context of a finite element framework. This makes the method very proficient at handling flow problems with complicated, body-fitted geometries (Bassi and Rebay 1997; Warburton et al. 1998). DG methods have also been used for solving problems on arbitrary Lagrangian Eulerian (ALE) meshes where the zone boundaries of the mesh can move in response to flow features or other dynamics (van der Vegt and van der Ven 2002a, b; Boscheri and Dumbser 2014, 2017; Boscheri et al. 2014a, b). When dealing with problems with geometric complexity, though, one has to go through the complication of working with a set of boundary-conforming elements (Dubiner 1991; Warburton 1998; Karniadakis and Sherwin 1999). The Galerkin formulation also makes DG methods very useful for solution-dependent space and time adaptivity (Biswas et al. 1994). DG methods enable one to simultaneously have *h-adaptivity*, where the size of the mesh (denoted by “*h*”) is locally refined, and *p-adaptivity*, where the order of the method (denoted by “*p*”) is increased on refined patches. Collectively, this is known as *hp-adaptivity*. The *hp*-adaptive methods can offer spectral-like convergence to the physical solution of a scientific or engineering problem. As a result, DG methods are very popular in engineering applications where one simultaneously has complicated boundaries and a need to refine with increasing accuracy around local surfaces of interest.

Consider the hyperbolic system in Eq. (1) which has to be solved on the mesh shown in Fig. 1. Say we have to take a time step on zones with sizes \(\Delta x\) and \(\Delta y\) in each direction. In terms of the local coordinates within a zone, Eq. (1) can be written as

Notice that the derivatives on the right hand side have been written in the zone’s local coordinates. Comparing Eq. (87) with Eq. (35), it is easy to see how the time discretization might be carried out with a Runge–Kutta method. However, we need to find evolutionary equations for all the moments of Eq. (86). To that end, realize that the basis functions in Eq. (86) are actually a set of orthogonal Legendre polynomials. As with any set of basis functions, we can obtain their coefficients, i.e., the modes or the degrees of freedom, by making an orthogonal projection. When the basis functions are not orthogonal, the derivation becomes only slightly more involved. Thus, in general, we multiply the above equation by an arbitrary test function \(\phi \left( {\tilde{x},\tilde{y}} \right) \) that is defined over the zone \(\left( {i,j} \right) \) of interest and integrate over the zone of interest. Using integration by parts we get:

If the basis functions form an orthogonal set, as they do for Eq. (86), then it is always best to draw the test functions from that set.

It is worthwhile to make four observations about Eq. (88). First, notice that the integrals are applied componentwise for a hyperbolic conservation law. Second, notice that when \(\varphi \left( {\tilde{x},\tilde{y}} \right) \) is set to unity, i.e., our test function is a constant, then the first and fourth terms on the right hand side of Eq. (88) become zero. Equation (88) just yields an evolutionary equation for the conserved quantity, i.e., the first term on the right hand side of Eq. (86). Thus we get

In that case, the boundary integrals in Eq. (89) should match up with the ones in Eqs. (39) and (40). In other words, the boundary integrals on the right hand side of Eq. (88) should retrieve the upwinded fluxes evaluated at the appropriate order at the boundaries of the zone being considered. Since we are illustrating RKDG schemes at third order, we must retrieve Eqs. (46)–(52) if we are using an HLL flux, and analogous expressions if we are using a different flux function. This is achieved if \(\hbox {F}\left( {\tilde{x}=1/2,\tilde{y},t} \right) \) and \(\hbox {F}\left( {\tilde{x}=-1/2,\tilde{y},t} \right) \) are actually the resolved x-fluxes coming from a properly upwinded Riemann solver applied to the upper and lower x-boundaries of the zone being considered. Similarly, \(\hbox {G}\left( {\tilde{x},\tilde{y}=1/2,t} \right) \) and \(\hbox {G}\left( {\tilde{x},\tilde{y}=-1/2,t} \right) \) are resolved y-fluxes provided by an upwinded Riemann solver applied to the upper and lower y-boundaries of the zone being considered. This is referred to as a *weak formulation* of the hyperbolic system. Our reinterpretation of the surface integrals in Eq. (88) provides a properly upwinded flux which, in turn, enables the variables in one zone to interact with their neighbors across the zone boundaries. These upwinded fluxes are used in the update of all the boundary integrals in Eq. (88). Third, notice that when \(\varphi \left( {\tilde{x},\tilde{y}} \right) \) has spatial variation, the first and fourth terms on the right hand side of Eq. (88) pick up non-trivial contributions from the area integrals. Those terms are needed for accurate time-evolution of higher moments, as we will see in the next paragraph. Fourth, notice that when basis functions are non-orthogonal, one has to invert a small matrix, known as a *mass matrix*, in order to obtain the modal time evolution. 
Since our basis set is orthogonal, our mass matrix is a diagonal matrix and we do not face this problem here.
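The diagonality claim is easy to check numerically. The following sketch (our own check, using Gauss–Legendre quadrature) assembles the mass matrix for the six basis functions of Eq. (86) over the reference square and confirms that it is diagonal, with entries \((1,\, 1/12,\, 1/12,\, 1/180,\, 1/180,\, 1/144)\):

```python
import numpy as np

# The 2D basis of Eq. (86): {1, x, y, x^2 - 1/12, y^2 - 1/12, x*y}.
basis = [lambda x, y: np.ones_like(x),
         lambda x, y: x,
         lambda x, y: y,
         lambda x, y: x*x - 1.0/12.0,
         lambda x, y: y*y - 1.0/12.0,
         lambda x, y: x*y]

# 5-point Gauss-Legendre rule, rescaled from [-1, 1] to [-1/2, 1/2];
# it integrates all products of these basis functions exactly.
g, w = np.polynomial.legendre.leggauss(5)
g, w = 0.5 * g, 0.5 * w
X, Y = np.meshgrid(g, g)
W = np.outer(w, w)

mass = np.array([[np.sum(W * p(X, Y) * q(X, Y)) for q in basis] for p in basis])
```

With a non-orthogonal basis the same construction would produce a full matrix, and the moment updates would require its inverse at every stage.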

We now write out the time-evolution of the modes in Eq. (86) explicitly. This is most easily done by using the Legendre polynomials as our test functions. The zeroth moment is already catalogued in Eq. (89). Using \(\varphi \left( {\tilde{x},\tilde{y}} \right) =\tilde{x}\) in Eq. (88) we get

Notice that the factor 1/12 on the left hand side of Eq. (90) comes from the mass matrix. Also notice the area integration, which is an extra term that one has to evaluate with an appropriate order of accuracy. In RKDG schemes, this is usually done by numerical quadrature. Furthermore, observe that both the x-flux terms contribute with the same signs at the boundaries. That is, while there is a conservation principle for the conserved variables, see Eq. (89), there is no conservation principle for the higher moments. In other words, changing the linear variation within a zone does not change the mean value in that zone and the physics of a conservation law only requires the mean value to be conserved. Using \(\varphi \left( {\tilde{x},\tilde{y}} \right) =\tilde{y}\) in Eq. (88) we get

Using \(\varphi \left( {\tilde{x},\tilde{y}} \right) =\left( {\tilde{x}^{2}-1/12} \right) \) in Eq. (88) yields

Using \(\varphi \left( {\tilde{x},\tilde{y}} \right) =\left( {\tilde{y}^{2}-1/12} \right) \) in Eq. (88) yields

Lastly, using \(\varphi \left( {\tilde{x},\tilde{y}} \right) =\tilde{x}~\tilde{y}\) in Eq. (88) yields

This completes our derivation of the time-dependence of the modes in Eq. (86). The third order Runge–Kutta time stepping of Eq. (37) can be used to evolve Eqs. (89)–(94).
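As a concrete, minimal illustration of evolving moments with Runge–Kutta time stepping, the sketch below is our own 1D reduction: a P1 (mean plus slope) DG scheme for linear advection with an upwind flux and a two-stage SSP Runge–Kutta update. It is not the third order 2D scheme derived above, but it exhibits the same structure: the mean obeys a conservative flux difference, while the slope mode picks up boundary fluxes with the same sign plus an area integral, and its mass-matrix entry contributes the factor of 12.

```python
import numpy as np

def rkdg_p1_advect(n_zones=50, a=1.0, cfl=0.2, t_end=1.0):
    """P1 (piecewise linear) RKDG scheme for u_t + a*u_x = 0 on a periodic
    unit domain; a > 0 is assumed.  Each zone carries two degrees of
    freedom: the mean ubar and the slope mode uhx on the local basis x."""
    dx = 1.0 / n_zones
    xc = (np.arange(n_zones) + 0.5) * dx           # zone centers
    ubar = np.sin(2*np.pi*xc) * np.sinc(dx)        # exact zone averages of sin(2 pi x)
    uhx = 2*np.pi*np.cos(2*np.pi*xc) * dx          # slope mode ~ dx * u'(xc)

    def rhs(ub, ux):
        f = a * (ub + 0.5*ux)                      # upwind flux at right faces
        fm = np.roll(f, 1)                         # flux at left faces (periodic)
        dub = -(f - fm) / dx                       # mean: conservative flux difference
        dux = (12.0/dx) * (a*ub - 0.5*(f + fm))    # slope: area + boundary terms
        return dub, dux

    t, dt = 0.0, cfl * dx / a
    while t < t_end - 1e-12:
        step = min(dt, t_end - t)
        d1, e1 = rhs(ubar, uhx)                    # two-stage SSP Runge-Kutta
        u1, v1 = ubar + step*d1, uhx + step*e1
        d2, e2 = rhs(u1, v1)
        ubar, uhx = 0.5*(ubar + u1 + step*d2), 0.5*(uhx + v1 + step*e2)
        t += step
    return xc, ubar
```

Because the initial data are smooth, no limiter is applied here; after one advection period the zone averages return close to their initial values.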

While the DG methods have several genuine advantages in certain circumstances, they also have their drawbacks. As the number of moments that one evolves increases, the permitted explicit time step decreases (Cockburn et al. 2000). One way to rectify this consists of evolving only a few of the lower moments while reconstructing the higher moments (Qiu and Shu 2004, 2005; Balsara et al. 2007; Dumbser et al. 2008). This does increase the permitted time step while relinquishing only a small amount of the accuracy. In the vicinity of discontinuities, a limiter does need to be applied to the higher moments in Eq. (86). The high resolution that comes from evolving the higher moments is only realized if most of the moments are not changed by the limiting process. Thus in problems with several strong, interacting shocks, these methods might lose their advantage. Several limiters have been presented over the years (Biswas et al. 1994; Burbeau et al. 2001; Qiu and Shu 2004, 2005; Balsara et al. 2007; Krivodonova 2007; Zhu et al. 2008; Xu et al. 2009a, b, c; Xu and Lin 2009; Xu et al. 2011; Zhu and Qiu 2011; Zhong and Shu 2013; Zhu et al. 2013; Dumbser et al. 2014). The problem is that no consensus has coalesced around any one particular limiter. In the next subsection, we present a WENO limiter by Zhong and Shu (2013). In the subsection after that, we present the MOOD limiter of Dumbser et al. (2014). Storing the large number of degrees of freedom can also be problematic if computer memory is limited.

### WENO limiter for DG methods

We now describe the simplest form of WENO limiting (Zhong and Shu 2013; Zhu et al. 2013) with several modifications made here to make it amenable to seamless implementation. This limiting strategy is to be used with some form of discontinuity detector so that one only invokes the limiter in zones that have a discontinuity, i.e., zones that are denoted “troubled” zones. (We discuss positivity preserving reconstruction in higher-order schemes in a subsequent section. In that section, we will have occasion to discuss discontinuity detectors.) The underlying idea is that one should only invoke this limiter in as few zones as possible. The other design philosophy is that even if the limiter is invoked in a zone where it may not truly be needed, it should not damage the higher-order accuracy of the DG algorithm. Let us denote the zone that has to be limited on a two-dimensional Cartesian mesh with a subscript “*i*, *j*”. We illustrate the third order limiting procedure for this zone. We assume that from the previous timestep the DG scheme has left us with a polynomial given by the following modal representation

Notice that Eq. (95) resembles Eq. (86), the only difference being that the time dependence has been dropped so as to yield a more compact notation. Displaying the DG limiter at third order in 2D is general enough to enable the reader to extend these ideas to any order and also 3D with the help of a computer algebra system. Notice too that conservation requires that only the mode \({\overline{\hbox {U}}}_{i,j} \) in the above equation should be kept intact, the remaining modal coefficients, i.e., \({\hat{\hbox {U}}}_{i,j;x} \), \({\hat{\hbox {U}}}_{i,j;y} \), \({\hat{\hbox {U}}}_{i,j;xx} \), \({\hat{\hbox {U}}}_{i,j;yy} \) and \({\hat{\hbox {U}}}_{i,j;xy} \), can be modified via the limiting process.

Because our limiting is based on the WENO philosophy, we first define smoothness indicators. For the polynomial in Eq. (95) we can construct a smoothness indicator that is given by

The square bracket in the above equation is not a matrix, but just contains a summation of perfect squares. Here the \(\hbox {u}_{i,j} \left( {\tilde{x},\tilde{y}} \right) \) denotes a component of \(\hbox {U}_{i,j} \left( {\tilde{x},\tilde{y}} \right) \), which can be taken literally to be a component or it can even be taken to be an eigenweight that has been obtained via a characteristic projection. Because the polynomial in Eq. (95) is written in terms of an orthogonal basis set, the integration in Eq. (96) yields a nice closed form expression given by

Here, \({\hat{\hbox {u}}}_{i,j;x} \) can be a component of \({\hat{\hbox {U}}}_{i,j;x} \) if we want to limit on the conserved variables. Alternatively, if we want to limit on the characteristic variables, \({\hat{\hbox {u}}}_{i,j;x} \) can be an eigenweight that is obtained by a characteristic projection of \({\hat{\hbox {U}}}_{i,j;x} \) in one of the two principal directions of the mesh.
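As a concrete illustration, the closed-form smoothness indicator of Eq. (97) can be sketched in a few lines. This is a hedged sketch: it assumes the orthogonal basis \(\{1,\tilde{x},\tilde{y},\tilde{x}^{2}-1/12,\tilde{y}^{2}-1/12,\tilde{x}\tilde{y}\}\) on the reference zone \([-1/2,1/2]^{2}\) and the usual undivided sum of the integrals of all squared first and second derivatives; the resulting coefficients should be checked against Eq. (97) before use.

```python
def smoothness_indicator(ux, uy, uxx, uyy, uxy):
    """Smoothness indicator of the modal polynomial of Eq. (95),
        u = ubar + ux*X + uy*Y + uxx*(X^2 - 1/12) + uyy*(Y^2 - 1/12) + uxy*X*Y,
    taken (by assumption) as the integral over the zone [-1/2,1/2]^2 of the
    squares of all first and second derivatives.  The mean ubar drops out."""
    return (ux**2 + uy**2                      # first-derivative terms
            + (13.0/3.0) * (uxx**2 + uyy**2)   # 1/3 from d/dx, 4 from d2/dx2
            + (7.0/6.0) * uxy**2)              # 1/12 + 1/12 + 1
```

Here `ux`, …, `uxy` can be components of the conserved variables or eigenweights from a characteristic projection, exactly as described in the text.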

The next step consists of realizing that the zone \(\left( {i,j} \right) \) has four immediate von Neumann neighbors given by zones \(\left( {i+1,j} \right) \), \(\left( {i-1,j} \right) \), \(\left( {i,j+1} \right) \) and \(\left( {i,j-1} \right) \). Because solutions of hyperbolic PDEs propagate from one zone to the next, it is likely that even when the solution in zone \(\left( {i,j} \right) \) is troubled, the solution in one of these neighboring zones is salient. As a result, the moments from that neighboring zone, suitably shifted to the current zone, could help to limit the zone \(\left( {i,j} \right) \). We now have to determine what constitutes a suitable shift. Let us focus on the zone \(\left( {i+1,j} \right) \) and examine how its reconstruction polynomial would appear if it were shifted one zone to the left so as to coincide with the zone \(\left( {i,j} \right) \). We then have

Notice that the polynomial in zone \(\left( {i+1,j} \right) \) is now written in the local coordinates of the zone \(\left( {i,j} \right) \). A little simplification yields

Notice that the mean value of Eq. (99), averaged over zone \(\left( {i,j} \right) \), does not equal \({\overline{\hbox {U}}}_{i,j}\). However, if we were to replace \(\left( {{\overline{\hbox {U}}}_{i+1,j} -{\hat{\hbox {U}}}_{i+1,j;x} +{\hat{\hbox {U}}}_{i+1,j;xx} } \right) \) by \({\overline{\hbox {U}}}_{i,j} \) in Eq. (99) then it could potentially be a polynomial that one could use to replace Eq. (95). Therefore, analogous to Eq. (99), we can define a shifted polynomial from zone \(\left( {i+1,j} \right) \) which has the correct zone average for zone \(\left( {i,j} \right) \). It is given by

Equation (100) is the suitably shifted polynomial that is shifted from zone \(\left( {i+1,j} \right) \) to zone \(\left( {i,j} \right) \). In general, we do not want to replace Eq. (95) with Eq. (100). However, if zone \(\left( {i,j} \right) \) is a troubled zone with a bad (i.e., a seriously TVD violating) solution, then this might be warranted. We now see that a WENO-style weighting between all the available von Neumann neighbors might help us decide whether to replace the troubled polynomial and by how much. We should do this weighting in a WENO style in order to avoid very rapid switching of the stencil. Relating Eq. (100) to Eqs. (95) and (97) we can also write down a smoothness indicator for the shifted polynomial in Eq. (100). Note that even though \({\tilde{\hbox {U}}}_{i+1,j} \left( {\tilde{x},\tilde{y}} \right) \) relates to zone \(\left( {i+1,j} \right) \), its smoothness indicator should be evaluated over zone \(\left( {i,j} \right) \) when we seek a limiting procedure for zone \(\left( {i,j} \right) \). Analogous to Eq. (97) we write the smoothness indicator for Eq. (100) explicitly as

This shows us how to shift a polynomial by one zone to a neighboring zone and how to evaluate its smoothness indicator.
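The shift and mean replacement just described can be sketched as follows. The basis \(\{1,\tilde{x},\tilde{y},\tilde{x}^{2}-1/12,\tilde{y}^{2}-1/12,\tilde{x}\tilde{y}\}\) assumed here is consistent with the shifted mean \({\overline{\hbox {U}}}_{i+1,j} -{\hat{\hbox {U}}}_{i+1,j;x} +{\hat{\hbox {U}}}_{i+1,j;xx}\) quoted above, but the normalization should be checked against Eq. (86); the function name is, of course, only illustrative.

```python
def shift_from_right(ubar_target, ubar, ux, uy, uxx, uyy, uxy):
    """Shift the modal polynomial of zone (i+1,j) one zone to the left so
    that it lives on zone (i,j), as in Eqs. (99)-(100).  Assumes the
    orthogonal basis {1, X, Y, X^2-1/12, Y^2-1/12, X*Y} on the reference
    zone [-1/2,1/2]^2.  (ubar, ux, ..., uxy) are the modes of zone
    (i+1,j); ubar_target is the zone average of zone (i,j)."""
    # A point with coordinate X in zone (i,j) has coordinate X-1 in zone
    # (i+1,j).  Substituting X -> X-1 gives, per Eq. (99):
    #   shifted mean = ubar - ux + uxx   (overwritten below)
    #   X mode       = ux - 2*uxx
    #   Y mode       = uy - uxy
    # while the quadratic modes are unchanged.  Eq. (100) then replaces
    # the shifted mean by the target zone's own average for conservation.
    return (ubar_target, ux - 2.0*uxx, uy - uxy, uxx, uyy, uxy)
```

Shifting from the other three von Neumann neighbors, Eqs. (102), (104) and (106), follows by symmetry.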

We have three remaining immediate neighbors for the zone \(\left( {i,j} \right) \). Now that we understand the concept, we quickly write down the analogues of Eq. (100) and the corresponding smoothness indicators. From the zone \(\left( {i-1,j} \right) \) we obtain

and the corresponding smoothness indicator is

From the zone \(\left( {i,j+1} \right) \) we obtain

and the corresponding smoothness indicator is

From the zone \(\left( {i,j-1} \right) \) we obtain

and the corresponding smoothness indicator is

This completes our description of the shifted reconstruction polynomials and their corresponding smoothness indicators.

Now that the smoothness indicators from the neighboring zones are in hand, we can develop the corresponding non-linear weights. To zone \(\left( {i,j} \right) \) we ascribe a central linear weight given by \(\gamma _C =0.96\); and to the four immediate neighbors we ascribe linear weights given by \(\gamma _N =0.01\). The non-linear weights are then given by

In the above equation, we can set \(\varepsilon =10^{-12}\) and \(\hbox {p}=2\) as suggested by Zhong and Shu (2013). The reconstructed polynomial, with limiting, is then given by

Here \(\hbox {u}_{i,j} \left( {\tilde{x},\tilde{y}} \right) \) is a component of Eq. (95); \({\tilde{\hbox {{u}}}}_{i+1,j} \left( {\tilde{x},\tilde{y}} \right) \) is a component of Eq. (100); and \({\tilde{\hbox {{u}}}}_{i-1,j} \left( {\tilde{x},\tilde{y}} \right) \), \({\tilde{\hbox {{u}}}}_{i,j+1} \left( {\tilde{x},\tilde{y}} \right) \) and \({\tilde{\hbox {{u}}}}_{i,j-1} \left( {\tilde{x},\tilde{y}} \right) \) are components of Eqs. (102), (104) and (106) respectively. If characteristic variables are being used, they could be the eigenweights that are obtained from characteristic projection. This completes our description of the WENO limiting for DG schemes. The limiting strategy described here is implemented by exactly following the sequence of equations that is described in this section.
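The non-linear weighting and the final blended polynomial can be sketched as below. The sketch blends only the five higher modes, since the zone average is kept intact; the values \(\gamma _C =0.96\), \(\gamma _N =0.01\), \(\varepsilon =10^{-12}\) and \(\hbox {p}=2\) are taken from the text, while the function name and argument layout are only illustrative.

```python
import numpy as np

def weno_limit_modes(modes_c, modes_nbrs, IS_c, IS_nbrs,
                     gamma_c=0.96, gamma_n=0.01, eps=1.0e-12, p=2):
    """WENO-style blending of the higher modes of the troubled zone with
    the higher modes of the four shifted neighbor polynomials of
    Eqs. (100), (102), (104) and (106).  Each tuple holds the modes
    (ux, uy, uxx, uyy, uxy); the zone average is never touched, so
    conservation is automatic."""
    ISs = np.array([IS_c] + list(IS_nbrs))
    gammas = np.array([gamma_c] + [gamma_n] * len(IS_nbrs))
    w = gammas / (ISs + eps) ** p         # un-normalized non-linear weights
    w /= w.sum()                          # normalize so the weights sum to 1
    stack = np.array([modes_c] + list(modes_nbrs))
    return tuple(w @ stack)               # weighted combination of the modes
```

When all smoothness indicators are comparable, the central polynomial dominates with weight near \(\gamma _C \); when the central zone is much rougher than a neighbor, the weights shift toward that neighbor's shifted polynomial.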

### MOOD limiter for DG methods

MOOD stands for Multidimensional Optimal Order Detection. It is based on the realization that a higher-order scheme may return an oscillatory result at the end of a timestep. Alternatively, a higher-order scheme may return an unphysical result at the end of a timestep. In both those situations, we realize that a lower order scheme would have served us better in those zones that turned out to be pathological (or troubled). The catch is that if one is taking a timestep from a time of \(t^{n}\) to a time of \(t^{n+1}\) then one does not know which zone might produce a troubled result till the timestep has completed. The MOOD philosophy argues that *a priori limiting* of the solution at time \(t^{n}\) may indeed result in excessive limiting in zones where this is not needed. For a WENO scheme, where the reconstruction step includes a non-linear hybridization (i.e., a limiting) procedure, this is not much of an issue. However, DG schemes may, in principle, not need any limiting at all. In such circumstances, falsely invoking the limiter at time \(t^{n}\) can lead to excessive limiting. The MOOD philosophy, therefore, suggests that it is best to hold off on the limiting process till the timestep has completed, i.e., till a time of \(t^{n+1}\). At that advanced time, the solution itself can be polled to see if it violates physical admissibility (i.e., a loss of pressure or density positivity) or numerical admissibility (i.e., production of an oscillatory profile on the mesh). In all such cases, the troubled zone can be tagged and its time integration can be redone in an a posteriori sense. This is called *a posteriori limiting*. Operationally, this limiting involves using a known and stable TVD or low order WENO scheme to evolve the offending zone again from a time of \(t^{n}\) to a time of \(t^{n+1}\). Realize, therefore, that the solution has to be available at both times.
Furthermore, the data from the troubled zones in the DG scheme has to be extracted in a suitable fashion and handed over to a different solver. The data from that different solver has to be handed back in a suitable fashion to the troubled zones in the DG scheme.

MOOD limiting for DG is based on viewing the DG polynomial over one DG zone as being equivalent to specifying just one volume-averaged solution vector on a set of sub-cells of that zone. An early sub-cell based approach to DG limiting was developed in Balsara et al. (2007). MOOD-type limiting for DG schemes has been developed by Dumbser et al. (2014). (An analogous concept that may be described as a subcell finite volume limiter has been developed in Sonntag and Munz (2014).) We have provided several extra clarifications here to make it easy to understand and implement. It works well with both ADER and RK time update strategies. In our explanation of MOOD limiting, let us go from the specific to the general. In this paragraph let us specifically describe in words the process of applying MOOD limiting on a 2D mesh on which we are evolving a third order DG scheme. At a time of \(t^{n}\) and a time of \(t^{n+1}\) we have stored all the modes that are given in Eq. (86) for all the zones of the mesh. The DG scheme evolves these modes so that we have six modes within each zone for each of the conserved variables at each of the two time levels. The DG solution at time level \(t^{n+1}\) may have some troubled zones. Please look at Fig. 14 and let us assume that zone \(\left( {i,j} \right) \) will eventually be found to be a troubled zone, so that we wish to first detect the pathology and then redo the timestep for that zone with a more stable method. We choose a simple TVD or lower order WENO scheme as our stable method. In order for the lower order method to have the same amount of information as the six modes that we have evolved with the DG method, we split each zone of the entire mesh into at least nine sub-cells. Each of these sub-cells will receive a volume-averaged solution from its parent DG cell at time \(t^{n+1}\). 
Because each DG zone has six spatially-varying modes, it can easily supply a unique volume-averaged solution vector to each of its nine sub-cells shown in Fig. 14. We now apply a *physical admissibility detector* (PAD) and a *numerical admissibility detector* (NAD) to each sub-cell. (The PAD and NAD are described in detail in a later paragraph.) If all the sub-cells associated with a parent DG zone are salient at time \(t^{n+1}\), we say that the DG zone had a successful update and we don’t consider that cell any further. However, if any of the sub-cells has an unphysical solution (i.e., it triggers the PAD) or an oscillatory solution (i.e., it triggers the NAD), we declare the zone to be a troubled zone. Let us say, for the sake of argument, that the zone \(\left( {i,j} \right) \) is found to be a troubled zone. We will, therefore, have to redo the time update from a time of \(t^{n}\) to a time of \(t^{n+1}\) for those nine sub-cells in Fig. 14 with a simple TVD or low order WENO scheme that is known to be very stable. The TVD or WENO reconstruction might require a halo of two or three zones, which is why we show a layer of two sub-cells around the nine sub-cells that we identified in Fig. 14. The DG solution in zone \(\left( {i,j} \right) \) at time \(t^{n}\) is then imparted (*scattered*) to the nine sub-cells in Fig. 14. The nine sub-cells shown in Fig. 14 are then evolved from a time of \(t^{n}\) to a time of \(t^{n+1}\) with a TVD or WENO scheme. The nine sub-cells now hold a salient solution at time \(t^{n+1}\). From these nine sub-cell averages at time \(t^{n+1}\) we can retrieve (*gather*) the DG polynomial in zone \(\left( {i,j} \right) \) at time \(t^{n+1}\). We can now say that the zone \(\left( {i,j} \right) \) has undergone an a posteriori MOOD limiting for this timestep.

Now let us consider the general case. Say, for the sake of discussion, that we want to represent the same amount of information that is contained in an \(\hbox {N}\)th order DG polynomial in one dimension. To represent the same amount of information in a finite volume sense we will need (\(\hbox {N}+1\)) sub-cells within each DG zone. These (\(\hbox {N}+1\)) sub-cells hold featureless slabs of fluid, with only one volume-averaged solution vector each. We, therefore, say that the DG polynomial at \(\hbox {N}\)th order has as much information as the volume-averaged solution vectors in each of the (\(\hbox {N}+1\)) sub-cells. For an \(\hbox {N}\)th order DG scheme in “d” dimensions, we will have to split each DG zone into at least \(\left( {\hbox {N+1}} \right) ^{\hbox {d}}\) sub-cells. Figure 15, which is Fig. 2 from Dumbser et al. (2014), shows a flowchart that describes the MOOD limiting of DG schemes. The solution at time \(t^{n+1}\) within each DG zone is scattered to its sub-cells. On those sub-cells, we apply the PAD and the NAD. If none of the sub-cells associated with a DG zone shows any pathology, the DG solution in that zone is deemed acceptable. If not, we flag the zone and scatter the DG solution at the earlier time \(t^{n}\) from the troubled zone as well as its halo of neighbors. This information is now available on the sub-cell mesh. Such a mesh will contain the sub-cells associated with a troubled zone and also all halo sub-cells that are needed for a time update. The sub-cells associated with the troubled zone then undergo a time update from time \(t^{n}\) to time \(t^{n+1}\) with the help of a TVD or low order WENO scheme. The sub-cell solution at the advanced time \(t^{n+1}\) is then gathered back to the troubled zone on the DG mesh. These gather and scatter steps are arranged so that they can be done very efficiently. We see, therefore, that there are two crucial parts that we need to describe further. 
First, we need to describe the scatter and gather steps and their efficient implementation. Second, we need to give some useful information about the PAD and NAD. We do that in the ensuing paragraphs.

First, let us describe the scatter and gather steps. We will do this for a third order DG scheme in 2D. The reader will see that with the help of a computer algebra system the procedure can be extended to any order and to 3D. Let us start with Eq. (86) and describe the scatter step. From Fig. 14 we realize that specifying just the volume-averaged solution vector in each of the nine sub-cells in Fig. 14 would give us more than sufficient information to retrieve all the moments in Eq. (86). The sub-cells have uniform size so that the first sub-cell is given by \(\left[ {-1/2,-1/6} \right] \times \left[ {-1/2,-1/6} \right] \), the second sub-cell is given by \(\left[ {-1/6,1/6} \right] \times \left[ {-1/2,-1/6} \right] \), the third sub-cell is given by \(\left[ {1/6,1/2} \right] \times \left[ {-1/2,-1/6} \right] \) and so on. We now show how the volume-averaged solution vectors in the nine sub-cells in Fig. 14 can be obtained from the DG polynomial. We will denote those volume-averaged solution vectors by \(\hbox {U}_{\left( k \right) } \) with \(k=1,\ldots ,9\). Suppressing the time-dependence in Eq. (86), or just using Eq. (95) for the sake of convenience, we have

This completes our description of the scatter step.
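The scatter step can be sketched as follows, under the same assumed basis \(\{1,\tilde{x},\tilde{y},\tilde{x}^{2}-1/12,\tilde{y}^{2}-1/12,\tilde{x}\tilde{y}\}\). A two-point Gauss–Legendre rule per direction is exact for this quadratic polynomial, so the sub-cell averages below are exact rather than approximate.

```python
import numpy as np

def basis(x, y):
    """Orthogonal basis assumed for the modal polynomial of Eq. (95)."""
    return np.array([1.0, x, y, x*x - 1.0/12.0, y*y - 1.0/12.0, x*y])

def subcell_averages(modes):
    """Scatter step: volume averages of the DG polynomial over the 3x3
    equal sub-cells of Fig. 14.  'modes' = (ubar, ux, uy, uxx, uyy, uxy).
    A 2x2 tensor-product Gauss-Legendre rule is exact here."""
    edges = np.array([-0.5, -1.0/6.0, 1.0/6.0, 0.5])  # sub-cell boundaries
    h = 1.0/3.0                       # sub-cell width
    g = 0.5/np.sqrt(3.0)              # Gauss node offset on a unit interval
    avgs = np.empty((3, 3))
    for jy in range(3):
        for ix in range(3):
            xc = 0.5*(edges[ix] + edges[ix+1])        # sub-cell center
            yc = 0.5*(edges[jy] + edges[jy+1])
            vals = [basis(xc + sx*g*h, yc + sy*g*h) @ np.asarray(modes)
                    for sx in (-1.0, 1.0) for sy in (-1.0, 1.0)]
            avgs[jy, ix] = np.mean(vals)              # equal Gauss weights
    return avgs
```

For the mean mode alone, every sub-cell average is the mean itself, so the scatter step conserves the zone average by construction.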

Let us now focus on the gather step. The gather step reverses the scatter step. In other words, given nine sub-cells with solution vectors that have been evolved up to a time \(t^{n+1}\), we wish to find the best set of coefficients that we use in Eq. (95). Realize that the mean value has to be preserved between the nine sub-cells and the one parent DG zone. As a result, for the sake of conservation, we have

The other moments should be set to the best values that the sub-cell data permit. Equation (110) then gives us a system of nine equations in five unknowns. The optimal solution can be obtained via a least squares minimization of the following \(9\times 5\) overdetermined system

Since it is very easy to solve this least squares system by inverting a small \(5\times 5\) matrix, the solution can be efficiently obtained. In fact, since the matrix only has constant coefficients, the inversion only has to be done once. From Fig. 14 it is also important to realize that the fluxes across the boundaries of zone \(\left( {i,j} \right) \) change when the sub-cells are updated. As a result, the values of the solutions in zones \(\left( {i+1,j} \right) \), \(\left( {i-1,j} \right) \), \(\left( {i,j+1} \right) \) and \(\left( {i,j-1} \right) \) will also change.
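A minimal sketch of the gather step, following the description around Eqs. (110)–(112): the constant \(9\times 5\) matrix is assembled once, its pseudo-inverse (equivalent to solving the small normal equations) is stored, and each gather is then a single matrix–vector product. The basis and the sub-cell ordering (left to right, bottom to top) are assumptions of the sketch; any fixed ordering works as long as it is used consistently.

```python
import numpy as np

def basis_avgs():
    """9x5 matrix whose row k holds the averages of the five varying basis
    functions {X, Y, X^2-1/12, Y^2-1/12, X*Y} over the k-th of the 3x3
    equal sub-cells (one fixed ordering; consistency is all that matters)."""
    a1 = [-1.0/3.0, 0.0, 1.0/3.0]          # averages of X over the thirds
    a2 = [1.0/27.0, -2.0/27.0, 1.0/27.0]   # averages of X^2 - 1/12
    return np.array([[a1[ix], a1[jy], a2[ix], a2[jy], a1[ix]*a1[jy]]
                     for jy in range(3) for ix in range(3)])

PINV = np.linalg.pinv(basis_avgs())        # 5x9 constant matrix; built once

def gather(subcell_avgs):
    """Gather step: recover (ubar, ux, uy, uxx, uyy, uxy) from the nine
    evolved sub-cell averages by least squares.  With equal sub-cell
    volumes, conservation fixes the zone average to be the mean of the
    nine sub-cell averages."""
    u = np.asarray(subcell_avgs, float).reshape(9)
    ubar = u.mean()                         # conservation of the mean
    return (ubar, *(PINV @ (u - ubar)))     # least-squares higher modes
```

When the nine sub-cell averages are exactly consistent with some quadratic polynomial, the least squares fit recovers that polynomial exactly.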

The physical admissibility detector (PAD) consists simply of realizing that the sub-cells associated with each DG zone should each have positive density and pressure. If the flow is relativistic, the zone should also have sub-luminal velocities. In other words, the PAD is just dependent on the physics of the problem. The numerical admissibility detector (NAD) consists of just requiring no new extrema to develop in the solution and it is applied component-wise to the sub-cells. To apply the NAD to the zone \(\left( {i,j} \right) \) in Fig. 14, we go to all the sub-cells associated with all the nine zones shown in Fig. 14. Let us say that \(\hbox {u}_{\left( k \right) }^m \) is the \(\hbox {m}\)th component of the vector of conserved variables in the \(\hbox {k}\)th sub-cell of zone \(\left( {i,j} \right) \). For the \(\hbox {m}\)th component in the solution vector, we find the minimum and maximum value that this component assumes in all of the sub-cells in all the nine DG zones shown in Fig. 14. Let \(\hbox {u}_{\min }^m \) be that minimum value and let \(\hbox {u}_{\max }^m \) be that maximum value. In order to avoid a clipping of physical extrema, we want to allow the solution to slightly exceed the minimum and maximum ranges if needed. So we require the \(\hbox {m}\)th component of the solution vector in all the sub-cells of zone \(\left( {i,j} \right) \) to lie in the following range in order to be numerically admissible. The range is given by

The extent by which the minimum or maximum can be exceeded is given by “\(\delta ^{m}\)”. Unfortunately, the value of “\(\delta ^{m}\)” is set by heuristic considerations. However, a reasonable suggestion from Dumbser et al. (2014) is to use

with \(\delta _0 =10^{-4}\) and \(\varepsilon =10^{-3}\) being used in the above equation. With the arrangement of terms in Eqs. (113) and (114), the solution is allowed to develop some new extrema as long as the extrema are bounded. If the conditions in Eq. (113) are passed by all the components “m” of all the sub-cells, we say that the DG zone \(\left( {i,j} \right) \) has passed the NAD. If the DG zone \(\left( {i,j} \right) \) also passes the PAD, we say that the zone \(\left( {i,j} \right) \) is acceptable and does not need any further MOOD limiting. If a zone does not pass the PAD and NAD conditions, we use the scatter and gather process from Eqs. (110) to (112) to redo the time-evolution in the troubled zone with a lower order TVD or WENO scheme.
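A sketch of the NAD test of Eqs. (113)–(114) for a single component. The relaxation \(\delta ^{m}=\max \left( \delta _0 ,\varepsilon \left( \hbox {u}_{\max }^m -\hbox {u}_{\min }^m \right) \right) \) is the form we take Eq. (114) to have, following Dumbser et al. (2014); in that formulation the neighborhood extrema are taken from the solution at the earlier time \(t^{n}\). Both points should be checked against the equations.

```python
import numpy as np

def nad_passes(u_new_sub, u_nbr_sub, delta0=1.0e-4, eps=1.0e-3):
    """Numerical admissibility detector (NAD) for one conserved component.
    u_new_sub : values of the component in the sub-cells of zone (i,j)
                after the update
    u_nbr_sub : values of the component in the sub-cells of zone (i,j)
                and its neighbors, used to set the allowed range
    Relaxed discrete maximum principle of Eqs. (113)-(114); the form of
    delta below is an assumption following Dumbser et al. (2014)."""
    umin, umax = float(np.min(u_nbr_sub)), float(np.max(u_nbr_sub))
    delta = max(delta0, eps * (umax - umin))   # allow slight new extrema
    u = np.asarray(u_new_sub)
    return bool(np.all(u >= umin - delta) and np.all(u <= umax + delta))
```

The PAD, by contrast, is purely physical: it checks positivity of density and pressure (and sub-luminal velocities for relativistic flow) in each sub-cell, so it needs no tunable parameters.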

## Positivity preserving reconstruction

Obtaining numerical solutions for the Euler equations that retain positive densities and pressures is critically important. Some Riemann solvers can guarantee a positive resolved state while others cannot make such a guarantee. A Riemann solver that guarantees positivity can be very useful in obtaining a physical solution. When either the density or pressure becomes negative, the Euler system loses its convexity property, handicapping our ability to obtain physical solutions. However, a loss of *positivity* does not arise exclusively from the Riemann solver. It can even arise due to the kind of reconstruction that is used. The TVD property only guarantees positivity of the reconstructed profile in one dimension. In multiple dimensions, certain parts of a reconstructed profile within a zone can lose positivity even when TVD reconstruction is used. This loss of positivity usually occurs near the vertices of a zone, where the piecewise linear profile reaches its extremal values. For higher-order reconstruction, the problem becomes a little worse because the reconstructed profile can also attain extremal values inside the zone. For that reason, we focus attention on obtaining a reconstructed profile that retains positive density and pressure. There are several papers where the topic of positivity has been discussed, both for Euler and MHD flow (Barth and Frederickson 1990; Barth 1995; Liu and Lax 1996; Lax and Liu 1998; Balsara and Spicer 1999b; Zhang and Shu 2010; Balsara 2012b). The positivity preserving method we present here derives from the latter two references. A video introduction to this work is included in Balsara (2012b). Recently, Balsara and Kim (2016) have presented a scheme for RMHD that tries to preserve the sub-luminal velocity of the flow.

We describe the method on a two dimensional structured mesh, though it extends naturally to three dimensions and it could also extend naturally to unstructured meshes. Let \(\rho \) and P be the density and pressure and let **v** be the velocity vector. Let \(\gamma \) be the ratio of specific heats. Let **m** denote the momentum density and \(\varepsilon \) the energy density. For Euler flow we can write \(\varepsilon =\hbox {P}/{\left( {\gamma -1} \right) }+{\rho ~\mathbf{v}^{2}}/2\).

We first need to define a *flattener function* that can identify regions of strong shocks within our computational domain. The method, therefore, begins by obtaining the divergence of the velocity, \(\left( {\nabla \cdot \mathbf{v}} \right) _{i,j} \), and the sound speed, \(\hbox {c}_{s;i,j} \equiv \sqrt{{\gamma \hbox {P}_{i,j} }/{\bar{{\rho }}_{i,j} }}\), within a zone (*i*, *j*) as shown in Fig. 1. To identify a shock, the undivided divergence of the velocity within a zone has to be compared with the minimum of the sound speed in the zone (*i*, *j*) and all its immediate neighbors. Thus we need the minimum sound speed from all the neighbors, see Fig. 1. It is defined by

In each zone, which is assumed to have an extent \(\Delta x\), we define the flattener as

While there is some flexibility in the value of \(\kappa _1 \), here we take \(\kappa _1 =0.4\). In an intuitive sense, \(\kappa _1 \) sets the threshold at which the velocity divergence, measured relative to the neighboring sound speeds, is deemed strong enough to indicate a shock. Numerical experimentation has shown this value to work well at several orders and for a large range of problems. Notice from the structure of the above equation that when the flow develops rarefactions, i.e., \(\left( {\nabla \cdot \mathbf{v}} \right) _{i,j} \ge 0\), the reconstruction is left completely untouched by the flattener. For compressive motions of modest strength, i.e., when \(-\kappa _1 \hbox { c}_{s;i,j}^{\min -nbr}<\Delta x~\left( {\nabla \cdot \mathbf{v}} \right) _{i,j} <0\), the flattener also leaves the reconstruction untouched. We, therefore, see that \(\eta _{i,j} =0\) when the flow is smooth and it goes to \(\eta _{i,j} =1\) in a continuous fashion when strong shocks are present. It is possible to improve on the previous flattener. Zones that are about to be run over by a shock but have not yet entered the shock would also be stabilized if they were to experience some flattening. We identify such situations by looking at the pressure variation. We describe the method for the x-direction as

Please note that Eq. (116) is applied first to the entire mesh in order to identify zones that are already inside a shock. Equation (117) is applied subsequently in order to identify zones that are about to be run over by a shock. It is trivial to extend the above equation to the y-direction. For multidimensional problems, the above strategy can be applied to each of the principal directions of the mesh.
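One plausible realization of the flattener of Eq. (116), consistent with the thresholds described in the prose, is sketched below. The exact ramp, and the pressure-based extension of Eq. (117), should be taken from the equations themselves; in particular, the end point of the ramp used here (\(\Delta x\left( {\nabla \cdot \mathbf{v}} \right) =-2\kappa _1 \hbox {c}_{s}^{\min -nbr}\)) is an assumption of this sketch.

```python
def flattener(div_v, dx, cs_min_nbr, kappa1=0.4):
    """Flattener eta_{i,j}: a sketch of one plausible ramp matching the
    prose around Eq. (116).  eta = 0 for rarefactions and for weak
    compressions (dx*div_v > -kappa1*cs_min_nbr), and rises continuously
    to 1 for strong shocks.  div_v is the velocity divergence in the zone,
    cs_min_nbr the minimum sound speed over the zone and its neighbors."""
    x = -(dx * div_v + kappa1 * cs_min_nbr) / (kappa1 * cs_min_nbr)
    return min(1.0, max(0.0, x))           # clamp the ramp to [0, 1]
```

With this form the flattener switches on only once the undivided velocity divergence is more negative than \(-\kappa _1 \hbox {c}_{s}^{\min -nbr}\), exactly as the text requires.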

We now wish to obtain the minimum and maximum values of the density and pressure variables from the neighboring zones. For Fig. 1, we can do this for the density variable by setting

where the overbars indicate zone-averaged values. A multidimensional TVD limiting strategy would have demanded that \(\rho _{i,j}^{\mathrm{min}-nbr}\le \rho _{i,j}(x,y)\le \rho _{i,j}^{\mathrm{max}-nbr}\), where \(\rho _{i,j}(x,y)\) is the reconstructed density in the zone of interest. Similar expressions should be obtained for the pressure.

To accommodate non-oscillatory reconstruction schemes, we need to extend the range \(\left[ \rho _{i,j}^{\mathrm{min}-nbr},\rho _{i,j}^{\mathrm{max}-nbr}\right] \) in a solution-dependent way. Using the flattener variable, this is easily done as:

For this work we took \(\kappa _{2} = 0.4\) based on extensive numerical experimentation. Observe that the role of \(\kappa _2 \) is to extend the range of permitted densities to \(\left[ (1-\kappa _{2})\rho _{i,j}^{\min -nbr}\,,(1+\kappa _{2})\rho _{i,j}^{\max -nbr}\right] \) in regions of smooth flow. If strong shocks are present in the vicinity of the zone of interest, the range of permitted densities is smoothly reduced to \(\left[ \rho _{i,j}^{\min -nbr}\,,\rho _{i,j}^{\max -nbr}\right] \) as the strength of the shocks becomes progressively larger. We can proceed similarly for the pressures. As a result, within each zone (*i*, *j*) we obtain a range of densities \(\left[ \rho _{i,j}^{\mathrm{min}-extended},\rho _{i,j}^{\mathrm{max}-extended}\right] \) and demand that the reconstructed density profile satisfy \(\rho _{i,j}^{\min -extended}\le \rho _{i,j}(x,y)\le \rho _{i,j}^{\max -extended}\). Similarly, we obtain a range of pressures \(\left[ \hbox {P}_{i,j}^{\mathrm{min}-extended},\hbox {P}_{i,j}^{\mathrm{max}-extended}\right] \) and demand that the pressure variable that can be derived at any point within the zone of interest be bounded by \(\hbox {P}_{i,j}^{\min -extended}\le \hbox {P}_{i,j}(x,y)\le \hbox {P}_{i,j}^{\max -extended}\). In practice, it might be valuable to also provide absolute floor values for \(\rho _{i,j}^{\min -extended} \) and \(\hbox {P}_{i,j}^{\min -extended}\).

Notice from the previous two paragraphs that the density is a conserved variable and the zone-averaged density is already contained within the range \(\left[ \rho _{i,j}^{\mathrm{min}-extended},\rho _{i,j}^{\mathrm{max}-extended}\right] \) by construction. Thus bringing the reconstructed density within the range simply requires us to reduce the spatially varying part of the density. The pressure, on the other hand, is a derived variable. While the zone-averaged pressure still lies within the range \(\left[ \hbox {P}_{i,j}^{\mathrm{min}-extended},\hbox {P}_{i,j}^{\mathrm{max}-extended}\right] \), bringing the reconstructed pressure within this range is harder, especially since the reconstruction is almost always expressed in terms of the conserved variables. The next insight comes from Zhang and Shu (2010) who presented an implementable strategy for doing this. For any conserved variable, say for instance the density in the zone (*i*, *j*), we can write

Here \(\rho _{i,j}(x, y)\) is the original reconstructed profile in the zone of interest, \(\bar{\rho }_{i,j}\) is the zone-averaged density and \(\tau \in \left[ {0,1} \right] \). Please do not confuse “\(\tau \)” with the time variable. In this section, it will refer exclusively to a parameter we use to restore positivity. When \(\tau =1\), the corrected profile \(\tilde{\rho }_{i,j} \left( {x,y} \right) \) is exactly equal to \(\rho _{i,j} \left( {x,y} \right) \). Thus if the entire reconstructed density lies within the desired range then such a situation is equivalent to setting \(\tau =1\) within that zone. If the reconstructed profile lies outside the range, one can always bring the corrected profile \(\tilde{\rho }_{i,j} \left( {x,y} \right) \) within the range by finding some \(\tau <1\) which accomplishes this. For \(\tau =0\), the corrected profile collapses to the zone average, which lies within the range by construction; consequently, any conserved variable can be brought within the desired range by progressively reducing the value of “\(\tau \)” from unity till the variable is within the range.

The implementable strategy, which draws on Sanders (1988), Barth (1995) and Zhang and Shu (2010), consists of having a set of “*Q*” nodal points \(\left\{ \left( {x^{q},y^{q}} \right) ;q=1, \ldots ,Q \right\} \) within each zone and evaluating the entire vector of conserved variables at those points. The index “*q*” tags the nodal points within each zone. It is worth pointing out that the present strategy requires a judicious choice of nodal points in order to work well. We will give some further details about the choice of nodal points for a structured mesh at the end of this section. Thus we have \(\rho _{i,j}^q \equiv \rho _{i,j} \left( {x^{q},y^{q}} \right) \) and we can also use them to find \(\rho _{i,j}^{\min -zone} =\min \left( {\rho _{i,j}^1 ,\rho _{i,j}^2 ,\ldots ,\rho _{i,j}^Q } \right) \) and \(\rho _{i,j}^{\max -zone} =\max \left( {\rho _{i,j}^1 ,\rho _{i,j}^2 ,\ldots ,\rho _{i,j}^Q } \right) \). As shown by Barth (1995), within each zone (*i*, *j*) we can obtain a variable

Then the corrected profile for the density, which lies within the desired solution-dependent range and has sufficient leeway to be a non-oscillatory reconstruction, is given by

Notice that Eqs. (120) and (122) differ in their import, because \(\tau _{i,j} \) from Eq. (121) is used in Eq. (122). For most practical calculations, this correction will only be invoked in an extremely small fraction of zones and, even then, only for a very small fraction of the total number of time steps. In practice, the physical velocity should not change when the density profile is corrected. Since the momentum density scales with the density, when the spatial variation in the density is reduced, the spatial variation in the momentum density should be reduced by the same factor. Similarly, the spatial variation in the total energy density should also be reduced by the same factor.
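The correction factor of Eq. (121) can be sketched as follows, assuming it takes the standard Barth (1995)-style linear-rescaling form: the largest \(\tau \in \left[ {0,1} \right] \) for which the corrected profile of Eq. (120) respects the extended range. The exact form should be checked against Eq. (121).

```python
def density_tau(rho_bar, rho_min_zone, rho_max_zone, rho_min_ext, rho_max_ext):
    """Correction factor tau_{i,j} of Eq. (121), assumed to take the
    Barth (1995)-style form: the largest tau in [0,1] such that
        rho_tilde = rho_bar + tau*(rho - rho_bar)          (Eq. 120)
    keeps the nodal extrema of the zone inside the extended range.
    Assumes the zone mean rho_bar itself lies inside the range, which the
    text guarantees by construction."""
    tau = 1.0
    if rho_min_zone < rho_min_ext:    # low side of the range is violated
        tau = min(tau, (rho_bar - rho_min_ext) / (rho_bar - rho_min_zone))
    if rho_max_zone > rho_max_ext:    # high side of the range is violated
        tau = min(tau, (rho_max_ext - rho_bar) / (rho_max_zone - rho_bar))
    return max(0.0, tau)
```

For example, with \(\bar{\rho }=1\), nodal extrema \(\left[ 0.4, 1.5 \right] \) and an extended range \(\left[ 0.5, 1.4 \right] \), the factor is \(\tau =0.8\), and the corrected extrema \(\left[ 0.52, 1.4 \right] \) indeed lie within the range.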

The previous paragraph has shown how the density is brought within the desired range. We now describe the process of bringing the pressure within the desired range for Euler flow. The analogous demonstration for MHD flow is presented in Balsara (2012b). The positivity for the pressure variable is enforced after the positivity fixes for the density variable have been incorporated, as described in the previous paragraph. The philosophy applied here is quite similar to the one used for the density. The only difference is that the pressure is a derived variable. Thus we write

As before, we have \(\tau \in \left[ {0,1} \right] \), and we observe that with \(\tau =0\) the pressure is guaranteed to be within the desired range. Our positivity enforcing method relies on the assumption that the zone-averaged values always retain positive density and pressure, which can indeed be guaranteed by using a positivity preserving Riemann solver. Working with the previously defined nodal points, we can define \(\rho _{i,j}^{q}\equiv \rho _{i,j}(x^{q},y^{q}) \,,\,\mathbf{m}_{i,j}^{q}\equiv \mathbf{m}_{i,j}(x^{q},y^{q})\) and \(\varepsilon _{i,j}^{q}\equiv \varepsilon _{i,j}(x^{q},y^{q})\). We can then define the pressure at each nodal point by

If \(\hbox {P}_{i,j}^{q}\) lies within the desired range of pressures, we set a nodal variable \(\tau _{i,j}^{q}=1\). If \(\hbox {P}_{i,j}^{q}\) is not within the desired range, we wish to find a nodal variable \(\tau _{i,j}^{q}<1\) which brings it within the desired range. We illustrate the case where the \(\hbox {P}_{i,j}^{\min -extended}\) bound is violated by the *q*th nodal point. The variable \(\tau _{i,j}^{q}<1\) which brings that nodal pressure back within the desired range is given by solving

The above equation is easy to solve because it is actually a quadratic, a fact made apparent by writing it explicitly as

The above step should be done for all the defective nodes within a zone. As before, we expect that only a very small fraction of zones in a practical computation will need this pressure positivity fix. We can then find \(\tau _{i,j}=\min (\tau _{i,j}^{1},\tau _{i,j}^{2},\ldots ,\tau _{i,j}^{Q})\). As before, \(\tau _{i,j}\) can now be used to shrink the spatially varying part of all the conserved variables in zone (*i*, *j*); i.e., as shown in Eq. (9). Indeed note from the above two equations that one has to shrink the spatial variation of *all* the conserved variables in order to bring *all* the nodal pressures within the desired range. This completes our description of the positivity preserving scheme for Euler flow.
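The logic of the pressure fix can be sketched as follows. The text solves the quadratic in \(\tau \) in closed form; the sketch below uses a simple bisection on the segment between the zone average and the defective node as a robust stand-in, for a two-dimensional ideal-gas Euler state. The function names are ours.

```python
def pressure_from_conserved(rho, mx, my, E, gamma=1.4):
    """Ideal-gas pressure from the conserved variables (rho, mx, my, E)."""
    return (gamma - 1.0) * (E - 0.5 * (mx**2 + my**2) / rho)

def pressure_tau(u_bar, u_node, p_min, gamma=1.4, iters=60):
    """Find tau in [0, 1] with P(u_bar + tau*(u_node - u_bar)) = p_min.
    The text solves the equivalent quadratic in closed form; bisection is
    used here only as a simple, robust stand-in.  The zone average u_bar
    is assumed to have pressure above p_min."""
    def p_of(tau):
        u = [ub + tau * (uq - ub) for ub, uq in zip(u_bar, u_node)]
        return pressure_from_conserved(*u, gamma=gamma)
    if p_of(1.0) >= p_min:          # node already acceptable
        return 1.0
    lo, hi = 0.0, 1.0               # p_of(lo) >= p_min > p_of(hi)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if p_of(mid) >= p_min:
            lo = mid
        else:
            hi = mid
    return lo                       # conservative side of the root
```

Taking the minimum of these nodal \(\tau \) values and scaling the spatial variation of *all* the conserved variables by it mirrors the prescription in the text.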

The method described above needs to be implemented on a set of nodal points within a zone. The nodes should be picked in such a way that they bring out the extremal variation within a zone. For piecewise linear reconstruction, the extrema in the reconstructed function are always obtained at the vertices of the zone. Because piecewise linear reconstruction is a special sub-case of any higher-order reconstruction, the vertices should always be included in the set of nodes within a zone, even at higher orders. For higher-order reconstruction, Balsara (2012b) provides a detailed description of how the nodes are to be picked. For a two-dimensional mesh at third order, Fig. 12 provides a good example of nodal points that might be used. We only use the black circles in Fig. 12 for enforcing positivity.

It is also good to point out that the methods in this section are designed to save a code from a rare code crash that may arise from a negative density or pressure in a few zones. But they are not intended to overcome known intrinsic limitations in the methods. Nor will they overcome badly designed initial conditions. For example, it has been well known (Toro and Titarev 2002) that large differences in tangential velocity across a moving interface are problematical for such methods. Higher-order methods will go some way toward ameliorating this problem if the tangential discontinuity is smoothed out over a few zones; but if the tangential discontinuity is abrupt, there is no solution for Toro’s problem. A variant of this issue, as it pertains to relativistic flow, has also been catalogued in the literature (Mignone et al. 2005; Martí and Müller 2015). Martí and Müller describe a strong shock with relativistic tangential speeds. In this case, the shock propagates at a wrong speed, spoiling the solution behind it. Again, the methods described in this section do not correct for such situations.

There is another situation where the methods do help somewhat. It has to do with problems involving strong magnetization. The methods described here have been extended to non-relativistic MHD (Balsara and Spicer 1999b; Balsara 2012b), but not to relativistic MHD. For RMHD, Komissarov (1999) has designed some rather pathological problems involving highly magnetized, relativistic explosions. In such situations, some of the worst difficulties in troubled zones are circumvented by redefining conserved variables in the problematical zones so that they are actually averages derived from neighboring zones. Even when it works, this patch-up alas causes a loss of conservation. Using very small timesteps in the problematical zones, in conjunction with a more dissipative Riemann solver, can help too; however, we recognize again that this is a less-than-appealing resolution of a pathological problem. This latter option at least has the virtue of being conservative, as opposed to the option advocated by Komissarov (1999).

## Accuracy analysis on multidimensional test problems

Since we have catalogued several high accuracy schemes, it becomes interesting to demonstrate the difference that order of accuracy makes in the solution of problems with smooth flow. When demonstrating the order of accuracy of a method, it is very helpful (though not essential) to pick test problems whose initial conditions and time-evolution can be specified analytically. We demonstrate the order of accuracy of the higher-order schemes that were catalogued in the previous sections. The same schemes were used in this and the next section. For all the non-relativistic problems the reconstruction was done in conservative variables with the pressure positivity ideas described in the previous section (Balsara 2012b; Balsara et al. 2009, 2013). The relativistic formulation was also fully conservative in its update (Balsara and Kim 2016), however, it used reconstruction in the four-velocity variables in order to ensure sub-luminal reconstruction of the velocities. The speed of light is taken to be unity for all relativistic problems; i.e., we use geometrized units.

### Hydrodynamical vortex with ADER-WENO schemes

In the hydrodynamic vortex problem, presented by Jiang and Shu (1996), an isentropic vortex propagates at \(45{^{\circ }}\) to the grid lines in a two-dimensional domain with periodic boundaries given by \([-\,5, 5] \times [-\,5, 5]\). The unperturbed flow at the initial time can be written as \((\rho , P, v_x, v_y)=(1 , 1, 1, 1)\). The ratio of the specific heats is given by \(\gamma = 1.4\). The entropy and the temperature are defined as \(S = P /\rho ^{\gamma }\) and \(T = P / \rho \). The vortex is set up as a fluctuation of the unperturbed flow with the fluctuations given by

Its strength is controlled by the parameter \(\varepsilon \), and we set \(\varepsilon = 5\). The radius “*r*” is measured from the origin of the domain and can be written as \(r^{2}= x^{2} + y^{2}\). Because the vortex represents a self-similar flow profile, it undergoes a form-preserving translation along the diagonal of the computational domain. As a result, the above initial conditions can be used to specify the fluid variables at any later time.
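For readers who wish to set this problem up, a minimal sketch of the initialization follows. It assumes the standard fluctuation formulae of Jiang and Shu (1996): a Gaussian-enveloped swirl in the velocity and a temperature perturbation chosen so that the entropy \(S = P/\rho^{\gamma}\) stays constant across the vortex.

```python
import numpy as np

def isentropic_vortex(x, y, eps=5.0, gamma=1.4):
    """Primitive variables (rho, P, vx, vy) for the Jiang & Shu (1996)
    isentropic vortex, superposed on the uniform flow (1, 1, 1, 1)."""
    r2 = x**2 + y**2
    # Velocity fluctuation: a rigid swirl modulated by a Gaussian envelope.
    dv = eps / (2.0 * np.pi) * np.exp(0.5 * (1.0 - r2))
    vx = 1.0 - dv * y
    vy = 1.0 + dv * x
    # Temperature fluctuation chosen so that S = P/rho^gamma remains 1.
    dT = -(gamma - 1.0) * eps**2 / (8.0 * gamma * np.pi**2) * np.exp(1.0 - r2)
    T = 1.0 + dT
    rho = T ** (1.0 / (gamma - 1.0))
    P = rho * T
    return rho, P, vx, vy
```

Far from the vortex center the state reverts to the unperturbed flow, so the periodic boundaries see an essentially uniform state.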

The analytically predicted conserved variables can be compared to the numerically computed conserved variables in order to demonstrate accuracy. Note though that once one goes past second order, initializing the zone-averaged conserved variables on a mesh is a non-trivial exercise. The reason is easy to illustrate at third order. The above equations can be used to predict the conserved variables associated with the vortex for all times and at any point within a zone. Thus one can predict them at the nodes defined by Eq. (78). Notice though that at third order, the zone-averaged conserved variables are not well-approximated by the conserved variables that are evaluated at the central node within a zone. It actually requires a numerical quadrature to evaluate the zone-averaged conserved variables. Thus if we take \(\hbox {U}_{i,j}^1 ,\ldots ,\hbox {U}_{i,j}^9 \) to be the values of the conserved variables that are evaluated at the nine nodes from Eq. (78) within zone \(\left( {i,j} \right) \), we have

Figure 16a, b show the logarithms of the errors measured in the \(L_1\) and \(L_\infty \) norms for the vortex test problem as a function of the logarithm of the zone size \(\Delta x\). This is done for the second, third and fourth order schemes. We see that the higher-order schemes produce a smaller error on the coarsest meshes. Moreover, as the mesh is refined, the error in the higher-order schemes decreases much faster with mesh refinement. Schemes with WENO reconstruction and ADER time stepping were used to generate the third and fourth order results. The second order scheme used an MC limiter with a predictor–corrector formulation.
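The zone-averaging quadrature discussed above can be sketched as follows. The nine-point \(3\times 3\) tensor-product Gauss–Legendre rule used here is a stand-in for the specific nodes of Eq. (78); any quadrature of sufficient order would serve for initializing zone averages at third or fourth order.

```python
import numpy as np

def zone_average(f, xc, yc, dx, dy):
    """Zone-averaged value of f over the zone centred at (xc, yc) using a
    3x3 tensor-product Gauss-Legendre quadrature (exact for polynomials up
    to fifth degree in each direction, more than enough at fourth order)."""
    g = 0.5 * np.sqrt(3.0 / 5.0)            # 1D abscissae on [-1/2, 1/2]
    pts = np.array([-g, 0.0, g])
    wts = np.array([5.0, 8.0, 5.0]) / 18.0  # 1D weights summing to 1
    avg = 0.0
    for xi, wi in zip(pts, wts):
        for yj, wj in zip(pts, wts):
            avg += wi * wj * f(xc + xi * dx, yc + yj * dy)
    return avg
```

Note that for \(f(x,y)=x^{2}\) on a unit zone the average is \(1/12\), not the central-node value \(0\), which is precisely why the central node alone does not suffice past second order.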

### MHD vortex with DG and PNPM schemes

In the previous subsection, we presented a genuinely two-dimensional Euler problem associated with a fluid vortex that was made to propagate at \(45{^{\circ }}\) to the computational mesh. The problem was extended to MHD in Balsara (2004). It is especially good for accuracy testing because it consists of a smoothly-varying and dynamically stable configuration that carries out non-trivial motion in the computational domain. The problem is set up on a two-dimensional domain given by \([-\,5,5]\times [-\,5,5]\). The domain is periodic in both directions. An unperturbed magnetohydrodynamic flow with \((\rho , P, v_x, v_y, B_x, B_y) = (1, 1, 1, 1, 0, 0)\) is initialized on the computational domain. The ratio of specific heats is given by \(\gamma = 5/3\). The vortex is initialized at the center of the computational domain by way of fluctuations in the velocity and magnetic fields given by

We used \(\mu =2\pi \) in the above equation for the results shown here. The magnetic vector potential in the z-direction associated with the magnetic field in the previous equation is given by

The magnetic vector potential plays an important role in the divergence-free initialization of the magnetic field on the computational domain. The circular motion of the vortex produces a centrifugal force. The tension in the magnetic field lines provides a centripetal force. The magnetic pressure also contributes to the dynamical balance in addition to the gas pressure. The condition for dynamical balance is given by

For the fluid case, Jiang and Shu (1996) provide an isentropic solution for the above equation. For the MHD case it is simplest to set the density to unity and solve the above equation for the pressure. The fluctuation in the pressure is then given by

As a result, all aspects of the flow field are available in analytical form for all time, which makes this problem very useful for accuracy analysis. The vortex can be set up with any strength because it is an exact solution of the MHD equations. It is worth pointing out that this test problem is easily extended to three dimensions by having a non-zero value for the z-component of the magnetic field. The simplest extension consists of giving the magnetic field a constant pitch angle with respect to the z-axis.
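One common way to realize the divergence-free initialization mentioned above is to difference the vector potential along zone edges, so that the face-centered magnetic field components satisfy the discrete divergence constraint by construction (Stokes' theorem). A minimal two-dimensional sketch, with our own function names, follows.

```python
import numpy as np

def faces_from_Az(Az, dx, dy):
    """Face-centred Bx, By from a corner-collocated vector potential Az.
    Bx lives on x-faces and By on y-faces; the discrete divergence
    (Bx[i+1,j]-Bx[i,j])/dx + (By[i,j+1]-By[i,j])/dy then vanishes to
    machine precision by construction.
    Az has shape (nx+1, ny+1): values at the zone corners."""
    Bx = (Az[:, 1:] - Az[:, :-1]) / dy      # shape (nx+1, ny): x-faces
    By = -(Az[1:, :] - Az[:-1, :]) / dx     # shape (nx, ny+1): y-faces
    return Bx, By

def max_divergence(Bx, By, dx, dy):
    """Maximum of the discrete divergence over all zones."""
    div = (Bx[1:, :] - Bx[:-1, :]) / dx + (By[:, 1:] - By[:, :-1]) / dy
    return np.abs(div).max()
```

Because the construction telescopes, the divergence is zero for *any* \(A_z\), which is exactly what makes the vector-potential route attractive for initializing this vortex.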

Accuracy analysis of this test problem using RK-WENO and ADER-WENO schemes has been presented in Balsara (2009) and Balsara et al. (2009). Here we present results from Balsara and Käppeli (2017), where the error is measured in just the magnetic field, with the variation in the velocity and pressure suppressed. We show the error as measured in the \(L_{1}\) and \(L_\infty \) norms for several PNPM schemes in Fig. 17. We see that there is a quality gap between the P0P1 scheme and the P1P1 scheme (which is indeed the P=1 DG scheme). Likewise, we see a quality gap between the P0P2 scheme (WENO scheme) and the P2P2 scheme (which is indeed the P=2 DG scheme). However, the P1P2 and P2P2 schemes produce results in Fig. 17 that are virtually indistinguishable! Despite having comparable accuracy, the third order P1P2 scheme was able to take substantially larger timesteps than the third order P2P2 scheme, showing that it offers some advantages.

### RHD and RMHD vortices with ADER-WENO schemes

For classical hydrodynamics and MHD, there are several very nice, non-trivial multidimensional test problems for demonstrating that a numerical method meets its design accuracy. The present RHD and RMHD test problems, first described in Balsara and Kim (2016), are the relativistic analogues of the classical hydrodynamical and MHD vortices. They should prove very useful for accuracy testing of RHD and RMHD codes.

The problem is set up on a periodic domain that spans \(\left[ {-5,5} \right] \times \left[ {-5,5} \right] \). We first describe the velocity and magnetic field in the rest frame of the vortex. For nonrelativistic hydrodynamics or MHD, making the vortex move on the mesh is just a matter of adding a net velocity. For relativistic hydrodynamics and MHD, one has to include the additional complications of relativistic velocity addition and Lorentz transformation. These additional tasks are entirely non-trivial for relativistic flow. For that reason, we initially focus on the description of the vortex in its own rest frame. In a subsequent paragraph we will describe the velocity addition and Lorentz transformation. The velocity of the vortex (before it is made to move relative to the mesh) is given by

For both the hydrodynamical and RMHD test problems we have used \(v_{\max }^\phi =0.7\). Recall that in geometrized units, the speed of light is unity. Notice that the velocity diminishes rapidly far away from the center of the vortex. This rapid drop in the velocity ensures that the boundaries of the domain have a negligible effect on the dynamics of the vortex. The magnetic field of the vortex (before it is made to move relative to the mesh) is given by

For the RMHD test problem, we set \(B_{\max }^\phi =0.7\). Notice that the magnetic field diminishes rapidly far away from the center of the vortex. This rapid drop in magnetic pressure and magnetic tension ensures that the boundaries of the domain have a negligible effect on the dynamics of the vortex. The corresponding magnetic vector potential, which is very useful for setting up a divergence-free vector field, is given by

The pseudo-entropy is defined by \(S = P /\rho ^{\Gamma }\) with \(\Gamma =5/3\) being the ratio of specific heats. The pressure and density of the vortex are also set to unity at the center of the vortex. The vortex is initialized to be isentropic so that \(\delta S=0\); i.e., the entropy is a constant throughout the vortex. Consistent with this velocity field and magnetic field, the steady state equation for the radial momentum of the vortex yields a pressure balance condition. This pressure balance condition for an RMHD vortex is given by

For the hydrodynamical case, the above equation simplifies to become

Here, \(\gamma \) is the Lorentz factor, \(h=1+{\Gamma P_g }/{\left( {\left( {\Gamma -1} \right) \rho } \right) }\) is the specific enthalpy; \(P_g \) is the gas pressure, \(b^{\phi }\) is the covariant magnetic field in the \(\phi \)-direction and \(\Gamma \) is the ratio of specific heats. Depending on the circumstance, one of the above two equations is numerically integrated radially outwards from the center of the vortex. Along with the isentropic condition, this equation fully specifies the run of the density and pressure in the vortex as a function of radius. Figure 18a shows the run of thermal pressure as a function of radius for the vortices used here in the relativistic hydrodynamics and RMHD cases. Notice that a specification of the pressure at all radial points in the vortex also yields the density because of the isentropic condition. Observe that the thermal pressure profile for the magnetized vortex is less steep in Fig. 18a because the magnetic pressure supplements the gas pressure. This completes the description of the vortex in its own rest frame. Because the next steps associated with relativistic velocity addition and Lorentz transformation are non-trivial, we recommend that the run of density and pressure for the vortices should be tabulated on a very fine one-dimensional radial mesh. Typically, this radial mesh should have resolution that is much finer than the two-dimensional mesh on which the problem is computed.
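The radial integration and tabulation recommended above can be sketched generically as follows. The right-hand side `dPdr` is a placeholder of our own for the pressure-balance equation (whichever of the two forms applies); the point is the outward RK4 march on a fine radial mesh followed by interpolation onto the coarser 2D mesh.

```python
import numpy as np

def integrate_radial_profile(dPdr, P0, r_max, n=20000):
    """March dP/dr = dPdr(r, P) outward from the vortex centre with a
    classical RK4 step and tabulate P on a fine radial mesh, as the text
    recommends.  dPdr is a stand-in for the pressure-balance right-hand
    side; the isentropic condition then yields the density from P."""
    r = np.linspace(0.0, r_max, n + 1)
    h = r[1] - r[0]
    P = np.empty_like(r)
    P[0] = P0
    for i in range(n):
        k1 = dPdr(r[i], P[i])
        k2 = dPdr(r[i] + 0.5 * h, P[i] + 0.5 * h * k1)
        k3 = dPdr(r[i] + 0.5 * h, P[i] + 0.5 * h * k2)
        k4 = dPdr(r[i] + h, P[i] + h * k3)
        P[i + 1] = P[i] + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return r, P

def pressure_at(r_tab, P_tab, r):
    """Look up the tabulated profile at the (much coarser) 2D mesh points."""
    return np.interp(r, r_tab, P_tab)
```

With the radial table much finer than the 2D mesh, the interpolation error is negligible compared to the truncation error of the scheme being tested.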

We now describe the process of mapping the vortex to a computational mesh on which it moves with a velocity \(\beta _x \hat{{x}}+\beta _y \hat{{y}}\); in geometrized units, and just for this subsection, \(\beta _x \) and \(\beta _y \) are the relative x- and y-velocities that take us from the rest frame of the vortex to the frame in which the vortex is moving relative to the computational mesh. We use \(\beta _x =\beta _y =0.5\) for our vortex; i.e., our vortex moves on the mesh with a speed that is \(1/{\sqrt{2}}\) times the speed of light. Let us define \(\gamma _\beta \equiv 1/{\sqrt{1-\beta _x^2 -\beta _y^2 }}\) to be the Lorentz factor associated with this velocity. In reality, this mapping of the vortex to a computational mesh is achieved by making the computational mesh move with a velocity \(-\beta _x \hat{{x}}-\beta _y \hat{{y}}\) relative to the rest frame of the vortex. Let the rest frame of the vortex be described by the unprimed spacetime coordinates \(\left( {t,x,y,z} \right) ^{T}\). The coordinates of the computational mesh, therefore, correspond to the primed spacetime coordinates \(\left( {t',x',y',z'} \right) ^{T}\). In practice, we set \(t'=0\) when initializing the computational mesh; realize too that the equations that describe the vortex in its own rest frame are time-independent, i.e., they do not depend on “*t*”. Thus for any chosen coordinate \(\left( {t'=0,x',y',z'=0} \right) ^{T}\) on the computational mesh we can find the corresponding unprimed coordinates via the following Lorentz transformation

The unprimed coordinates refer to the rest frame of the vortex. In the unprimed frame, all the flow variables associated with the vortex have already been specified via the discussion in the previous paragraph. Scalar variables, like density and thermal pressure, are referred to the rest frame of the fluid parcel, i.e., they are proper variables that transform as scalars. Consequently, they transform unchanged as long as the Lorentz transform in the previous equation is properly applied. Three-velocities have to be suitably transformed using the relativistic addition of velocities. The appropriate formulae that give us the velocities in the primed frame from the original velocities in the unprimed frame are given below as:

and

With the relativistic velocity addition formulae described above, we can obtain the velocities at any point on our computational mesh. We refer the reader to the text by Gourgoulhon (2013) for details on Lorentz transformations. Since we use a magnetic vector potential to initialize our magnetic field, we point out that the electric field potential \(\Phi \), and the magnetic vector potential (\(\vec {\hbox {A}})\) together form a four-vector \(\left( {\Phi ,\hbox {A}^{i}} \right) ^{T}\). Being a four-vector, it transforms just like a four-coordinate. We can, therefore, obtain the magnetic vector potential in the primed frame. In the specific instance of the vortex that we describe here, the z-component of the magnetic vector potential is unchanged as we transform from the unprimed frame back to the primed frame. Even in the primed frame, the previously described Lorentz transformation is such that only the z-component of the magnetic vector potential will be non-zero. Likewise, the value of \(\Phi \) is immaterial and set to zero. We see therefore that it is easy to initialize the divergence-free magnetic field for the vortex on the computational mesh. This completes our discussion of the set-up for relativistically boosted hydrodynamical and RMHD vortices on a computational mesh. Because these relativistic vortices are new in the literature, in Fig. 18b we show the density profile of the RMHD vortex on the computational mesh at the initial time. Notice that the boosted vortex shows substantial Lorentz contraction in its density variable. Figure 18c, d show the x-velocity and y-velocity respectively. Notice that the velocity profiles are not symmetrical about the northeast-pointing diagonal of the mesh owing to the relativistic velocity addition formulae.
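The relativistic velocity addition invoked above can be sketched in a few lines. The sketch below, with our own function name, uses the standard decomposition into components parallel and perpendicular to the boost (with \(c=1\)): the parallel part composes as \((v_{\parallel}+u)/(1+\mathbf{u}\cdot \mathbf{v})\) while the perpendicular part picks up an extra factor of \(1/\gamma _u\). It assumes a non-zero boost velocity.

```python
import numpy as np

def boost_velocity(v, u):
    """Relativistic velocity addition (c = 1): a fluid parcel with
    three-velocity v in the vortex rest frame, seen from a frame relative
    to which the vortex moves with (non-zero) velocity u.  The components
    of v parallel and perpendicular to u transform differently."""
    v = np.asarray(v, float)
    u = np.asarray(u, float)
    u2 = u @ u
    gamma_u = 1.0 / np.sqrt(1.0 - u2)
    uhat = u / np.sqrt(u2)
    v_par = (v @ uhat) * uhat               # component along the boost
    v_perp = v - v_par                      # component across the boost
    return (v_par + u + v_perp / gamma_u) / (1.0 + v @ u)
```

A useful sanity check is that the composed speed always stays sub-luminal, which is exactly the property one wants preserved when initializing the boosted vortex on the mesh.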

Figure 19a, b show the errors measured in the \(\hbox {L}_1 \) and \(\hbox {L}_\infty \) norms for the RHD vortex. The error is measured in the density variable, i.e., the proper density times the Lorentz factor. Figure 19c, d show the errors measured in the \(\hbox {L}_1 \) and \(\hbox {L}_\infty \) norms for the RMHD vortex. In this instance, we show the error measured in the x-component of the magnetic field. ADER-WENO schemes at second, third and fourth order were used. We see that the schemes meet their design accuracies.

## Test problems

In this section, we do not focus on one-dimensional test problems. Good libraries of one-dimensional test problems for Euler flow have been provided in Woodward and Colella (1984). For analogous catalogues of one-dimensional Riemann problems for MHD flow, please see Ryu and Jones (1995), Dai and Woodward (1994) and Falle (2001). For a list of one-dimensional Riemann problems for RHD flow, please see Martí and Müller (2003) and also Rezzolla and Zanotti (2001). For an analogous catalogue of RMHD problems, please see Balsara (2001a, b) and Giacomazzo and Rezzolla (2006).

In the rest of this section, we present several stringent multidimensional test problems for Euler, MHD, RHD and RMHD flow that were all done with higher-order schemes.

### Euler flow: the forward-facing step test problem with ADER-DG schemes

This problem was first presented by Woodward and Colella (1984). The problem consists of a two-dimensional wind tunnel that spans a domain of \([0, 3] \times [0, 1]\). A forward-facing step of height 0.2 begins at \(x = 0.6\), so that the corner of the step is located at the coordinates (0.6, 0.2). Inflow boundary conditions are applied at the left boundary, where the gas enters the wind tunnel at Mach 3.0 with a density of 1.4 and a pressure of unity. The right boundary is an outflow boundary. The walls of the wind tunnel and the step are set to be reflective boundaries. The singularity at the corner was treated with the same technique that was suggested by Woodward and Colella (1984); see also Fedkiw et al. (1999). The ratio of specific heats is 1.4 and the simulation was run until a time of 4.0 units.

Figure 20 from Dumbser et al. (2014) shows the density variable from the forward facing step problem using an ADER-DG scheme at sixth order. The result in the upper panel was computed on a \(300 \times 100\) zone mesh and is shown at a time of 4 units. Even though the mesh seems to have only 30,000 zones, a high order DG scheme can capture substantial amounts of sub-structure within each zone. We see that the simulation captures the roll-up of the vortex very clearly. The lower panel shows the zones that were flagged for MOOD limiting in red. We see that only a very small fraction of zones were limited by the MOOD limiting procedure. The CFL number was set to 0.4.

The step induces a forward-facing bow shock, which interacts with the upper wall. The interaction of the bow shock with the upper wall initiates a Mach stem. All the shocks are properly captured on the computing grid and have sharp profiles. The vortex sheet that emanates from the Mach stem is correctly resolved with only a few zones across the sheet. We notice that the vortex sheet shows little or no spreading over the length of the computational domain. This demonstrates the ability of the high order schemes to provide a better resolution for a smaller number of zones.

### Euler flow: double Mach reflection problem with ADER-WENO scheme

This problem was presented by Woodward and Colella (1984). We use the same setup for this test problem as the above authors. A Mach 10 shock hits a reflecting wall which extends from \(x=1/6\) to \(x=4\) at the bottom of the domain. The two-dimensional computational mesh spans \([0, 4] \times [0, 1]\). The angle between the shock and the wall is \(60{^{\circ }}\). At the start of the computation, the position of the shock is given by \((x, y) = (1/6, 0)\). The undisturbed fluid in front of the shock is initialized with a density of 1.4 and a pressure of 1. The exact post-shock condition is used for the bottom boundary from \(x=0\) to \(x=1/6\) to mimic an angled wedge. For the remaining boundary at the bottom of the domain we used a reflective boundary condition. The top boundary condition imposes the exact motion of a Mach 10 shock in the flow variables. The left and right boundaries are set to be inflow and outflow boundaries, respectively.
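The initial state described above can be sketched as follows. The post-shock values follow from the normal-shock (Rankine–Hugoniot) relations for a Mach 10 shock running into \((\rho , P) = (1.4, 1)\) with \(\gamma = 1.4\); the function name and the form of the shock-line test are ours.

```python
import numpy as np

GAMMA = 1.4

def dmr_initial_state(x, y, x0=1.0 / 6.0):
    """Primitive state (rho, P, vx, vy) for the double Mach reflection
    problem.  The shock line passes through (x0, 0) at 60 degrees to the
    x-axis; the post-shock state comes from the normal-shock relations."""
    if y < np.sqrt(3.0) * (x - x0):         # below/right of the shock line
        return 1.4, 1.0, 0.0, 0.0           # undisturbed pre-shock fluid
    M = 10.0
    # Rankine-Hugoniot jump conditions (sound speed ahead is 1):
    rho2 = 1.4 * (GAMMA + 1.0) * M**2 / ((GAMMA - 1.0) * M**2 + 2.0)
    P2 = (2.0 * GAMMA * M**2 - (GAMMA - 1.0)) / (GAMMA + 1.0)
    up = 2.0 / (GAMMA + 1.0) * (M - 1.0 / M)
    # Post-shock velocity points along the downstream shock normal.
    return rho2, P2, up * np.sin(np.pi / 3.0), -up * np.cos(np.pi / 3.0)
```

The jump conditions give the familiar post-shock state \(\rho = 8\), \(P = 116.5\) with a flow speed of 8.25 directed at \(-60{^{\circ }}\) to the vertical.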

Figure 21 shows the density variable at \(t = 0.2\) in the sub-domain given by \([0, 3] \times [0, 1]\). The upper panel shows a simulation with a resolution of \(1920 \times 480\) zones. At the high resolution, the Mach stem displays a roll-up due to the operation of the Kelvin–Helmholtz instability. We used the fourth order ADER-WENO scheme for this simulation. Notice that the fourth order ADER-WENO scheme resolves all the structures that form in this problem. According to Cockburn and Shu (1998), a second order scheme would need at least four times as many zones in each direction to resolve the instability, and such a simulation would need much more CPU time than the fourth order scheme shown in Fig. 21. That demonstrates the efficiency of the higher-order schemes presented here.

### MHD flow: 2D rotor test problem with ADER-WENO scheme

This problem was suggested in Balsara and Spicer (1999a, b) and Balsara (2004). The problem is set up on a two-dimensional unit square. It consists of a dense, rapidly spinning cylinder at the center of an initially stationary, light ambient fluid. The two fluids are threaded by a magnetic field that is initially uniform and has a value of 2.5 units. The pressure is set to 0.5 in both fluids; though it can also be set to unity. The ambient fluid has unit density. The rotor has a constant density of 10 units out to a radius of 0.1. Between a radius of 0.1 and \(0.1+6~\Delta x\) a linear taper is applied to the density so that the density in the cylinder joins smoothly with the density of the ambient fluid. The taper is, therefore, spread out over six computational zones, and it is a good idea to keep that number fixed as the resolution is increased or decreased. The ambient fluid is initially static. The rotor rotates with a uniform angular velocity out to a radius of 0.1, where it has a toroidal velocity of one unit. Between a radius of 0.1 and \(0.1+6~\Delta x\) the rotor’s toroidal velocity drops linearly in radius from one unit to zero, so that at a radius of \(0.1+6~\Delta x\) the velocity blends in with that of the ambient fluid. The ratio of specific heats is taken to be 5/3. The problem is stopped at a time of 0.29.
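The density and velocity taper described above can be sketched as follows; the function name is ours, and only the tapered fields are shown (the uniform magnetic field and pressure need no taper).

```python
import numpy as np

def rotor_state(x, y, dx, v0=1.0, r0=0.1):
    """Density and velocity for the MHD rotor test problem, including the
    linear taper over six zones that joins the rotor to the ambient fluid."""
    r = np.sqrt(x**2 + y**2)
    r1 = r0 + 6.0 * dx                      # outer edge of the taper
    if r <= r0:                             # rigidly rotating rotor
        rho = 10.0
        vphi = v0 * r / r0                  # uniform angular velocity
    elif r < r1:                            # taper region
        f = (r1 - r) / (r1 - r0)            # 1 at r0, falls to 0 at r1
        rho = 1.0 + 9.0 * f
        vphi = v0 * f
    else:                                   # static ambient fluid
        rho, vphi = 1.0, 0.0
    if r > 0.0:
        vx, vy = -vphi * y / r, vphi * x / r
    else:
        vx = vy = 0.0
    return rho, vx, vy
```

Keeping the taper width at six zones as the resolution changes, as the text advises, means `dx` enters the set-up explicitly.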

The RIEMANN framework for computational astrophysics was applied to this problem. Figure 22, which is drawn from Balsara and Nkonga (2017), shows the results from the MHD Rotor test problem. Figure 22a–d show the density, pressure, magnitude of the fluid velocity and magnitude of the magnetic field at the final time. A fourth order ADER-WENO scheme with \(1000 \times 1000\) zone resolution was used.

### MHD flow: 3D extreme blast test problem with ADER-WENO scheme

This test problem is a more extreme extension of a 2D blast test problem from Balsara and Spicer (1999a, b). The present test problem was described in Balsara and Nkonga (2017) and uses a multidimensional Riemann solver described in that same paper. The plasma \(\beta \) measures the ratio of the thermal pressure to the magnetic pressure. As the plasma \(\beta \) becomes smaller, this problem becomes increasingly stringent. The problem consists of a \(\gamma =1.4\) gas with unit density and a pressure of 0.1 initialized on a \(257^{3}\) zone mesh spanning the unit cube. Initially we have \(B_x =B_y =B_z ={150}/{\sqrt{3}}\). The pressure is initially reset to a value of 1000 inside a central region with a radius of 0.1. The plasma \(\beta \) is initially given by \(1.117\times 10^{-4}\). A CFL number of 0.4 was used. The problem is run up to a time of 0.0075, by which time a strong magnetosonic blast wave propagates through the domain. The problem was run with a third order ADER-WENO scheme with the MuSIC Riemann solver applied at the edges of the mesh. (The acronym MuSIC stands for a Riemann solver that is “Multidimensional, Self-similar, strongly-Interacting, Consistent”.) Methods to ensure pressure positivity from Balsara (2012b) were used.
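The quoted plasma \(\beta \) can be checked in a few lines; note that it uses the Gaussian-units magnetic pressure \(B^{2}/8\pi \), a convention assumption worth making explicit.

```python
import math

# Plasma beta for the 3D blast set-up: beta = P_gas / P_mag, with the
# magnetic pressure taken in Gaussian units as P_mag = B^2 / (8*pi).
Bx = By = Bz = 150.0 / math.sqrt(3.0)
B2 = Bx**2 + By**2 + Bz**2          # = 150^2 = 22500
beta = 0.1 / (B2 / (8.0 * math.pi))
# beta comes out near 1.117e-4, matching the value quoted in the text.
```

With the ambient pressure of 0.1 against a magnetic pressure near \(895\), the stringency of the problem for positivity enforcement is evident.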

Figure 23 shows the variables from the 3D blast problem in the \(z = 0\) mid-plane of the computational domain. Figure 23a shows the plot of the density for the mid-plane in the z-direction. Figure 23b shows the same for the pressure in the same plane. Figure 23c, d show the magnitude of the velocity and the magnitude of the magnetic field, again in the same plane. We see that despite this being a very stringent problem, the densities and pressures are positive, as expected.

### MHD flow: decay of finite amplitude torsional Alfvén waves with ADER-WENO scheme

Turbulence studies play an increasingly important role in several fields, such as astrophysics and space physics. The ability to propagate finite amplitude Alfvén waves over large distances and long times on a computational mesh is crucial for carrying out simulations of MHD turbulence. If the Alfvén waves are damped strongly because of inherent numerical dissipation in a code, the code will fail to capture the resulting turbulence. This is because MHD turbulence is mainly sustained by Alfvén waves. The Alfvén wave decay test problem, first presented by Balsara (2004), examines the numerical dissipation of torsional Alfvén waves in two dimensions. In this test problem torsional Alfvén waves propagate at an angle of \(9.462{^{\circ }}\) to the y-axis through a domain given by \([-\,3, 3] \times [-\,3, 3]\). The domain was set up with \(120 \times 120\) zones and has periodic boundary conditions. We do not present further details of the set-up, because the problem is already well-described in the above-mentioned paper. The simulation was stopped at 129 time units, by which time the Alfvén waves had crossed the domain several times. The amplitude of the torsional Alfvén wave will, of course, decay at a rate that depends on the dissipation properties of the scheme: a more dissipative method will damp the wave more strongly than a less dissipative one.

It is often said that the quality of the Riemann solver is not very important, especially when high order schemes are used. But practitioners have not quantified the precise order of accuracy of the scheme at which the quality of the Riemann solver becomes immaterial. We set out to quantify this order of accuracy for MHD simulations. To that end, we simulated the torsional Alfvén wave decay problem with second, third and fourth order schemes with the 1D HLLI Riemann solver along with the 2D MuSIC Riemann solver with sub-structure. Used in this fashion, both the 1D and 2D Riemann solvers are complete; i.e., they fully represent all the waves that arise in the MHD system. We then simulated the same problem again with the same second, third and fourth order schemes. However, this time we used a 1D HLL Riemann solver along with the 2D MuSIC Riemann solver without any sub-structure. In other words, in our second set of simulations both Riemann solvers did not resolve any intermediate waves.

Figure 24a, b show the evolution of the maximum z-velocity and maximum z-component of the magnetic field in the torsional Alfvén wave as a function of time. For the simulations shown in Fig. 24a, b we used the 1D HLLI Riemann solver along with the 2D MuSIC Riemann solver with sub-structure. Figure 24c, d show the same information as Fig. 24a, b, the only difference being that we used the 1D HLL Riemann solver along with the 2D MuSIC Riemann solver without sub-structure. Comparing the two sets of figures, we see that the inferior Riemann solvers produce a six-times larger decay in the amplitude of the Alfvén wave at second order. At third order, the inferior Riemann solvers produce a three-times larger decay in the amplitude of the Alfvén wave. Notice that the second order scheme with superior Riemann solvers is less dissipative than the third order scheme with inferior Riemann solvers! At fourth order, the difference between the inferior and the superior Riemann solvers is almost negligible. We, therefore, conclude that second and third order schemes benefit greatly from the quality of the Riemann solver. It is only at fourth and higher orders of accuracy that the difference between a superior and an inferior Riemann solver becomes quite small! However, please note that a fourth order scheme has a computational complexity that is substantially higher than that of a second or third order scheme. A Riemann solver with sub-structure has a computational complexity that is only marginally higher than one without sub-structure. As a result, it is very advantageous to improve the quality of the Riemann solvers used by schemes at all orders.

### RMHD flow: 2D relativistic rotor test problem with ADER-WENO scheme

The rotor test problem was first presented for classical MHD by Balsara and Spicer (1999a, b) and was adapted to RMHD by Del Zanna et al. (2003) in two dimensions and Mignone et al. (2009) in three dimensions. Balsara and Kim (2016) pointed out that there are nuances in setting up this problem on a mesh: for the mesh to actually represent the high Lorentz factor flows in this problem, the mesh resolution has to be comparably high. The problem is set up on a unit domain in two dimensions which spans \(\left[ {-0.5,0.5} \right] \times \left[ {-0.5,0.5} \right] \). A unit x-magnetic field and a unit thermal pressure are set up over the entire domain. The density is unity everywhere except within a radius of 0.1, where it is ten times larger. The high density region is set into rapid rotation with a velocity given by \({\vec {\hbox {v}}}\left( {x,y} \right) =-w~y~\hat{{x}}+w~x~\hat{{y}}\), thus forming a rotor. The parameter “*w*” controls the rotation speed. Because very small changes in “*w*” can result in very large changes in the Lorentz factor, problems arise when one tries to set up this problem on a computational mesh. The high Lorentz factor flows are confined to a very thin ring at the outer boundary of the rotor. We used \(w=9.9944\), which corresponds to a maximal Lorentz factor of 30 and requires a mesh with at least \(3500\times 3500\) zones. This value of “*w*” ensures that the outer boundary of the rotor moves with a speed that is very close to unity in geometrized units.
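As a concrete illustration of the set-up described above, the following sketch initializes the rotor on a uniform, cell-centred mesh and evaluates the Lorentz factor so that subluminality can be checked. The function name and the (much reduced) default resolution are ours; a production run would need the \(\ge 3500 \times 3500\) zones mentioned above for the thin high Lorentz factor ring to be captured.

```python
import numpy as np

def init_rotor(n=1000, w=9.9944):
    """Initial conditions for the 2D RMHD rotor on [-0.5, 0.5]^2.

    Unit density, pressure and x-magnetic field everywhere; a ten-times
    denser disk of radius 0.1 in rigid rotation with angular speed w.
    """
    # Cell-centred coordinates on an n x n mesh.
    x = np.linspace(-0.5, 0.5, n, endpoint=False) + 0.5 / n
    X, Y = np.meshgrid(x, x, indexing="ij")
    r = np.sqrt(X**2 + Y**2)

    inside = r < 0.1                      # the rotor
    rho = np.where(inside, 10.0, 1.0)     # dense rotor, unit ambient density
    P = np.ones_like(r)                   # unit thermal pressure
    Bx = np.ones_like(r)                  # unit x-magnetic field
    vx = np.where(inside, -w * Y, 0.0)    # v = -w y x^ + w x y^ inside
    vy = np.where(inside,  w * X, 0.0)

    v2 = vx**2 + vy**2
    lorentz = 1.0 / np.sqrt(1.0 - v2)     # flow must remain subluminal
    return rho, P, Bx, vx, vy, lorentz
```

Because the rotation is confined to \(r < 0.1\) and \(w \times 0.1 < 1\), the flow is subluminal by construction; at coarse resolution, however, the maximal Lorentz factor actually represented on the mesh falls well short of 30, which is precisely the resolution nuance noted by Balsara and Kim (2016).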

We used a mesh with \(4700 \times 4700\) zones for this simulation. Figure 25a through d show the density, gas pressure, Lorentz factor and magnetic field magnitude at a final time of 0.4. Despite the very large initial Lorentz factor, we see that all the flow variables are well-represented. The large Lorentz factor produces a substantial outward expansion in the density owing to the strong centrifugal effect in the fast-rotating flow. The magnetic field in Fig. 25d is strongly compressed due to the high Lorentz factor. The simulation in Fig. 25 was run with a CFL of 0.4 using a third order accurate ADER-WENO scheme along with the MuSIC Riemann solver.

### RMHD flow: 2D relativistic Orszag–Tang test problem with ADER-WENO scheme

The Orszag–Tang test problem (Orszag and Tang 1979) is designed to illustrate the transition to turbulence in MHD flows. The RMHD variant of this test problem was proposed by Beckwith and Stone (2011); we do not repeat the set-up here. The problem was set up on a unit square with \(1000~\times 1000\) zones and run to a final time of 0.8. It was run with a fourth order ADER-WENO scheme with the MuSIC Riemann solver applied at the edges of the mesh. Figure 26a–d show the density, pressure, magnitude of the velocity and magnitude of the magnetic field at the final time for the relativistic Orszag–Tang problem. All the requisite RMHD flow features are captured nicely in our simulations.

### RMHD flow: long-term decay of relativistic Alfvén waves with ADER-WENO scheme

Turbulence in non-relativistic and relativistic plasmas is currently one of the hot topics in astrophysics. We know that turbulence in magnetized plasmas is Alfvénic; i.e., the propagation and interaction of Alfvén waves give rise to turbulence. For RMHD turbulence to be represented correctly, we need to ensure that isolated, torsional Alfvén waves can propagate with minimal numerical dissipation on a computational mesh. The RMHD wave families can propagate at \(45^{\circ }\) to the mesh lines with minimal dissipation. It is much more difficult to achieve good propagation of waves that are required to propagate at a small angle to one of the mesh lines (Balsara 2004).

We construct an RMHD version of a test problem that examines the dissipation of torsional Alfvén waves when they propagate at a small angle to the mesh; see Balsara (2004) for the non-relativistic version of this test. We use a uniform \(120\times 120\) zone mesh that spans \(\left[ {-3,3} \right] \times \left[ {-3,3} \right] \) in the xy-plane. A uniform density, \(\rho _0 =1\), and pressure, \(P_0 =1\), are initialized on the mesh. The unperturbed velocity is \(v_0 =0\), and the unperturbed magnetic field is \(\hbox {B}_0 =0.5\). A constant specific heat ratio of \(\Gamma =4/3\) is used in this simulation. The amplitude of the Alfvén wave fluctuation (\(B_1\)) can be parameterized in terms of the velocity fluctuation, which has a value of 0.1 in this problem. The Alfvén wave is designed to propagate along the wave vector, \(\mathbf{k}=k_x \hat{{x}}+k_y \hat{{y}}\), where \(k_x =1/6\), \(k_y =1\). The velocity and magnetic field are given as follows:

Here, the unit vector, \(\mathbf{n}=n_x \hat{{x}}+n_y \hat{{y}}={(k_x \hat{{x}}+k_y \hat{{y}})}/{\sqrt{k_x^2 +k_y^2 }}\), the phase of the wave at initial time, \(\phi =2\pi (k_x x+k_y y)\), and the perturbation amplitude of the magnetic field is given by \(B_1 =v_1 \sqrt{\rho _0 +\frac{\Gamma }{\Gamma -1}P_0 +B_0^2 }\). The corresponding vector potential for the magnetic field is given by

The entire simulation is run to a time of \(t=130\) by which time the Alfvén waves have crossed the computational domain five times. A CFL of 0.4 is used.
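The amplitude relation quoted above is easy to evaluate directly. The sketch below (the function name is ours) computes \(B_1\) from \(v_1\) using the stated parameters \(\rho_0 = 1\), \(P_0 = 1\), \(B_0 = 0.5\) and \(\Gamma = 4/3\).

```python
import math

def alfven_b1(v1=0.1, rho0=1.0, P0=1.0, B0=0.5, Gamma=4.0 / 3.0):
    # B1 = v1 * sqrt(rho0 + Gamma/(Gamma-1) * P0 + B0^2), the
    # relativistic enthalpy-weighted amplitude relation quoted above.
    return v1 * math.sqrt(rho0 + Gamma / (Gamma - 1.0) * P0 + B0**2)
```

With the parameters of this test, \(\Gamma/(\Gamma-1) = 4\), so \(B_1 = 0.1\sqrt{1 + 4 + 0.25} = 0.1\sqrt{5.25} \approx 0.229\).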

Figure 27a, b show the evolution of the maximum z-velocity and maximum z-component of the magnetic field in the relativistic torsional Alfvén wave as a function of time. For the simulations shown in Fig. 27a, b we used the 1D HLLI Riemann solver along with the 2D MuSIC Riemann solver with sub-structure. Figure 27c, d show the same information as Fig. 27a, b, the only difference being that we used the 1D HLL Riemann solver along with the 2D MuSIC Riemann solver without sub-structure. Comparing the two sets of figures, we see that the inferior Riemann solvers again show substantially larger dissipation at second and third orders. It is only at fourth order that we find a much-reduced difference between a Riemann solver with sub-structure and a Riemann solver that does not resolve any intermediate waves. As before, notice that the second order scheme with superior Riemann solvers is less dissipative than the third order scheme with inferior Riemann solvers! We, therefore, conclude that a Riemann solver that resolves intermediate waves is very important for reducing dissipation in second and third order schemes. At fourth and higher orders, that importance is diminished. The incremental costs of including sub-structure in a Riemann solver are only slight, making it advantageous to improve the quality of all schemes at all orders.

## Conclusions

There is a great need for precision in computational astrophysics. The broader computational physics community has produced some very good methods for the PDE systems that are of interest in astrophysics, cosmology and numerical relativity. This review seeks to bring the astrophysics community and that larger community closer together, showing that great strides can be made through the inter-diffusion of knowledge.

At second order, we have seen the value of TVD reconstruction. PPM schemes incorporate many aspects of TVD reconstruction while aiming for higher orders in the reconstructing polynomials. WENO schemes give us a method for carrying out reconstruction at successively higher orders. It is important to realize, though, that order of accuracy is not all-important. The ability to maintain other physical principles, such as positivity of density and pressure, also plays an important role in the design of numerical schemes. It is also valuable to realize that reconstructing all the moments is not the only pathway to higher order. RKDG, HWENO and PNPM schemes offer us methods for retaining higher moments and evolving them in time. (An HWENO scheme is basically a P1PM scheme, e.g., Balsara et al. 2007.) For problems with relatively smooth flows over the entire computational domain, such methods can provide a significant advantage over schemes that resort to a complete reconstruction of all the moments at each and every timestep.

Higher-order spatial reconstruction should indeed be matched with higher-order time evolution. Such a balance between spatial and temporal accuracy is most desirable, since diminished time accuracy certainly results in a decreased overall accuracy of the numerical scheme. We have displayed two competing methodologies for time accurate simulation: Runge–Kutta timestepping and ADER timestepping. The former has the advantage of simplicity in programming, even if it requires extra stages at orders beyond third. Reasonably simple formulations of ADER timestepping have also become commonplace and they do offer the advantage of increased code speed.

The methods presented here are all based on finite volume formulations. If the computational emphasis is on uniform, structured mesh simulations, finite difference formulations may well offer a speed advantage. However, the finite volume formulations presented in this review are more versatile. They take well to complex geometries and extend seamlessly to unstructured meshes. ALE meshes, where the boundaries of the mesh can move, are also treated successfully by these methods. They can be used as base-level algorithms for adaptive mesh refinement calculations. They are quite fast and parallelize well. There is a rich literature and a wealth of practical experience associated with these methods. Their pitfalls, when they exist, are well-documented in the literature along with possible remedies. This makes them reliable workhorses for practical computation. The examples provided in this review have illustrated their excellent performance on a range of interesting problems.

## References

Abgrall R (1994a) Approximation du problème de Riemann vraiment multidimensionnel des èquations d’Euler par une methode de type Roe, I: La linèarisation. C R Acad Sci Ser I 319:499

Abgrall R (1994b) Approximation du problème de Riemann vraiment multidimensionnel des èquations d’Euler par une methode de type Roe, II: Solution du probleme de Riemann approchè. C R Acad Sci Ser I 319:625

Aloy MA, Ibáñez JM, Martí JM, Müller E (1999) GENESIS: a high-resolution code for three-dimensional relativistic hydrodynamics. Astrophys J Suppl Ser 122:151–166

Anile AM (1989) Relativistic fluids and magnetofluids. Cambridge University Press, Cambridge

Anton L, Miralles JA, Martí JM, Ibáñez JM, Aloy MA, Mimica P (2010) A full wave decomposition Riemann solver in RMHD. Astrophys J Suppl 187:1

Atkins H, Shu CW (1998) Quadrature-free implementation of the discontinuous Galerkin method for hyperbolic equations. AIAA J 36:775–782

Balsara DS (1994) Riemann solver for relativistic hydrodynamics. J Comput Phys 114:284–297

Balsara DS (1998a) Linearized formulation of the Riemann problem for adiabatic and isothermal magnetohydrodynamics. Astrophys J Suppl 116:119

Balsara DS (1998b) Total variation diminishing algorithm for adiabatic and isothermal magnetohydrodynamics. Astrophys J Suppl 116:133–153

Balsara DS (2001a) Divergence-free adaptive mesh refinement for magnetohydrodynamics. J Comput Phys 174:614–648

Balsara DS (2001b) Total variation diminishing scheme for relativistic magnetohydrodynamics. Astrophys J Suppl 132:83–101

Balsara DS (2004) Second-order-accurate schemes for magnetohydrodynamics with divergence-free reconstruction. Astrophys J Suppl 151:149–184

Balsara DS (2009) Divergence-free reconstruction of magnetic fields and WENO schemes for magnetohydrodynamics. J Comput Phys 228:5040–5056

Balsara DS (2010) Multidimensional extension of the HLL Riemann solver; application to Euler and magnetohydrodynamical flows. J Comput Phys 229:1970–1993

Balsara DS (2012a) A two-dimensional HLLC Riemann solver with applications to Euler and MHD flows. J Comput Phys 231:7476–7503

Balsara DS (2012b) Self-adjusting, positivity preserving high order schemes for hydrodynamics and magnetohydrodynamics. J Comput Phys 231:7504–7517

Balsara DS (2014) Multidimensional Riemann problem with self-similar internal structure—part I—application to hyperbolic conservation laws on structured meshes. J Comput Phys 277:163–200

Balsara DS (2015) Three dimensional HLL Riemann solver for structured meshes; application to Euler and MHD flow. J Comput Phys 295:1–23

Balsara DS, Dumbser M (2015a) Multidimensional Riemann problem with self-similar internal structure—part II—application to hyperbolic conservation laws on unstructured meshes. J Comput Phys 287:269–292

Balsara DS, Dumbser M (2015b) Divergence-free MHD on unstructured meshes using high order finite volume schemes based on multidimensional Riemann solvers. J Comput Phys 299:687–715

Balsara DS, Käppeli R (2017) von Neumann stability analysis of globally divergence-free RKDG and PNPM schemes for the induction equation using multidimensional Riemann solvers. J Comput Phys 336:104–127

Balsara DS, Kim J (2016) A subluminal relativistic magnetohydrodynamics scheme with ADER-WENO predictor and multidimensional Riemann solver-based corrector. J Comput Phys 312:357–384

Balsara DS, Nkonga B (2017) Multidimensional Riemann problem with self-similar internal structure—part III—a multidimensional analogue of the HLLI Riemann solver for conservative hyperbolic systems. J Comput Phys 346:25–48

Balsara DS, Shu C-W (2000) Monotonicity preserving weighted non-oscillatory schemes with increasingly high order of accuracy. J Comput Phys 160:405–452

Balsara DS, Spicer DS (1999a) A staggered mesh algorithm using high order Godunov fluxes to ensure solenoidal magnetic fields in magnetohydrodynamic simulations. J Comput Phys 149:270–292

Balsara DS, Spicer DS (1999b) Maintaining pressure positivity in magnetohydrodynamic simulations. J Comput Phys 148:133–148

Balsara DS, Altmann C, Munz C-D, Dumbser M (2007) A sub-cell based indicator for troubled zones in RKDG schemes and a novel class of hybrid RKDG + HWENO schemes. J Comput Phys 226:586–620

Balsara DS, Rumpf T, Dumbser M, Munz C-D (2009) Efficient, high-accuracy ADER-WENO schemes for hydrodynamics and divergence-free magnetohydrodynamics. J Comput Phys 228:2480

Balsara DS, Dumbser M, Meyer C, Du H, Xu Z (2013) Efficient implementation of ADER schemes for Euler and magnetohydrodynamic flow on structured meshes—comparison with Runge–Kutta methods. J Comput Phys 235:934–969

Balsara DS, Dumbser M, Abgrall R (2014) Multidimensional HLL and HLLC Riemann solvers for unstructured meshes—with application to Euler and MHD flows. J Comput Phys 261:172–208

Balsara DS, Vides J, Gurski K, Nkonga B, Dumbser M, Garain S, Audit E (2016a) A two-dimensional Riemann solver with self-similar sub-structure—alternative formulation based on least squares projection. J Comput Phys 304:138–161

Balsara DS, Amano T, Garain S, Kim J (2016b) High order accuracy divergence-free scheme for the electrodynamics of relativistic plasmas with multidimensional Riemann solvers. J Comput Phys 318:169–200

Balsara DS, Garain S, Shu C-W (2016c) An efficient class of WENO schemes with adaptive order. J Comput Phys 326:780–804

Barth TJ (1995) Aspects of unstructured grids and finite-volume solvers for Euler and Navier–Stokes equations, VKI/NASA/AGARD special course on unstructured grid methods for advection dominated flows, AGARD Publ. R-787 (Von Karman Institute for Fluid Dynamics, Belgium)

Barth TJ, Frederickson PO (1990) Higher-order solution of the Euler equations on unstructured grids using quadratic reconstruction, AIAA Paper no. 90-0013, 28th Aerospace Sciences Meeting

Bassi F, Rebay S (1997) High-order accurate discontinuous finite element solution of the 2D Euler equations. J Comput Phys 138:267–279

Batten P, Clarke N, Lambert C, Causon DM (1997) On the choice of wavespeeds for the HLLC Riemann solver. SIAM J Sci Comput 18:1553–1570

Beckwith K, Stone JM (2011) A second-order Godunov method for multi-dimensional relativistic magnetohydrodynamics. Astrophys J Suppl Ser 193:6

Ben-Artzi M (1989) The generalized Riemann problem for reactive flows. J Comput Phys 81:70–101

Ben-Artzi M, Birman A (1990) Computation of reactive duct flows in external fields. J Comput Phys 86:225–255

Ben-Artzi M, Falcovitz J (1984) A second-order Godunov-type scheme for compressible fluid dynamics. J Comput Phys 55:1–32

Ben-Artzi M, Falcovitz J (2003) Generalized Riemann problems in computational fluid dynamics. Cambridge University Press, Cambridge

Berger M, Colella P (1989) Local adaptive mesh refinement for shock hydrodynamics. J Comput Phys 82:64–84

Biswas R, Devine RK, Flaherty J (1994) Parallel, adaptive finite element methods for conservation laws. Appl Numer Math 14:255–283

Borges R, Carmona M, Costa B, Don WS (2008) An improved weighted essentially non-oscillatory scheme for hyperbolic conservation laws. J Comput Phys 227(6):3101–3211

Boris JP, Book DL (1976) Flux corrected transport III: minimal-error FCT algorithms. J Comput Phys 20:397–431

Boscheri W, Dumbser M (2014) A direct Arbitrary–Lagrangian–Eulerian ADER-WENO finite volume scheme on unstructured tetrahedral meshes for conservative and non-conservative hyperbolic systems in 3D. J Comput Phys 275:484–523

Boscheri W, Dumbser M (2017) Arbitrary–Lagrangian–Eulerian discontinuous Galerkin schemes with a posteriori sub-cell finite volume limiting on moving unstructured meshes. J Comput Phys 348:449–479

Boscheri W, Balsara DS, Dumbser M (2014a) Lagrangian ADER-WENO finite volume schemes on unstructured triangular meshes based on genuinely multidimensional HLL Riemann solvers. J Comput Phys 267:112–138

Boscheri W, Dumbser M, Balsara DS (2014b) High order Lagrangian ADER-WENO schemes on unstructured meshes—application of several node solvers to hydrodynamics and magnetohydrodynamics. Int J Numer Methods Fluids 76(10):737–778

Bourgeade A, LeFloch Ph, Raviart P (1989) An asymptotic expansion for the solution of the generalized Riemann problem. II. Application to the equations of gas dynamics. Ann Inst H Poincaré Anal Non Linéaire 6:437–480

Brackbill JU (1985) Fluid modeling of magnetized plasmas. Space Sci Rev 42:153

Brackbill JU, Barnes DC (1980) The effect of nonzero \(\nabla \cdot \mathbf{B}\) on the numerical solution of the magnetohydrodynamic equations. J Comput Phys 35:426–430

Brecht SH, Lyon JG, Fedder JA, Hain K (1981) A simulation study of East–West IMF effects on the magnetosphere. Geophys Res Lett 8:397

Brio M, Wu CC (1988) An upwind differencing scheme for the equations of ideal magnetohydrodynamics. J Comput Phys 75:400

Buchmuller P, Helzel C (2014) Improved accuracy of high-order WENO finite volume methods on Cartesian grids. J Sci Comput 61:343–368

Buchmuller P, Dreher J, Helzel C (2016) Finite volume WENO methods for hyperbolic conservation laws on Cartesian grids with adaptive mesh refinement. Appl Math Comput 272:460–478

Burbeau A, Sagaut P, Bruneau ChH (2001) A problem-independent limiter for high-order Runge–Kutta discontinuous Galerkin methods. J Comput Phys 169:111–150

Canuto C, Hussaini MY, Quarteroni A, Zang TA (2007) Spectral methods: evolution to complex geometries and applications to fluid dynamics (scientific computation). Springer, Berlin

Cargo P, Gallice G (1997) Roe matrices for ideal MHD and systematic construction of Roe matrices for systems of conservation laws. J Comput Phys 136:446

Castro M, Costa B, Don WS (2011) High order weighted essentially non-oscillatory WENO-Z schemes for hyperbolic conservation laws. J Comput Phys 230:1766–1792

Chandrashekar P, Klingenberg C (2016) Entropy stable finite volume scheme for ideal compressible MHD on 2D Cartesian meshes. SIAM J Numer Anal 54(2):1313–1340

Chandrashekar P (2013) Kinetic energy preserving and entropy stable finite volume schemes for compressible Euler and Navier–Stokes equations. Commun Comput Phys 14:1252–1286

Clain S, Diot S, Loubère R (2011) A high-order finite volume method for systems of conservation laws-Multi-dimensional Optimal Order Detection (MOOD). J Comput Phys 230:4028–4050

Cockburn B, Shu C-W (1989) TVB Runge–Kutta local projection discontinuous Galerkin finite element method for conservation laws II: general framework. Math Comput 52:411–435

Cockburn B, Shu C-W (1998) The Runge–Kutta discontinuous Galerkin method for conservation laws V: multidimensional systems. J Comput Phys 141:199–224

Cockburn B, Lin SY, Shu C-W (1989) TVB Runge–Kutta local projection discontinuous Galerkin finite element method for conservation laws III: one-dimensional systems. J Comput Phys 84:90–113

Cockburn B, Hou S, Shu C-W (1990) TVB Runge–Kutta local projection discontinuous Galerkin finite element method for conservation laws IV: the multidimensional case. J Comput Phys 54:545–581

Cockburn B, Karniadakis G, Shu C-W (2000) The development of discontinuous Galerkin methods, In: Cockburn B, Karniadakis G, Shu C-W (eds) Discontinuous Galerkin methods: theory, computation and applications, lecture notes in computational science and engineering, vol 11, Springer, Berlin, Part I: overview, pp 3–50

Colella P (1985) A direct Eulerian MUSCL scheme for gas dynamics. SIAM J Sci Stat Comput 6:104–117

Colella P (1990) Multidimensional Upwind methods for hyperbolic conservation laws. J Comput Phys 87:171

Colella P, Sekora MD (2008) A limiter for PPM that preserves accuracy at smooth extrema. J Comput Phys 227:7069

Colella P, Woodward P (1984) The piecewise parabolic method (PPM) for gas-dynamical simulations. J Comput Phys 54:174–201

Cravero I, Semplice M (2016) On the accuracy of WENO and CWENO reconstructions of third order on nonuniform meshes. J Sci Comput 67:1219–1246

Crockett RK, Colella P, Fisher RT, Klein RI, McKee CF (2005) An unsplit, cell-centered Godunov method for ideal MHD. J Comput Phys 203:422

Dai W, Woodward PR (1994) Extension of the piecewise parabolic method to multidimensional ideal magnetohydrodynamics. J Comput Phys 115:485–514

Dai W, Woodward PR (1998) On the divergence-free condition and conservation laws in numerical simulations for supersonic magnetohydrodynamic flows. Astrophys J 494:317–335

Daubechies I (1992) Ten lectures on wavelets. SIAM, Philadelphia

Del Zanna L, Velli M, Londrillo P (2001) Parametric decay of circularly polarized Alfvén waves: multidimensional simulations in periodic and open domains. Astron Astrophys 367:705–718

Del Zanna L, Bucciantini N, Londrillo P (2003) An efficient shock-capturing central-type scheme for multidimensional relativistic flows. Astron Astrophys 400:397–413

Del Zanna L, Zanotti O, Bucciantini N, Londrillo P (2007) ECHO: a Eulerian conservative high-order scheme for general relativistic magnetohydrodynamics and magnetodynamics. Astron Astrophys 473:11

Deng X, Zhang H (2005) Developing high order weighted compact nonlinear schemes. J Comput Phys 203:22–44

Derigs D, Winters AR, Gassner GJ, Walch S (2017) A novel averaging technique for discrete entropy-stable dissipation operators for ideal MHD. J Comput Phys 330:624

DeVore CR (1991) Flux-corrected transport techniques for multidimensional compressible magnetohydrodynamics. J Comput Phys 92:142–160

Diot S, Clain S, Loubère R (2012) Improved detection criteria for the Multi-dimensional Optimal Order Detection (MOOD) on unstructured meshes with very high-order polynomials. Comput Fluids 64:43–63

Diot S, Loubère R, Clain S (2013) The Multidimensional Optimal Order Detection method in the three-dimensional case: very high-order finite volume method for hyperbolic systems. Int J Numer Methods Fluids 73:362–392

Dubiner M (1991) Spectral methods for triangles and other domains. J Sci Comput 6:345–390

Dumbser M, Balsara DS (2016) A new, efficient formulation of the HLLEM Riemann solver for general conservative and non-conservative hyperbolic systems. J Comput Phys 304:275–319

Dumbser M, Käser M (2007) Arbitrary high order non-oscillatory finite volume schemes on unstructured meshes for linear hyperbolic systems. J Comput Phys 221:693–723

Dumbser M, Zanotti O (2009) Very high order \({\rm P}_{N}{\rm P}_{M}\) schemes on unstructured meshes for the resistive relativistic MHD equations. J Comput Phys 228:6991–7006

Dumbser M, Käser M, Titarev VA, Toro EF (2007) Quadrature-free non-oscillatory finite volume schemes on unstructured meshes for nonlinear hyperbolic systems. J Comput Phys 226:204–243

Dumbser M, Balsara DS, Toro EF, Munz C-D (2008) A unified framework for the construction of one-step finite volume and discontinuous Galerkin schemes on unstructured meshes. J Comput Phys 227:8209–8253

Dumbser M, Zanotti O, Hidalgo A, Balsara DS (2013) ADER-WENO finite volume schemes with space-time adaptive mesh refinement. J Comput Phys 248:257–286

Dumbser M, Zanotti O, Loubère R, Diot S (2014) A posteriori subcell limiting of the discontinuous Galerkin finite element method for hyperbolic conservation laws. J Comput Phys 278:47–75

Einfeldt B (1988) On Godunov-type methods for gas dynamics. SIAM J Numer Anal 25(3):294–318

Einfeldt B, Munz C-D, Roe PL, Sjogreen B (1991) On Godunov-type methods near low densities. J Comput Phys 92:273–295

Eulderink F (1993) Numerical relativistic hydrodynamics. PhD Thesis, Rijksuniversiteit te Leiden, Leiden, Netherlands

Eulderink F, Mellema G (1995) General relativistic hydrodynamics with a Roe solver. Astron Astrophys Suppl 110:587–623

Evans CR, Hawley JF (1989) Simulation of magnetohydrodynamic flows: a constrained transport method. Astrophys J 332:659

Falle SAEG (2001) On the inadmissibility of non-evolutionary shocks. J Plasma Phys 65:29–58

Falle SAEG, Komissarov SS (1996) An upwind numerical scheme for relativistic hydrodynamics with a general equation of state. Mon Not R Astron Soc 278:586–602

Falle SAEG, Komissarov SS, Joarder P (1998) A multidimensional upwind scheme for magnetohydrodynamics. Mon Not R Astron Soc 297:265–277

Fan P (2014) Higher-order weighted essentially nonoscillatory WENO-\(\eta \) schemes for hyperbolic conservation laws. J Comput Phys 269:355–385

Fan P, Shen Y, Tian B, Yang C (2014) A new smoothness indicator for improving the weighted essentially nonoscillatory scheme. J Comput Phys 269:329–354

Fedkiw RP, Marquina A, Merriman B (1999) An isobaric fix for the overheating problem in multimaterial compressible flows. J Comput Phys 148:545–578

Florinski V, Guo X, Balsara DS, Meyer C (2013) MHD modeling of solar system processes on geodesic grids. Astrophys J Suppl 205:19

Friedrichs O (1998) Weighted essentially non-oscillatory schemes for the interpolation of mean values on unstructured grids. J Comput Phys 144:194–212

Font JA (2003) Numerical hydrodynamics in general relativity. Living Rev Relativ 6:4

Font JA, Ibáñez JM, Martí JM, Marquina A (1994) Multidimensional relativistic hydrodynamics: characteristic fields and modern high-resolution shock-capturing schemes. Astron Astrophys 282:304–314

Fuchs FG, McMurry AD, Mishra S, Waagan K (2011) Simulating waves in the upper solar atmosphere with SURYA: a well-balanced high-order finite-volume code. Astrophys J 732:75

Gammie CF, McKinney JC, Tóth G (2003) HARM: a numerical scheme for general relativistic magnetohydrodynamics. Astrophys J 589:444

Garain S, Balsara DS, Reid J (2015) Comparing Coarray Fortran (CAF) with MPI for several structured mesh PDE applications. J Comput Phys 297:237–253

Gardiner T, Stone JM (2005) An unsplit Godunov method for ideal MHD via constrained transport. J Comput Phys 205:509

Gardiner T, Stone JM (2008) An unsplit Godunov method for ideal MHD via constrained transport in three dimensions. J Comput Phys 227:4123

Gerolymos GA, Sénéchal D, Vallet I (2009) Very high order WENO schemes. J Comput Phys 228:8481–8524

Giacomazzo B, Rezzolla L (2006) The exact solution of the Riemann problem in relativistic magnetohydrodynamics. J Fluid Mech 562:223–259

Giacomazzo B, Rezzolla L (2007) WhiskyMHD: a new numerical code for general relativistic magnetohydrodynamics. Class Quantum Gravity 24(12):S235–S258

Godunov SK (1959) A difference method for the numerical calculation of discontinuous solutions of the equations of hydrodynamics. Mat Sb 47:271–306

Goetz CR, Dumbser M (2016) A novel solver for the generalized Riemann problem based on a simplified LeFloch–Raviart expansion and a local space-time discontinuous Galerkin formulation. J Sci Comput 69:805–840

Goetz CR, Iske A (2016) Approximate solutions of generalized Riemann problems for nonlinear systems of hyperbolic conservation laws. Math Comp 85:35–62

Goetz CR, Balsara DS, Dumbser M (2017) A family of HLL-type solvers for the generalized Riemann problem. Comput Fluids. https://doi.org/10.1016/j.compfluid.2017.10.028

Goldstein ML (1978) An instability of finite amplitude circularly polarized Alfvén waves. Astrophys J 219:700

Gottlieb S (2005) On higher-order strong stability preserving Runge–Kutta and multistep time discretizations. J Sci Comput 25(1/2):105

Gottlieb D, Orszag S (1977) Numerical analysis of spectral methods: theory and applications, CBMS-NSF regional conference series in applied mathematics, 26. SIAM, Philadelphia

Gottlieb S, Shu C-W (1998) Total-variation-diminishing Runge–Kutta schemes. Math Comput 67:73–85

Gottlieb S, Shu C-W, Tadmor E (2001) Strong stability-preserving higher-order time discretization methods. SIAM Rev 43(1):89–112

Gottlieb S, Ketcheson D, Shu C-W (2011) Strong stability preserving Runge–Kutta and multistep time discretizations. World Scientific, Singapore

Gourgoulhon E (2013) Special relativity in general frames: from particles to astrophysics. Springer, Berlin

Gurski KF (2004) An HLLC-type approximate Riemann solver for ideal magnetohydrodynamics. SIAM J Sci Comput 25:2165

Harten A (1977) The artificial compression method for computation of shocks and contact discontinuities. I. Single conservation laws. Commun Pure Appl Math 30:611

Harten A (1983) High resolution schemes for conservation laws. J Comput Phys 49:357–393

Harten A (1989) ENO schemes with subcell resolution. J Comput Phys 83:148

Harten A, Lax PD, van Leer B (1983) On upstream differencing and Godunov-type schemes for hyperbolic conservation laws. SIAM Rev 25:289–315

Harten A, Engquist B, Osher S, Chakravarthy S (1987) Uniformly high order essentially non-oscillatory schemes III. J Comput Phys 71:231–303