Multigrid Schemes for High Order Discretizations of Hyperbolic Problems

Total variation diminishing multigrid methods have been developed for first order accurate discretizations of hyperbolic conservation laws. This technique is based on a so-called upwind biased residual interpolation and allows for algorithms devoid of spurious numerical oscillations in the transient phase. In this paper, we justify the introduction of such prolongation and restriction operators by rewriting the algorithm in a matrix-vector notation. This perspective sheds new light on multigrid procedures for hyperbolic problems and provides a direct extension for high order accurate difference approximations. The new multigrid procedure is presented, advantages and disadvantages are discussed and numerical experiments are performed.

These procedures allow for 2^L times faster wave propagation, where L is the number of grid levels, by ensuring that the Total Variation Diminishing (TVD) property is preserved. In the Total Variation Diminishing Multi-Grid scheme (TVD-MG), the residual restriction is performed with an upwind-biased interpolation rather than the classical linear one, which also guarantees convergence of the iterative procedure. However, a significant drawback of the TVD-MG is its intrinsic need for first order accurate space discretizations.
In this article, we will replicate and extend the previously mentioned low order TVD-MG method to upwind Summation-By-Parts (SBP) based high-order finite difference methods [6,7] for linear hyperbolic problems. The crucial role played by the upwind biased interpolation in the TVD-MG will be investigated from a matrix-vector perspective, leading to a direct extension of the residual restriction to higher order approximations. In our approach, the wave propagation for high-order discretizations on a fine grid is complemented with the TVD-MG for first order schemes on coarse grids. New restriction operators that are needed to couple the high order discretization on the fine grid with the first order scheme on the second finest grid will be presented. We will also show how the new residual restriction operators developed for scalar advection problems can be used to accelerate the wave propagation for linear hyperbolic systems and nonlinear problems.
The article is structured as follows: in Sect. 2 we introduce a new first order upwind SBP discretization for one-dimensional scalar advection problems. A multigrid technique for such approximations, inspired by the TVD-MG, is presented in Sect. 3. In this section we also revisit the upwind-biased residual restriction needed for convergence acceleration from a matrix-vector point of view. The extension to higher order discretizations is given in Sect. 4. Several modifications needed to implement the multigrid algorithm for hyperbolic systems such as the linear compressible Euler equations are discussed in Sect. 5. In Sect. 6, we present an extension of the multigrid procedure to nonlinear problems. Numerical results for two-dimensional problems are shown in Sect. 7. Finally, conclusions are drawn in Sect. 8.

Upwind Discretization of the Advection Equation
Consider the linear wave propagation problem

u_t + u_x = f(x), 0 ≤ x ≤ 1, t ≥ 0, u(0, t) = g,   (1)

where f and g are known data independent of time. A semi-discretization of (1) can be written by using SBP operators on upwind form [7]. These operators come in pairs D_± = P^{-1}(Q_± + B/2), with B = e_N e_N^T − e_0 e_0^T, such that: (i) D_+ has order p in the interior and p/2 at the left boundary closure; likewise, D_− has order p in the interior and p/2 at the right boundary; (ii) the diagonal matrix P defines a discrete norm; (iii) Q_+ + Q_−^T = 0; (iv) Q_+ + Q_+^T and Q_− + Q_−^T are positive and negative semi-definite, respectively.
Consider an equidistant grid Ω_1 = {x_j = jΔx, j = 0, 1, 2, . . . , N} on [0, 1], with NΔx = 1. The space discretization of (1) with upwind SBP operators reads

U_t + D_+ U = f − P^{-1} e_0 (U_0 − g),   (2)

where the boundary condition is weakly imposed with a Simultaneous Approximation Term (SAT) [6]. In (2), the vector U = (U_0, U_1, . . . , U_N)^T indicates the approximate solution with U_j(t) ≈ u(x_j, t), and f = (f(x_0), f(x_1), . . . , f(x_N))^T. Moreover, we have used e_0 = (1, 0, . . . , 0)^T. Note that the positive variant of the SBP upwind operator has been used, since the solution of (1) is a wave propagating towards the right boundary. As for the usual centered SBP operators [6,8], it is straightforward to show that (2) leads to stability [7]. Indeed, by letting f = 0 and applying the energy method, i.e. multiplying (2) from the left by U^T P and adding the transpose of the resulting expression, we get

d/dt ‖U‖_P^2 = −U^T (Q_+ + Q_+^T) U − U_0^2 − U_N^2 + 2 U_0 g.

Since Q_+ + Q_+^T is positive semi-definite, ‖U‖_P = (U^T P U)^{1/2} will be bounded by data.

New First Order Upwind SBP Operators
The upwind SBP operators in Definition 1 were derived from second up to ninth order [7]. However, in the present study we also need first order upwind discretizations satisfying the SBP property in order to replicate and extend the TVD-MG technique [4,5]. Such operators (for the positive variant) are given by

D_+ = (1/Δx) [ 0 0 ; −1 1 ; −1 1 ; . . . ; −1 1 ], P = Δx I_{N+1},   (3)

i.e. the backward difference operator with a zero first row. The operator D_+ is first order accurate at all nodes with the exception of the left boundary closure, where the order of accuracy drops to zero. The particular structure of D_+ makes it easy to compute the steady-state solution of (2) by forward substitution as

U_0 = g + Δx f(x_0), U_j = U_{j−1} + Δx f(x_j), j = 1, . . . , N,   (4)

which is the discrete analogue of the steady-state solution of (1),

u(x) = g + ∫_0^x f(s) ds.   (5)

Remark 1 The matrix P in (3) does not represent a consistent quadrature formula, namely it does not correctly integrate constant grid functions. Consistency can be restored by relaxing the constraint that the matrix P defines a discrete norm. In particular, the modified matrix P_+ = Δx · diag(0, 1, . . . , 1) exactly integrates constants, leads to the same D_+ as in (3), and defines a semi-norm. However, this matrix can only be used for the positive variant D_+, since the corresponding negative variant D_− requires P_− = Δx · diag(1, . . . , 1, 0). Hence, to make the matrix P both strictly positive definite and equal for the two variants, the consistency requirement is relaxed. This lack of consistency is in line with the previously derived upwind SBP operators: indeed, the norm P of a pth order SBP upwind first derivative operator exactly integrates only (p − 2)th degree polynomials, if p is odd.
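The forward-substitution structure of the steady state can be sketched as follows; this is a minimal sketch assuming the first order operator D_+ is the backward difference with a zero first row and P = ΔxI (our reading of (3)), with sample data f and g chosen for illustration.

```python
import numpy as np

# Minimal sketch of the steady-state computation (4): the lower triangular
# structure of L1 = D+ + P^{-1} e0 e0^T lets us solve L1 U = F by forward
# substitution. The forcing f below is an arbitrary test choice.
N = 200
dx = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
f = np.cos(np.pi * x / 2)          # sample forcing; int_0^x f = (2/pi) sin(pi x / 2)
g = 0.5                            # boundary data at x = 0

U = np.empty(N + 1)
U[0] = g + dx * f[0]               # row 0 of L1 U = F: U_0 / dx = f_0 + g / dx
for j in range(1, N + 1):
    U[j] = U[j - 1] + dx * f[j]    # row j: (U_j - U_{j-1}) / dx = f_j

exact = g + (2.0 / np.pi) * np.sin(np.pi * x / 2)   # g + int_0^x f, cf. (5)
err = np.max(np.abs(U - exact))    # first order accurate: O(dx)
```

The loop is nothing but a right-endpoint quadrature of (5), which is why the discrete steady state is first order accurate.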

Convergence to Steady-State for Single Grid Methods
The steady state solution (4) can also be found by discretizing (2) in time and computing an approximation of U(t) for large t. By marching in time with Euler Forward (EF) we get

U^{n+1} = U^n − Δt (D_+ + P^{-1} e_0 e_0^T) U^n + Δt F,   (6)

where F = f + g (Δx)^{-1} e_0 and the superscript n denotes the approximation at time t_n = nΔt. The discretization (6) converges to steady-state without spurious oscillations if Δt ≤ Δx. Moreover, it can be recast in compact form by introducing

L_1 = D_+ + P^{-1} e_0 e_0^T   (7)

and

S_1 = I_1 − Δt L_1.   (8)

In (7,8), I_1 is the identity matrix, λ = Δt/Δx denotes the CFL number and the subscript 1 is used to denote quantities on the grid Ω_1. With this notation, the method (6) can be rewritten as

U^{n+1} = S_1 U^n + Δt F_1.   (9)

If the eigenvalues of L_1 have strictly positive real parts, then the convergence of (9) is guaranteed [9,10].

Remark 2
The invertibility of L 1 can be shown for pseudo-spectral approximations [9], but not in general for discretizations based on finite-difference methods [11]. The 1st and 2nd order upwind SBP operators lead to matrices L 1 with triangular and block-triangular structure, respectively, for which invertibility follows in a straightforward way. However, proving this result for a higher order approximation requires a considerable effort.
For a first order upwind operator D_+ and λ = 1, the convergence to steady-state is achieved in N + 1 iterations, since S_1 is strictly lower triangular and hence nilpotent with S_1^{N+1} = 0. Other explicit schemes can of course also be used for the time advancement of (2). For example, the fourth order Runge-Kutta (RK4) scheme leads to the iterative procedure (9) with

S_1 = I_1 − Δt L_1 + (Δt L_1)^2 / 2 − (Δt L_1)^3 / 6 + (Δt L_1)^4 / 24.   (10)

Although this iterative method also converges to the steady-state solution, the EF time marching for first order discretizations leads to faster convergence. In Fig. 1 the approximate solution of (2) with U^0 = (1, . . . , 1)^T, f = 0 and g = 0 is displayed at t = 0.5 for both procedures with λ = 1. The RK4 method damps the initial discontinuity, and thereby slows down the convergence to the steady-state solution U = (0, . . . , 0)^T. Using (10) leads to a transient phase consisting of both a convective part and a damping part. Purely convective convergence is a distinctive feature of first order upwind space discretizations using EF with λ = 1, see Fig. 2. For λ < 1, dissipation effects arise also for EF, but the convergence to steady-state is still faster than with RK4. For these reasons, in the following we use EF as the time-marching method for first order discretizations. For higher order schemes, EF is inappropriate due to its poor stability properties and RK4 is preferred.
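The purely convective EF transient can be checked directly; this sketch assumes the first order operators reconstructed above (backward difference D_+ with zero first row, P = ΔxI), so that with λ = 1 the smoother S_1 reduces to a subdiagonal shift matrix.

```python
import numpy as np

# With lambda = dt/dx = 1, S1 = I - dt*L1 is the subdiagonal shift matrix:
# the residual is transported out of the domain and EF converges to the
# steady state in exactly N + 1 iterations.
N = 16
dx = 1.0 / N
L1 = (np.eye(N + 1) - np.eye(N + 1, k=-1)) / dx   # L1 = D+ + P^{-1} e0 e0^T
S1 = np.eye(N + 1) - dx * L1                      # EF smoother, lambda = 1

nilpotent = np.max(np.abs(np.linalg.matrix_power(S1, N + 1)))  # S1^(N+1) = 0

U = np.ones(N + 1)                 # f = 0, g = 0: steady state is U = 0
for n in range(N + 1):
    U = S1 @ U                     # iteration (9) with F1 = 0
residual = np.max(np.abs(U))
```

Both `nilpotent` and `residual` vanish to machine precision, confirming the N + 1 iteration count.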

Convergence Acceleration for First Order Upwind Schemes
Consider the discrete problem (6), which converges to the steady-state solution U = L_1^{-1} F_1 in N + 1 iterations. The wave propagation speed λ = 1 can be increased by using the following two-level algorithm, which is inspired by the TVD-MG [4,5]:

Fine grid evolution: Ū^{(1)} = S_1 U^{n,(1)} + Δt_1 F_1,
Fine grid residual: r^{(1)} = F_1 − L_1 Ū^{(1)},
Solution injection: V^{0,(2)} = R_u Ū^{(1)},
Coarse grid residual: F_2 = I_r r^{(1)} + L_2 V^{0,(2)},
Coarse grid evolution: V^{m+1,(2)} = S_2 V^{m,(2)} + Δt_2 F_2,
Fine grid update: U^{n+1,(1)} = Ū^{(1)} + I_p (V^{(2)} − V^{0,(2)}).   (11)

Quantities on the fine and coarse grid are indicated with superscripts (1) and (2), respectively. In (11) we have introduced:
- An injection operator R_u which transfers the fine grid values at the included nodes x_j, j = 0, 2, . . . , N, to the coarse grid Ω_2;
- A residual restriction operator I_r in (12) which conveys the information from Ω_1 to Ω_2, and which is upwind-biased [4,5] at the interior nodes and inconsistent at the left boundary node. This somewhat odd choice will be explained later. See Fig. 3 for details;
- A space discretization L_2 = D_{+,2} + P_2^{-1} e_{0,2} e_{0,2}^T and a smoother S_2 = I_2 − Δt_2 L_2 on the coarse grid Ω_2, which are the counterparts of the operators L_1 and S_1 in (7,8) on Ω_1;
- A prolongation operator I_p = I_p^I + I_p^E in the fine grid update step [12], split into two contributions: I_p^I for the nodes included on the coarse grid and I_p^E for the other nodes, see Fig. 3. In particular, the upwind-biased entries of I_p^E in (13) are introduced to avoid overshoots in the transient phase and preserve the TVD property [4].
In (11), the coarse grid evolution step converges to the steady-state solution L_2^{-1} F_2. This vector can be recast as

L_2^{-1} F_2 = L_2^{-1} I_r (F_1 − L_1 Ū^{(1)}) + R_u Ū^{(1)}.

Hence, if the relation

L_2 R_u = I_r L_1   (14)

holds, the steady-state solution for the coarse grid evolution step becomes

L_2^{-1} F_2 = L_2^{-1} I_r F_1 = L_2^{-1} I_r L_1 U = R_u U.

In other words, if (14) holds, the coarse grid problem converges towards the steady-state solution R_u U, i.e. the injection of the steady-state solution U onto the coarse grid nodes. For this reason, (14) plays a crucial role in our multigrid method. Henceforth, we will refer to this condition as the approximation assumption. The convergence of the algorithm in (11) can be studied by rewriting it in compact form as

U^{n+1,(1)} = M U^{n,(1)} + c,   (15)

where c is a vector depending on the data F_1 and, for a single coarse grid sweep,

M = (I_1 − Δt_2 I_p I_r L_1) S_1   (16)

is the multigrid iteration matrix.

Fig. 3 The interpolation operators used in the two-level algorithm (11): the injection operator R_u (top left), the restriction operator I_r (top right), the prolongation operator for the included nodes I_p^I (bottom left) and the prolongation operator for the excluded nodes I_p^E (bottom right). Note that I_r is inconsistent at the left boundary node.
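A minimal two-grid sketch in the spirit of (11) is given below, under our reading of the steps: first order operators on both grids, one EF sweep per grid and λ = 1 everywhere are our choices for illustration, not prescribed by the text.

```python
import numpy as np

# Hedged sketch of one reading of the two-level cycle (11) for the first
# order operators: F2 = Ir r1 + L2 Ru Ubar, so that under the approximation
# assumption (14) the coarse problem targets Ru U.
def first_order_L(N):
    dx = 1.0 / N
    return dx, (np.eye(N + 1) - np.eye(N + 1, k=-1)) / dx

N = 16
Nc = N // 2
dx1, L1 = first_order_L(N)
dx2, L2 = first_order_L(Nc)

Ru = np.zeros((Nc + 1, N + 1))                       # injection at even nodes
Ru[np.arange(Nc + 1), 2 * np.arange(Nc + 1)] = 1.0

Ir = np.zeros((Nc + 1, N + 1))                       # upwind-biased restriction
Ir[0, 0] = 0.5                                       # inconsistent boundary row
for j in range(1, Nc + 1):
    Ir[j, 2 * j - 1] = Ir[j, 2 * j] = 0.5

ok = np.allclose(L2 @ Ru, Ir @ L1)                   # approximation assumption (14)

Ip = np.zeros((N + 1, Nc + 1))
Ip[2 * np.arange(Nc + 1), np.arange(Nc + 1)] = 1.0   # included nodes: injection
Ip[2 * np.arange(Nc) + 1, np.arange(Nc)] = 1.0       # excluded: copy from upwind side

S1 = np.eye(N + 1) - dx1 * L1                        # EF smoothers, lambda = 1
S2 = np.eye(Nc + 1) - dx2 * L2

U = np.ones(N + 1)                                   # F1 = 0, steady state U = 0
for cycle in range(N + 1):
    Ubar = S1 @ U                                    # fine grid evolution
    r1 = -L1 @ Ubar                                  # fine grid residual
    V0 = Ru @ Ubar                                   # solution injection
    F2 = Ir @ r1 + L2 @ V0                           # coarse grid residual
    V = S2 @ V0 + dx2 * F2                           # one coarse grid EF sweep
    U = Ubar + Ip @ (V - V0)                         # fine grid update
resid = np.max(np.abs(U))
```

With these choices the composed iteration matrix is strictly lower triangular, so the cycle also terminates in finitely many iterations.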

Remark 3
The only formal difference between the algorithm in (11) and the conventional two-grid procedure [12] is in the fine-grid update step. Indeed, by changing that step to the usual coarse grid correction of [12], the resulting multigrid iteration matrices for the two approaches have the same structure, i.e. the structure in (16).
Using the first order upwind scheme discretized in time with EF (6) for both the fine- and coarse-grid discretizations, M becomes a lower triangular matrix. It is easy to verify that the multigrid iteration scheme (15) converges for λ ≤ 1, since the eigenvalues of M are given by its diagonal entries. Moreover, for the specific choice λ = 1, the method has a purely convective transient phase and convergence is achieved in exactly N/4 + 1 iterations. It is also possible to apply the grid coarsening recursively to speed up the procedure: for L grids, the multigrid algorithm (17) is obtained by replacing the coarse grid evolution step in (11) with a further two-level cycle between Ω_k and Ω_{k+1}, for k = 2, . . . , L − 1.
In (17), the superscript (m → n) indicates that the interpolation operator transfers information from grid m to grid n. For notational simplicity, this superscript is omitted for operators between the finest and the second finest grid.
The multigrid algorithm (17) converges in N/2^L + 1 iterations (see Fig. 4). These results are in line with those obtained with the TVD-MG [4], where the boundary conditions were strongly imposed.

Remark 5
The multigrid algorithm (17) is referred to as a multiplicative scheme [4,5]. A so-called additive scheme can be obtained by applying all coarse grid corrections simultaneously to the same fine grid residual, rather than sequentially. However, the resulting algorithm turns out to be slower than the multiplicative version and hence only (17) will be considered in the following.

Initial Modifications for Higher Order Discretizations
The matrix-vector notation introduced in (11, 17) can be used to generalize and adapt the multigrid algorithm to higher order discretizations. As a first attempt, we substitute D_+ in (2) with the SBP upwind operators of 3rd, 4th, 5th and 6th order on all grids involved, using the RK4 time integrator with λ = 0.5 on every level. The convergence results in Fig. 5 show that the multigrid procedure either becomes ineffective or ceases to converge when more than 2 grid levels are used.
A second attempt to extend the multigrid procedure to higher order discretizations was made by recalling that the accuracy of the steady state solution does not depend on the accuracy on the coarse grids. Thus, we kept a first order upwind discretization for L k , k = 2, 3, . . .. The convergence plots for the multigrid procedure with high order discretization on the fine grid and first order discretization on coarser grids are shown in Fig. 6. Once again, the multigrid algorithm does not produce satisfactory results, at least not for orders of accuracy higher than 3.
The outcome of these numerical experiments shows that an extension to higher order discretizations requires a deeper understanding of the interpolation operators and further analysis.

Revisiting the Upwind-Biased Residual Restriction
Due to the pointwise notation and the strong imposition of the boundary conditions used for the TVD-MG, the upwind biased operator I_r in (12) was previously presented only for the interior nodes of the discretization. In that case, it can be shown that this residual restriction, for F_1 = 0, leads to [4]

r_j^{(2)} = (r_{2j}^{(1)} + r_{2j−1}^{(1)}) / 2,   (18)

since Δx^{(2)} = 2Δx^{(1)}. On the other hand, the coarse grid residual step of the algorithm (11) can be recast as

F_2 − L_2 V^{0,(2)} = I_r r^{(1)}.   (19)

For first order upwind discretizations, at the interior nodes we can write

(L_2 R_u U)_j = (U_{2j} − U_{2j−2}) / (2Δx^{(1)}) = ((L_1 U)_{2j} + (L_1 U)_{2j−1}) / 2.

Hence, each step of (18) is mimicked by (19) if F_1 = 0. By comparing these two expressions, it is clear that the upwind biased restriction operator [4,5] satisfies L_2 R_u = I_r L_1, i.e. it verifies the approximation assumption (14) exactly at the interior nodes of Ω_2. To extend this property also to the boundary nodes, we compute the boundary rows of I_r from L_2 R_u = I_r L_1, which results in (20) and explains the choice (12), identical to (20). The operator I_r^+ in (12,20) is inconsistent at the left boundary node and matches the previously developed upwind biased restriction at the other nodes.
The derivation of I_r^+ can of course be repeated for propagation problems with left-traveling waves, discretized in space with the negative variant of the first order SBP upwind operators. This leads to a residual restriction operator which is rotated with respect to its positive counterpart, with interior stencil

(I_r^− r^{(1)})_j = (r_{2j}^{(1)} + r_{2j+1}^{(1)}) / 2.

In the following sections we will use I_r^+ and I_r^− for right- and left-traveling waves, respectively. Similarly, the prolongation operators will be denoted with the superscript + or − depending on the direction of propagation.

Extension to Higher Orders of Accuracy
To generalize the multigrid procedure for (1) to higher order accuracy, we recall that a unique restriction operator I_r^+ such that L_2 R_u = I_r^+ L_1 exists if L_1 = D_+ + P^{-1} e_0 e_0^T is invertible. This relation should hold also for the residual restriction between coarser grids. However, since the order of convergence does not depend on the accuracy of the operators on the coarse grids, we choose, as we did in Sect. 3, to use a first order upwind scheme for the coarse grid operators L_i, i = 2, 3, . . .. The obvious advantage is that the residual restriction in (12,20) automatically satisfies the constraint L_{k+1} R_u^{(k→k+1)} = I_r^{(k→k+1)} L_k between consecutive coarse grids. In other words, to accelerate the convergence for higher order discretizations we use the already existing TVD-MG for first order schemes [4,5] on the coarse grids. The crucial modification to the former algorithm is the introduction of new residual interpolation operators between the finest and the second finest grids, which depend on the order of accuracy of L_1.
Note that by demanding I_r^+ = L_2 R_u L_1^{-1} we can simplify the two-grid matrix M in (16) to

M = (I_1 − Δt_2 I_p^+ L_2 R_u) S_1,

where I_p^+ = I_p^I + I_p^{E,+}. This matrix and, in general, the convergence properties of the multigrid scheme are now formally independent of I_r^+. However, to use the algorithm in practice we need to compute F_2 = I_r^+ F_1 and hence we still need the restriction operator I_r^+ in closed form.
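The relation I_r^+ = L_2 R_u L_1^{-1} can be evaluated numerically; the sketch below does this for the first order operators (our reading of (3)), where the result reproduces the closed-form upwind-biased stencil: 1/2 at columns 2j−1 and 2j in the interior, and an inconsistent 1/2 at the left boundary node.

```python
import numpy as np

# Recover the residual restriction from Ir+ = L2 Ru L1^{-1} for the first
# order operators. A transpose trick avoids forming L1^{-1} explicitly.
def first_order_L(N):
    dx = 1.0 / N
    return (np.eye(N + 1) - np.eye(N + 1, k=-1)) / dx

N = 8
L1, L2 = first_order_L(N), first_order_L(N // 2)
Ru = np.zeros((N // 2 + 1, N + 1))
Ru[np.arange(N // 2 + 1), 2 * np.arange(N // 2 + 1)] = 1.0

# solve Ir L1 = L2 Ru for Ir:  L1^T Ir^T = (L2 Ru)^T
Ir = np.linalg.solve(L1.T, (L2 @ Ru).T).T
```

Inspecting `Ir` row by row shows the expected half-weight pairs in the interior and the single 1/2 entry in the first row.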

Prolongation Operators
For the interpolation from coarse to fine grid, we will consider two classes of prolongation operators:
1. The linear prolongation operator I_p^+ in (13).
2. The prolongation operator I_p^+ = I_p^I + I_p^{E,+} such that I_p^I = R_u^T and I_p^{E,+} is obtained from the SBP-preserving relation (22) [12], restricted to the nodes not included on the coarse grid Ω_2.
For simplicity, these prolongation operators will be referred to as SBP-preserving.
The SBP-preserving choice (22) also leads to (13) for first order discretizations. In Sect. 4.4 we will perform a few numerical experiments involving the SBP-preserving prolongation.
However, unless otherwise stated, we will use the linear prolongation operator in the following.

Second Order Discretizations
The second order space discretization of (1) is (2) with D_+ replaced by the second order upwind SBP operator (23). Discretizing (2) in time with RK4 we get, as in the first order case, an iterative solver that converges to an approximation of the steady-state solution (5). To accelerate the convergence, we apply (11) with a first order SBP upwind coarse-grid discretization. The residual restriction operator I_r^+ = L_2 R_u L_1^{-1} can in this case be written in closed form, see (24).
Note that the resulting interpolation is consistent at all nodes except the first one, as for (12). Unlike the first order case, the residual restriction contains nonlocal contributions which increase the computational effort. A possible remedy is to cancel all contributions smaller than a given threshold ε in absolute value, and to round the remaining entries to the nearest multiple of ε. We have chosen ε = 10^{-6}. Figure 7 shows the sparsity patterns of the residual restriction in (24) and of its approximate counterpart.
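The thresholding and rounding step can be sketched as below; rounding "to the nearest multiple of ε" is our reading of the rule described above.

```python
import numpy as np

# Sketch of the sparsification of a nonlocal restriction operator: entries
# below eps in magnitude are dropped, the remaining entries are rounded to
# the nearest multiple of eps (our reading of the rounding rule).
def sparsify(Ir, eps=1.0e-6):
    A = np.where(np.abs(Ir) < eps, 0.0, Ir)
    return np.round(A / eps) * eps

Ir = np.array([[0.4999996, 3.0e-8, 0.2500004]])   # illustrative values
approx = sparsify(Ir)                             # -> [[0.5, 0.0, 0.25]]
```

Since only the leading digits of the stencil survive, the approximate operator is both sparse and grid-size independent in the interior.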

Remark 6
The storage of I_r is not required; it suffices to compute its action on vectors.
In Fig. 8 (left) we compare the convergence of the multigrid algorithm with the exact (24) and the approximate residual restriction operator. The approximate version does not significantly change the convergence results compared to the use of the exact version in (24), and hence it is preferred.

Remark 7
As for the single-grid case mentioned in Sect. 2.2, the convergence to steady-state consists of a convective and a damping phase. The residual restriction I + r = L 2 R u L −1 1 makes the convective phase faster and keeps the damping phase almost unaltered.

Remark 8
The fine grid wave propagation is not TVD for discretizations with order higher than one. As a consequence, our multigrid algorithm is not designed to be TVD preserving.

Extension to Higher Orders
To extend the multigrid algorithm to higher order discretizations, we need to find I_r^+ such that I_r^+ L_1 = L_2 R_u. As in the second order case, it is possible to compute these restriction operators by explicitly inverting the matrix L_1. However, in general the entries of I_r^+ depend on the number of nodes, which makes it unfeasible to represent exactly the restriction operators needed for high order discretizations. On the other hand, two consecutive central rows of I_r^+ differ only slightly from one another. In particular, the farther two rows are from the boundaries, the smaller the difference in absolute value between corresponding entries. Hence, we identify a repeating stencil by canceling all contributions smaller than a given threshold ε in absolute value and by rounding the remaining entries to the nearest multiple of ε.

Remark 9
A repeating stencil structure for the residual restriction operator I + r can be identified only for N greater than a minimum dimension N * , which depends on the threshold chosen. For each order, we found N * through a straightforward trial and error procedure.
We have therefore computed, with tolerance ε = 10^{-6}, the approximate residual restriction operators satisfying I_r^+ L_1 = L_2 R_u for 3rd, 4th, 5th and 6th order fine-grid discretizations. Since these discretizations are upwind biased, the resulting residual restrictions also depend on downwind components. In Fig. 9 the sparsity patterns of these operators are shown for N = 160. Once again, the restriction operators are consistent at all nodes except the first one, due to the particular structure of the first order upwind operator D_+ in (3).
In Fig. 10 we show the convergence plots of the multigrid procedure using the new residual restriction operators I_r^+. Similarly to the first (Fig. 4) and second (Fig. 8) order discretizations, the new residual restrictions lead to L-grid algorithms with approximately 2^L times faster wave propagation, for all orders of accuracy.
For completeness, Table 1 shows the computed and expected orders of convergence for different SBP-SAT upwind discretizations of (1); the computed orders are obtained from the truncation errors on the two finest grids. For a pth order SBP upwind discretization fulfilling Definition 1, the order of convergence is expected to be at least p/2 + 1 [7,8]. Our results show superconvergence for some discretizations and do not contradict the theory. Note that in practice it is not meaningful to aim for machine precision with multigrid algorithms. It is enough to reduce the error below the truncation error of the scheme.

Remark 10
The multigrid algorithm (17) with the new interpolators can also be applied to non-constant coefficient problems, for details see "Appendix A".

The SBP-Preserving Prolongation
The use of the SBP-preserving prolongation instead of (13) leads to the convergence results shown in Fig. 11 for the 3rd, 4th, 5th and 6th order discretizations. In terms of iterations to reach convergence, (22) does not yield any substantial improvement (cf. Fig. 10). However, the cost in terms of arithmetic operations needed for the SBP-preserving prolongation is relatively high compared to the linear prolongation, due to a sparsity pattern comparable to the one of I_r^+. Nonetheless, the prolongation in (22) has the advantage of better stability properties compared to the linear prolongation in (13), hence allowing for slightly increased CFL numbers. In Fig. 12, the convergence results for the 2nd order discretization are shown for both classes of prolongation operators in the fine grid update step (linear prolongation left, SBP-preserving prolongation right), with RK4 time marching and λ = 0.6 on each grid. In this case the SBP-preserving prolongation prevents overshoots in the transient phase of the two-grid procedure. Despite this benefit, the SBP-preserving prolongation remains costly compared to the linear prolongation. Hence, in the following, (13) will be used in the fine grid update step of (11).

Extension to Hyperbolic Systems of Equations
Consider the linear hyperbolic system of equations

u_t + A u_x = f(x), 0 ≤ x ≤ 1, t ≥ 0,   (25)

where A ∈ R^{s×s} is a symmetric matrix with constant coefficients and all the vectors involved have s components. To start with, we associate to (25) the set of characteristic boundary conditions

X_+^T u(0, t) = g_0, X_−^T u(1, t) = g_1,   (26)

where X_+ and X_− indicate the eigenvectors related to the positive and negative eigenvalues of A, respectively. Let A = X Λ X^T be the eigendecomposition of A: we define A_± = X Λ_± X^T = X_± Λ_± X_±^T such that A = A_+ + A_−, where Λ_+ and Λ_− are the matrices consisting of the positive and negative eigenvalues of A. Furthermore, it is convenient to introduce the projection matrices I_s^± = X_± X_±^T in order to split left- and right-traveling waves. The following relations hold:

I_s^+ + I_s^− = I_s, I_s^± A = A I_s^± = A_±,

where I_s ∈ R^{s×s} is the identity matrix. The boundary conditions (26) lead to well-posedness of (25), since for f = 0 the energy method gives

d/dt ‖u‖^2 = −[u^T A u]_0^1,

which is bounded by the boundary data. A semidiscrete SBP-SAT approximation of (25, 26) can be written as [7,13]

U_t + (A_+ ⊗ D_+) U + (A_− ⊗ D_−) U = F + SAT,   (27)

where the SAT terms weakly impose (26) at the two boundaries through e_0 and the vector e_N = (0, . . . , 0, 1)^T. Moreover, in (27) the operator ⊗ denotes the Kronecker product, i.e. A ⊗ B is the block matrix with blocks a_{ij} B. The discrete operator (A_+ ⊗ D_+) + (A_− ⊗ D_−) in (27) can be recast to highlight the relation between the SBP upwind operators (D_+, D_−) and the SBP operators based on centered difference methods (D_c) [8] as

(A_+ ⊗ D_+) + (A_− ⊗ D_−) = A ⊗ D_c + |A| ⊗ P^{-1} S.   (28)

In (28), |A| = A_+ − A_− has non-negative eigenvalues and S = (Q_+ − Q_−)/2 = (Q_+ + Q_+^T)/2 is a positive semi-definite matrix which can be seen as an artificial dissipation term [7,14]. As a consequence, the stability of (27) can be proved [6] in the same way as for the usual SBP operators.
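The eigen-splitting above is easy to verify numerically; the symmetric 3×3 matrix below is an arbitrary stand-in with eigenvalues of both signs, not the Euler matrix from (31).

```python
import numpy as np

# Eigen-splitting A = A+ + A- and projectors Is± = X± X±^T, illustrated on
# an arbitrary symmetric 3x3 test matrix with positive and negative modes.
A = np.array([[1.0, 2.0, 0.0],
              [2.0, 1.0, 1.0],
              [0.0, 1.0, 1.0]])
lam, X = np.linalg.eigh(A)           # A = X diag(lam) X^T
pos = lam > 0
Xp, Xm = X[:, pos], X[:, ~pos]
Ap = Xp @ np.diag(lam[pos]) @ Xp.T   # A+
Am = Xm @ np.diag(lam[~pos]) @ Xm.T  # A-
Isp, Ism = Xp @ Xp.T, Xm @ Xm.T      # projectors Is+, Is-

absA = Ap - Am                       # |A|, non-negative eigenvalues
```

The checks A = A_+ + A_−, I_s^+ + I_s^− = I_s and I_s^+ A = A_+ all hold to machine precision.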

Remark 11
The centered operator D_c = (D_+ + D_−)/2 satisfies the usual SBP property. Indeed, by denoting D_c with P^{-1} Q, we can write

Q = (Q_+ + Q_−)/2 + B/2, B = e_N e_N^T − e_0 e_0^T.

By adding the transpose of Q to itself, we get

Q + Q^T = (Q_+ + Q_−^T)/2 + (Q_− + Q_+^T)/2 + B = B,

which implies that D_c is a discrete operator fulfilling the usual SBP property [6].
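This can be checked with the first order operators; the sketch assumes the backward/forward difference form of D_± with a zero boundary row (our reading of (3)) and P = ΔxI.

```python
import numpy as np

# Check of Remark 11 with the first order operators: the average
# Dc = (D+ + D-)/2, written as P^{-1} Q, satisfies Q + Q^T = B.
N = 6
dx = 1.0 / N
Dp = (np.eye(N + 1) - np.eye(N + 1, k=-1)) / dx
Dp[0, :] = 0.0                      # zero left boundary row of D+
Dm = (np.eye(N + 1, k=1) - np.eye(N + 1)) / dx
Dm[N, :] = 0.0                      # zero right boundary row of D-
P = dx * np.eye(N + 1)

Dc = 0.5 * (Dp + Dm)                # central difference in the interior
Q = P @ Dc
B = np.zeros((N + 1, N + 1))
B[0, 0], B[N, N] = -1.0, 1.0        # B = e_N e_N^T - e_0 e_0^T
```

The assertion Q + Q^T = B confirms that D_c fulfills the usual SBP property.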

Modifications of the Multigrid Procedure for Systems of Equations
To apply the multigrid algorithm to hyperbolic systems, we need to recast the semi-discrete formulation (27) in compact form, as we did for the scalar problem (2). By introducing L_1 = A_+ ⊗ L_1^+ + A_− ⊗ L_1^−, where L_1^± denote the scalar single-grid operators associated with right- and left-traveling waves, we can proceed in a similar manner as for the scalar case, and convergence to steady-state can be accelerated by using the algorithm (11). For the system case, L_2 = A_+ ⊗ L_2^+ + A_− ⊗ L_2^− indicates the coarse-grid counterpart of L_1 and S_i is a matrix representing a time-marching procedure on Ω_i, i = 1, 2. The fine grid update step for the system case can be built from the characteristic modes: applying the idea used for the scalar problem directly to the characteristic components and multiplying from the left by (X ⊗ I_{N+1}), where I_{N+1} ∈ R^{(N+1)×(N+1)} is the identity matrix, the fine grid update can be recast as in (11) by using the following prolongation operators

I_p^I = I_s ⊗ I_p^{I,e}, I_p^E = I_s^+ ⊗ I_p^{E,+} + I_s^− ⊗ I_p^{E,−},   (29)

where I_p^{I,e} indicates the prolongation operator for the included nodes in the scalar case. Likewise, we must also consider the restriction operators

R_u = I_s ⊗ R_u^e, I_r = I_s^+ ⊗ I_r^+ + I_s^− ⊗ I_r^−,   (30)

where R_u^e indicates the injection operator for the scalar case. These restriction operators lead to the following

Lemma 1 The operators in (30) yield I_r L_1 = L_2 R_u for the characteristic boundary conditions (26).

Proof
The result can be proved by computing I_r L_1 as

I_r L_1 = (I_s^+ ⊗ I_r^+ + I_s^− ⊗ I_r^−)(A_+ ⊗ L_1^+ + A_− ⊗ L_1^−) = A_+ ⊗ (I_r^+ L_1^+) + A_− ⊗ (I_r^− L_1^−),

where the mixed terms vanish since I_s^+ A_− = I_s^− A_+ = 0. Since I_r^+ and L_1^+ are the same matrices as in the scalar problem, their product fulfills I_r^+ L_1^+ = L_2^+ R_u^e, and analogously I_r^− L_1^− = L_2^− R_u^e. Hence

I_r L_1 = A_+ ⊗ (L_2^+ R_u^e) + A_− ⊗ (L_2^− R_u^e) = (A_+ ⊗ L_2^+ + A_− ⊗ L_2^−)(I_s ⊗ R_u^e) = L_2 R_u.
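Lemma 1 can also be checked numerically; the sketch below uses our reconstruction of (29, 30), first order operators, and a 2×2 symmetric test matrix (not the Euler matrix). The left-traveling operators are built by mirroring their positive counterparts, which is one way to realize the "rotated" restriction.

```python
import numpy as np

# Numerical check of Ir L1 = L2 Ru for the system case: the Kronecker
# mixed-product rule kills the cross terms Is+ A- and Is- A+, so the
# identity follows from the scalar approximation assumptions.
def Lplus(N):
    dx = 1.0 / N
    return (np.eye(N + 1) - np.eye(N + 1, k=-1)) / dx

def Irplus(N):
    Ir = np.zeros((N // 2 + 1, N + 1))
    Ir[0, 0] = 0.5
    for j in range(1, N // 2 + 1):
        Ir[j, 2 * j - 1] = Ir[j, 2 * j] = 0.5
    return Ir

N = 8
Jf, Jc = np.fliplr(np.eye(N + 1)), np.fliplr(np.eye(N // 2 + 1))
L1p, L2p, Irp = Lplus(N), Lplus(N // 2), Irplus(N)
L1m, L2m = Jf @ L1p @ Jf, Jc @ L2p @ Jc      # mirrored, left-traveling operators
Irm = Jc @ Irp @ Jf                          # "rotated" restriction Ir-
Rue = np.zeros((N // 2 + 1, N + 1))
Rue[np.arange(N // 2 + 1), 2 * np.arange(N // 2 + 1)] = 1.0

A = np.array([[1.0, 2.0], [2.0, 1.0]])       # symmetric, eigenvalues 3 and -1
lam, X = np.linalg.eigh(A)
Xp, Xm = X[:, lam > 0], X[:, lam < 0]
Ap, Am = 3.0 * Xp @ Xp.T, -1.0 * Xm @ Xm.T   # A+ and A-
Isp, Ism = Xp @ Xp.T, Xm @ Xm.T              # projectors

L1 = np.kron(Ap, L1p) + np.kron(Am, L1m)
L2 = np.kron(Ap, L2p) + np.kron(Am, L2m)
Ir = np.kron(Isp, Irp) + np.kron(Ism, Irm)
Ru = np.kron(np.eye(2), Rue)
```

Both the scalar identities and the system identity I_r L_1 = L_2 R_u hold to machine precision.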

Remark 12
The residual restriction I_r in (30), which consists of the projection matrices I_s^±, has the advantage of making the mixed contributions in I_r L_1 disappear. Lemma 1 implies that the operators I_r^+ and I_r^−, previously computed for the scalar case, can be used to write a residual restriction I_r which verifies the approximation assumption (14). This result suggests that the algorithm for the system case behaves similarly to the scalar constant coefficient case for problems with characteristic boundary conditions such as (25, 26).

Remark 13
The general L-grid algorithm (17) can be similarly used for the system case.

Numerical Results for Characteristic Boundary Conditions
To test the algorithm (17) for the system case, we study the linearized one-dimensional symmetrized form of the compressible Euler equations (31) [15], where the symmetric matrix A in (31) is built with u = 1, c = 2, ρ = 1 and γ = 1.4. The variables ρ, u and θ are the density, velocity and temperature perturbations of the fluid, respectively, and the characteristic boundary conditions (26) take the form (32). We consider a manufactured solution with parameters ν = 0.1 and ξ = 2, and random initial data compatible with the boundary conditions. The multigrid convergence results for the 1st, 2nd, 3rd, 4th, 5th and 6th order upwind discretizations of (31), (32) are shown in Fig. 13. As expected, applying the algorithm in (17) makes the wave propagation faster by a factor of 2^L. Note that for the two-grid algorithm applied to the 2nd order discretization, overshoots occur. As mentioned in Sect. 4.4, this side effect can be eliminated by using the SBP-preserving prolongation (22) in the fine-grid update step of (11).

Numerical Results for Other Boundary Conditions
Other sets of boundary conditions can also lead to a well-posed problem. Consider a rotation A = Y Ω Y^T of the matrix A, given in (33). Since we consider a subsonic flow (u < c), two boundary conditions must be imposed at x = 0, while the remaining boundary condition is set at x = 1. In particular, the boundary conditions (34) lead to a well-posed problem. The corresponding semi-discrete formulation (35) can be shown to be stable [13] (it can be rewritten in terms of the central difference operators D_c due to (28)). Applying the L-grid algorithm (17) to (35) leads to the convergence results in Fig. 14. Lemma 1 does not hold in this case, and I_r in (30) fulfills I_r L_1 = L_2 R_u only at the interior nodes. Despite this fact, the multigrid procedure leads to faster convergence for all orders of accuracy. The convergence to steady-state is slower than with the characteristic boundary conditions in (32) (cf. Fig. 13), since the non-characteristic boundary conditions in (34) are reflecting [16]. In particular, the reflective effects seem to dominate over the wave propagation for both the single- and multi-grid iterative methods. Since the proposed multigrid method is designed to accelerate wave propagation, slower convergence is somewhat expected.
More generally, the boundary conditions (36) lead to well-posedness of (31) if the condition (37) on the boundary matrices holds. If (37) holds, the semi-discrete SBP-SAT approximation of (31), (36) is stable [13]. As an example, the matrices in (38) verify (37) for the rotation (33) and hence can be used in (36) to produce stable boundary conditions for (31). The convergence results for the multigrid algorithm in this case are shown in Fig. 15. Once again, convergence to steady-state is accelerated proportionally to the number of grids used, but it is slower than with R_0 = R_1 = 0 as in (34) (cf. Fig. 14).

Extension to Nonlinear Problems
Consider the nonlinear conservation law (39), u_t + f(u)_x = 0, with appropriate Dirichlet boundary conditions. In order to extend the multigrid algorithm (17) to problems such as (39), we must modify both the residual restriction I_r and the prolongation for the excluded nodes I_p^E. Since the characteristic directions change from grid point to grid point, these interpolation operators must be constructed by solving local propagation problems. The extension of I_r and I_p^E to nonlinear problems was first presented in [4] for first order schemes. Here, we generalize this technique to higher orders.

Interpolation Operators for the Nonlinear Case
We start by considering the residual restriction operator I_r in the two-grid algorithm (11). Since the nonlinear problem (39) can be rewritten as u_t + f′(u) u_x = 0, the sign of f′(u) determines the direction of the wave propagation. As a consequence, I_r r^{(1)} depends on the sign of f′ evaluated at U^{(1)}.
Consider the node x_{2j}^{(1)}, which is included in the coarse grid. If the sign of f′ does not change in a neighborhood of this node, it is easy to identify the direction of the upwind-biased restriction. In particular, if both f′(U_{2j−1}^{(1)}) and f′(U_{2j+1}^{(1)}) are non-negative, the problem gives rise to a right-traveling wave and the residual restriction at x_{2j}^{(2)} is computed with I_r^+. If instead both f′(U_{2j−1}^{(1)}) and f′(U_{2j+1}^{(1)}) are non-positive, the problem propagates from right to left and the restriction at x_{2j}^{(2)} is computed with I_r^−. In Fig. 16, these two cases are illustrated for the first order residual restriction operators I_r^±. Sign changes of f′ yield either shocks or rarefaction fans. For example, if f′(U_{2j−1}^{(1)}) is positive and f′(U_{2j+1}^{(1)}) is negative, a discontinuity is expected to occur in a neighborhood of x_{2j}^{(2)}. In this case, the sign of f′(U_{2j}^{(1)}) determines the direction of the residual restriction I_r, see Fig. 17. Vice versa, rarefaction fans occur when f′(U_{2j−1}^{(1)}) is negative and f′(U_{2j+1}^{(1)}) is positive. In that case, the node x_{2j}^{(2)} is not reached by any traveling wave and the residual restriction acts locally as the injection operator R_u, see Fig. 18.
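The case analysis above can be sketched as follows; this is a hedged reading of the sign-based selection in (40), with first order stencils, and the handling of the two boundary rows is a placeholder choice of ours, not taken from the text.

```python
import numpy as np

# Sketch of the sign-based selection for the nonlinear residual restriction:
# at each included node x_{2j} the direction of the first order upwind-biased
# stencil is chosen from the sign of f'(U) at the fine grid neighbours.
def restrict_nonlinear(r, dfdu):
    Nc = (len(r) - 1) // 2
    rc = np.empty(Nc + 1)
    rc[0], rc[Nc] = r[0], r[-1]                  # placeholder boundary choice
    for j in range(1, Nc):
        a, b = dfdu[2 * j - 1], dfdu[2 * j + 1]
        if a >= 0.0 and b >= 0.0:                # right-traveling: use Ir+
            rc[j] = 0.5 * (r[2 * j - 1] + r[2 * j])
        elif a <= 0.0 and b <= 0.0:              # left-traveling: use Ir-
            rc[j] = 0.5 * (r[2 * j] + r[2 * j + 1])
        elif a > 0.0 > b:                        # shock: sign at 2j decides
            rc[j] = 0.5 * (r[2 * j - 1] + r[2 * j]) if dfdu[2 * j] >= 0.0 \
                else 0.5 * (r[2 * j] + r[2 * j + 1])
        else:                                    # rarefaction fan: injection
            rc[j] = r[2 * j]
    return rc

r = np.arange(9.0)                               # N = 8, Nc = 4
right = restrict_nonlinear(r, np.ones(9))        # reduces to Ir+ in the interior
fan = restrict_nonlinear(r, np.array([1., 1., 1., -1., 0., 1., 1., 1., 1.]))
```

With uniformly positive f′ the function reproduces the interior stencil of I_r^+, while the sign change around node 4 triggers the injection branch there.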
If instead $f'(U^{(1)}_{2j-1}) \le 0$ and $f'(U^{(1)}_{2j+1}) \ge 0$, the node $x^{(2)}_{2j}$ is not reached by any traveling wave, and as a result, the residual restriction acts locally as an injection operator. The L-grid algorithm for nonlinear problems is based on the interpolation operators given by (40) and (41).
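The case distinction above can be sketched as a nodewise selection rule. The operator weights below (summing the two upwind residuals, or plain injection) are illustrative stand-ins for the first order operators $I_r^+$, $I_r^-$ and $R_u$, not their exact stencils:

```python
import numpy as np

def restrict_residual_nonlinear(r_fine, U_fine, dfdu):
    """Upwind-biased residual restriction for a nonlinear conservation law.

    A first order sketch: r_fine and U_fine live on N+1 fine-grid nodes
    (N even); dfdu is the flux derivative f'(u). The weights are
    illustrative, not the paper's exact stencils.
    """
    N = len(r_fine) - 1
    r_coarse = np.zeros(N // 2 + 1)
    fp = dfdu(np.asarray(U_fine))          # characteristic speeds f'(U)
    for j in range(N // 2 + 1):
        i = 2 * j                          # fine-grid index of coarse node j
        if i == 0 or i == N:               # crude boundary treatment: inject
            r_coarse[j] = r_fine[i]
            continue
        left, right = fp[i - 1], fp[i + 1]
        if left >= 0 and right >= 0:       # right-traveling wave: use I_r^+
            r_coarse[j] = r_fine[i - 1] + r_fine[i]
        elif left <= 0 and right <= 0:     # left-traveling wave: use I_r^-
            r_coarse[j] = r_fine[i] + r_fine[i + 1]
        elif left > 0 and right < 0:       # shock: sign of f'(U_2j) decides
            if fp[i] >= 0:
                r_coarse[j] = r_fine[i - 1] + r_fine[i]
            else:
                r_coarse[j] = r_fine[i] + r_fine[i + 1]
        else:                              # rarefaction fan: injection R_u
            r_coarse[j] = r_fine[i]
    return r_coarse
```

For Burgers' equation one would pass `dfdu = lambda u: u`, so the restriction direction at each coarse node follows the sign of the solution at its fine-grid neighbors.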

A Stable Upwind SBP-SAT Spatial Discretization of the Burgers' Equation
As an example of a nonlinear conservation law, we consider the Burgers' equation (42) with given boundary data $g_0$, $g_N$. Applying the energy method to (43) with zero boundary data yields the energy rate (44), which leads to stability of (43) since $S$ is positive semi-definite. The orders of convergence of the semi-discrete problem (43) are shown in Table 2. The truncation errors were computed on the two finest grids by using the smooth steady manufactured solution $u(x) = e^{-x}$, which is a solution to the inhomogeneous problem $u_t + u u_x = -e^{-2x}$. The manufactured solution was also used to provide boundary and initial data. For each test, the steady-state solution was computed with 5000 iterations of the fourth order Runge-Kutta method with $\lambda = 0.5$. The orders of convergence match the expected order $p/2 + 1$ for a discretization involving $p$th order SBP upwind operators [7,8].
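The convergence study can be mimicked with a minimal pseudo-time marching code. The sketch below uses a plain first order upwind difference and forward Euler (simplified stand-ins for the paper's SBP-SAT upwind operators and fourth order Runge-Kutta) to drive $u_t + uu_x = -e^{-2x}$ on $[0,1]$ to the manufactured steady state $u(x) = e^{-x}$:

```python
import numpy as np

def burgers_steady(N, iters=5000, lam=0.5):
    """March u_t + u u_x = -exp(-2x) on [0,1] to steady state.

    A minimal sketch: first order upwind in space, forward Euler in
    pseudo-time. The manufactured steady solution is exp(-x), which
    also provides the inflow data at x = 0.
    """
    x = np.linspace(0.0, 1.0, N + 1)
    h = 1.0 / N
    u = np.ones(N + 1)                  # initial guess
    f = -np.exp(-2.0 * x)               # forcing for u(x) = exp(-x)
    dt = lam * h                        # CFL-type step; here 0 < u <= 1
    for _ in range(iters):
        du = np.zeros(N + 1)
        # backward difference is the upwind choice since u > 0
        du[1:] = -u[1:] * (u[1:] - u[:-1]) / h + f[1:]
        u = u + dt * du
        u[0] = 1.0                      # Dirichlet inflow: u(0) = exp(0)
    return x, u
```

For this first order scheme the steady error scales like $O(h)$; the upwind SBP operators of the paper instead yield order $p/2 + 1$.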

Numerical Results for the Burgers' Equation
The convergence to steady-state of the spatial discretization (43) can be accelerated by using the multigrid algorithm (17) modified with the interpolation operators given in (40) and (41). Here, we consider the two test cases used in [4].
As a first example, we consider a shock problem with the initial data $u_0(x)$ given in (45). Both $u_0(x)$ and the analytical steady-state solution for the Burgers' equation (42) are shown in Fig. 19. Due to the discontinuity at $x = 1/2$, the steady-state solution of the semi-discrete problem (43) differs slightly from the analytical one in a neighborhood of $x = 1/2$, see Fig. 20. Likewise, the multigrid solutions exhibit minor differences with respect to each other. Note that we have not used any specific shock treatment in these calculations. Since the steady-state solution changes slightly from case to case, the convergence plots below show the relative change $\|U^{n+1} - U^n\|_P$ in the $P$-norm. The multigrid convergence for the Burgers' equation with the discontinuous initial data $u_0(x)$ in (45) is shown in Fig. 21. As for linear problems, the multigrid algorithm leads to approximately $2^L$ times faster wave propagation, with the only exception of the two-grid procedure applied to the second order discretization. In this case, the convergence is only twice as fast as that of the single-grid method.

Next, we consider the initial data $u_0(x) = \cos(5\pi x)$, which develops both shocks and rarefactions and leads to the same analytical steady-state solution as before, see Fig. 22. The multigrid convergence for this problem is shown in Fig. 23. Furthermore, in Fig. 24 we show the multigrid solution with five grid levels at different iterations for a fifth order discretization. The algorithm (17) with the nonlinear modifications in (40) and (41) leads to approximately $2^L$ times faster wave propagation only in the first order case. For higher orders, the speedup factor drops to approximately $2^{L-1}$.
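The convergence monitor $\|U^{n+1} - U^n\|_P$ takes only a few lines; the diagonal quadrature matrix $P$ with halved boundary weights below is a hypothetical example, used purely for illustration:

```python
import numpy as np

def p_norm(v, P):
    """Discrete norm ||v||_P = sqrt(v^T P v) for a positive definite P."""
    return float(np.sqrt(v @ (P @ v)))

# Example: diagonal quadrature with halved boundary weights on N+1 nodes
N = 8
h = 1.0 / N
w = np.full(N + 1, h)
w[0] = w[-1] = h / 2
P = np.diag(w)

U_old = np.zeros(N + 1)
U_new = np.full(N + 1, 1e-3)
change = p_norm(U_new - U_old, P)   # relative change between iterates
```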

Remark 15
The approximation assumption (14) holds only approximately for the nonlinear residual restriction in (40) applied to the SBP upwind discretization in (43). Nonetheless, we could in some cases obtain the optimal speedup factor. This result suggests that fulfilling the approximation assumption (14) is not strictly necessary in order to obtain L-grid procedures with $2^L$ times faster wave propagation.

Two-Dimensional Problems
Consider the linear advection problem (46) in two space dimensions, with positive constants $a$ and $b$.

Remark 16
Multigrid techniques for two-dimensional hyperbolic problems require a double full-coarsening [5]; for details, see "Appendix B". As a first attempt to accelerate the convergence to steady-state, the residual restriction is computed as a linear combination of the one-dimensional operators, as in [5].
The resulting multigrid convergence to the manufactured steady-state solution $u(x, y) = \cos(x^2 + y^2)$ is shown in Fig. 25 for $a = b = 1$ in (46). Convergence is achieved for one, two and three grid levels. For the three-level procedure, even though the wave propagation is accelerated by a factor of $2^3 = 8$ as expected, the convergence to steady-state is essentially the same as that of the two-level algorithm. Furthermore, overshoots occur for more than three grid levels. These spurious oscillations both slow down the propagation and lead to an inaccurate steady-state solution.

The multigrid algorithm with the residual restriction operators satisfying the approximation assumption (14), on the other hand, prevents overshoots in all the L-level procedures. Moreover, it provides $2^L$ times faster wave propagation, see Fig. 26. However, the convergence rate becomes significantly worse for higher order fine-grid discretizations, which makes the algorithm less effective. We envision that other interpolation operators in (17) might both overcome this drawback and prevent overshoots in the transient phase.

Conclusions and Future Work
In this paper, we have replicated and extended the first order accurate TVD-MG scheme [4,5] to upwind Summation-By-Parts (SBP) based high-order accurate finite difference methods [6,7] for linear hyperbolic problems. We have reinterpreted the upwind biased interpolation from a matrix-vector perspective, leading to more general residual restriction operators. These new operators, which satisfy the approximation assumption, allowed us to complement the wave propagation for high-order discretizations with the TVD-MG scheme for first order approximations on coarse grids.
Furthermore, we have shown that the restriction operators needed to accelerate wave propagation for the linear advection equation can be used to improve the procedure for hyperbolic systems. For characteristic boundary conditions, the new multigrid scheme leads to wave propagation with increasing speed as the number of grids increases. For other stable boundary conditions, the effect of this algorithm is to increase the convergence rate.
We have also extended the nonlinear modification of the TVD-MG scheme to higher order SBP-SAT upwind discretizations. Convergence is achieved for all the orders of accuracy, even when dealing with shocks and rarefaction fans. For almost all the one-dimensional (linear and nonlinear) convection-dominated problems that we have investigated, the L-grid algorithm led to $2^L$ times faster wave propagation. The speedup factor dropped to $2^{L-1}$ only for the multigrid procedure applied to the Burgers' equation with both shocks and rarefactions.
Finally, two-dimensional problems have been studied. We have shown that fulfilling the approximation assumption is a sufficient condition for obtaining a speedup factor of 2 L for the wave propagation. However, the resulting procedure is costly. More research is needed in order to design two-dimensional multigrid algorithms which prevent overshoots and do not require the approximation assumption to hold.
Similarly to the previous spatial discretization (2), this problem can be proved to be stable in the $A^{-1}P$-norm. Consider the advection coefficient (50), the forcing function $f(x) = 2 + xe^{-x^2}$ and the boundary data $g = 1$. The advection coefficient (50) and the corresponding steady-state solution to (48) are shown in Fig. 27. For this problem, the approximation assumption (14) is not strictly satisfied. Nonetheless, the convergence results for the single- and L-grid procedures (17) in Fig. 28 show that the wave propagation is accelerated approximately by a factor of $2^L$, as for the constant coefficient problem (1).

B The Double Full-Coarsening for Two-Dimensional Problems
Consider the one-dimensional advection problem (1) in the two-dimensional domain $(0, 1)^2$. The standard full-coarsening (shown in the left part of Fig. 29) suggests that the wave propagation, which in this case moves the signal towards the right boundary, cannot be accelerated by multigrid along the lines consisting of only excluded nodes. To address this issue, a second full-coarsening is needed (see the middle part of Fig. 29). By combining these two coarse grids, one obtains a double full-coarsening (as in the right part of Fig. 29) which allows for wave propagation through all the lines of the fine grid. As a consequence, two-dimensional problems require $2^{k-1}$ grids at the $k$th level and a total of $2^L - 1$ grids in order to accelerate wave propagation with an L-level algorithm. Analogously, a three-dimensional problem would need $2^{2(k-1)}$ grids at the $k$th level and a total of $(4^L - 1)/3$ grids [5].
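The grid counts above follow from summing the levels; a small check, with the per-level counts taken directly from the text:

```python
def total_grids_2d(L):
    """Total grids for an L-level scheme in 2D: sum of 2^(k-1) per level."""
    return sum(2 ** (k - 1) for k in range(1, L + 1))   # equals 2^L - 1

def total_grids_3d(L):
    """Total grids for an L-level scheme in 3D: sum of 2^(2(k-1)) per level."""
    return sum(4 ** (k - 1) for k in range(1, L + 1))   # equals (4^L - 1)/3
```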
The double full-coarsening also allows for an easier interpretation of the fine-grid update step in (17) for two-dimensional problems: the solution at the excluded nodes is reconstructed by interpolating the known data from the neighboring points. As in the one-dimensional case, the direction of the interpolation is determined by the direction of the wave propagation. Note that the second full-coarsening in Fig. 29 cannot include the boundary nodes. Indeed, a staggered grid retaining the boundary nodes would lead to the same CFL limitation as the fine grid. Due to this constraint, only a limited number of interpolation operators must be derived in order to accelerate the convergence of $u_t + u_x = f$ to steady-state. For this problem, which propagates the modes along the x-lines, we consider the interpolation operators

$$I_r = I_{r,x} \otimes R_{u,y}, \quad I_p^E = I_{p,x}^E \otimes I_{p,y}^I, \quad I_p^I = I_{p,x}^I \otimes I_{p,y}^I, \quad R_u = R_{u,x} \otimes R_{u,y},$$

where $I_{r,\alpha}$, $R_{u,\alpha}$, $I_{p,\alpha}^I$, $I_{p,\alpha}^E$ are the one-dimensional interpolation operators in the coordinate direction $\alpha \in \{x, y\}$. These operators depend on the kind of coarsening considered; the four possible configurations are shown in Fig. 30. We present the operators only for the configuration where the boundary nodes are included in both the fine and coarse grids. With a similar procedure it is possible to obtain the residual restrictions for the other three cases as well.

Fig. 29 In the left and middle figures, two single full-coarsenings of a fine grid are presented. The white dots are not included in any coarse grid, the black dots belong to the standard full-coarsened grid, whereas the staggered full-coarsened grid consists of the crosses. The right figure shows a double full-coarsening, which is obtained by combining the first two.
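The tensor-product construction can be reproduced with `np.kron`. The toy 1D stencils below (an injection $R_u$ and an upwind-biased sum $I_r$ on a 5-node fine / 3-node coarse grid) are illustrative, not the paper's exact operators:

```python
import numpy as np

# Injection R_u: keep every other fine-grid node
R_u_1d = np.array([[1., 0., 0., 0., 0.],
                   [0., 0., 1., 0., 0.],
                   [0., 0., 0., 0., 1.]])

# Upwind-biased restriction I_r for a right-traveling wave: each interior
# coarse node collects its own and its left neighbor's residual
I_r_1d = np.array([[1., 0., 0., 0., 0.],
                   [0., 1., 1., 0., 0.],
                   [0., 0., 0., 1., 1.]])

# 2D residual restriction for u_t + u_x = f: upwind in x, injection in y
I_r_2d = np.kron(I_r_1d, R_u_1d)     # maps a 5x5 grid to a 3x3 grid

r_fine = np.ones(25)                 # fine-grid residual, row-major in x, y
r_coarse = I_r_2d @ r_fine
```

Each coarse residual is the product of the corresponding 1D row sums, so the x-upwind weighting is applied along every retained y-line while the other lines are simply dropped.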

Remark 17
Our multigrid method makes use of high order wave propagation on the fine grid, while the first order discretization is used on all the coarse grids. Hence, the interpolation operators conveying the information between grids with an even number of nodes are uniquely determined by the direction of the wave propagation.
The modified two-dimensional L-level algorithm for $u_t + u_x = f$ is studied in Fig. 31 for both the single standard and the double full-coarsening. The standard coarsening fails to accelerate the wave propagation, as expected. The double coarsening leads to $2^L$ times faster wave propagation, as also demonstrated in [5] for first order discretizations. However, the convergence to steady-state is rather slow.